<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="https://hdl.handle.net/20.500.11811/12848">
<title>Institut für Medizindidaktik</title>
<link>https://hdl.handle.net/20.500.11811/12848</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://hdl.handle.net/20.500.11811/13554"/>
<rdf:li rdf:resource="https://hdl.handle.net/20.500.11811/13348"/>
</rdf:Seq>
</items>
<dc:date>2026-04-10T21:51:31Z</dc:date>
</channel>
<item rdf:about="https://hdl.handle.net/20.500.11811/13554">
<title>Development of the "Scale for the assessment of non-experts' AI literacy"</title>
<link>https://hdl.handle.net/20.500.11811/13554</link>
<description>Development of the "Scale for the assessment of non-experts' AI literacy"
Laupichler, Matthias Carl; Aster, Alexandra; Haverkamp, Nicolas; Raupach, Tobias
Artificial Intelligence competencies will become increasingly important in the near future. Therefore, it is essential that the AI literacy of individuals can be assessed in a valid and reliable way. This study presents the development of the "Scale for the assessment of non-experts' AI literacy" (SNAIL). An existing AI literacy item set was distributed as an online questionnaire to a heterogeneous group of non-experts (i.e., individuals without a formal AI or computer science education). Based on the data collected, an exploratory factor analysis was conducted to investigate the underlying latent factor structure. The results indicated that a three-factor model had the best model fit. The individual factors reflected AI competencies in the areas of "Technical Understanding", "Critical Appraisal", and "Practical Application". In addition, eight items from the original questionnaire were deleted based on high intercorrelations and low communalities to reduce the length of the questionnaire. The final SNAIL questionnaire consists of 31 items that can be used to assess the AI literacy of individual non-experts or specific groups and is also designed to enable the evaluation of AI literacy courses’ teaching effectiveness.
</description>
<dc:date>2023-09-27T00:00:00Z</dc:date>
</item>
<item rdf:about="https://hdl.handle.net/20.500.11811/13348">
<title>Evaluating AI Courses</title>
<link>https://hdl.handle.net/20.500.11811/13348</link>
<description>Evaluating AI Courses
Laupichler, Matthias Carl; Aster, Alexandra; Perschewski, Jan-Ole; Schleiss, Johannes
A growing number of courses seek to increase the basic artificial-intelligence skills ("AI literacy") of their participants. At this time, there is no valid and reliable measurement tool that can be used to assess AI learning gains. However, such a tool would be important to enable quality assurance and comparability. In this study, a validated AI literacy assessment instrument, the "Scale for the assessment of non-experts' AI literacy" (SNAIL), was adapted and used to evaluate an undergraduate AI course. We investigated whether the scale can be used to reliably evaluate AI courses and whether mediator variables, such as attitudes toward AI or participation in other AI courses, had an influence on learning gains. In addition to traditional mean comparisons (i.e., t-tests), the comparative self-assessment (CSA) gain was calculated, which allowed for a more meaningful assessment of the increase in AI literacy. We found preliminary evidence that the adapted SNAIL questionnaire enables a valid evaluation of AI learning gains. In particular, distinctions among different subconstructs and the differentiation from related constructs, such as attitudes toward AI, seem to be possible with the help of the SNAIL questionnaire.
</description>
<dc:date>2023-09-26T00:00:00Z</dc:date>
</item>
</rdf:RDF>
