
Evaluating AI Courses

A Valid and Reliable Instrument for Assessing Artificial-Intelligence Learning through Comparative Self-Assessment

dc.contributor.author: Laupichler, Matthias Carl
dc.contributor.author: Aster, Alexandra
dc.contributor.author: Perschewski, Jan-Ole
dc.contributor.author: Schleiss, Johannes
dc.date.accessioned: 2025-08-12T08:10:17Z
dc.date.available: 2025-08-12T08:10:17Z
dc.date.issued: 2023-09-26
dc.identifier.uri: https://hdl.handle.net/20.500.11811/13348
dc.description.abstract: A growing number of courses seek to increase the basic artificial-intelligence skills ("AI literacy") of their participants. At present, there is no valid and reliable measurement tool for assessing AI-learning gains, although such a tool would be important for quality assurance and comparability. In this study, a validated AI-literacy-assessment instrument, the "scale for the assessment of non-experts' AI literacy" (SNAIL), was adapted and used to evaluate an undergraduate AI course. We investigated whether the scale can be used to reliably evaluate AI courses and whether mediator variables, such as attitudes toward AI or participation in other AI courses, influenced learning gains. In addition to traditional mean comparisons (i.e., t-tests), the comparative self-assessment (CSA) gain was calculated, which allowed for a more meaningful assessment of the increase in AI literacy. We found preliminary evidence that the adapted SNAIL questionnaire enables a valid evaluation of AI-learning gains. In particular, distinctions among different subconstructs, as well as differentiation from related constructs such as attitudes toward AI, appear to be possible with the help of the SNAIL questionnaire.
dc.format.extent: 12
dc.language.iso: eng
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: AI literacy
dc.subject: AI-literacy scale
dc.subject: artificial intelligence education
dc.subject: assessment
dc.subject: course evaluation
dc.subject: comparative self-assessment
dc.subject.ddc: 370 Education
dc.subject.ddc: 600 Technology
dc.title: Evaluating AI Courses
dc.title.alternative: A Valid and Reliable Instrument for Assessing Artificial-Intelligence Learning through Comparative Self-Assessment
dc.type: Scientific article
dc.publisher.name: MDPI
dc.publisher.location: Basel
dc.rights.accessRights: openAccess
dcterms.bibliographicCitation.volume: 2023, vol. 13
dcterms.bibliographicCitation.issue: iss. 10, art. 978
dcterms.bibliographicCitation.pagestart: 1
dcterms.bibliographicCitation.pageend: 12
dc.relation.doi: https://doi.org/10.3390/educsci13100978
dcterms.bibliographicCitation.journaltitle: Education Sciences
ulbbn.pubtype: Secondary publication
dc.version: publishedVersion
ulbbn.sponsorship.oa: Unifund OA funding, University of Bonn

