
Meta-Analyses on the Validity of Verbal Tools for Credibility Assessment

dc.contributor.advisor: Banse, Rainer
dc.contributor.author: Oberlader, Verena
dc.date.accessioned: 2020-04-27T07:13:43Z
dc.date.available: 2020-04-27T07:13:43Z
dc.date.issued: 06.12.2019
dc.identifier.uri: https://hdl.handle.net/20.500.11811/8161
dc.description.abstract: Since ancient times, approaches to distinguishing between true and deceptive statements have been of particular importance in the context of court decisions. However, the applicability of most psychophysiological and behavioral measures of deception is critically debated. Verbal tools for credibility assessment are nonetheless widely used. They rest on the assumption that the quality of experience-based statements differs from the quality of fabricated accounts. To test the validity of two prominent procedures, Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM), Meta-Analysis 1 applied a random-effects meta-analysis (REMA) to 52 English- and German-language studies. The REMA revealed a large point estimate, with moderate to large effect sizes in the confidence interval. This finding applied to both CBCA and RM, although (1) there was a high level of heterogeneity between studies that could not be resolved by moderator analyses, and (2) it could not be ruled out that the effect size estimates were biased and that verbal tools for credibility assessment therefore work only to a smaller extent. A recent simulation study, however, cast doubt on these findings: it showed that the meta-analytic methods used in Meta-Analysis 1 can produce false-positive rates of up to 100% when data sets are biased. To test the robustness of the previous findings, Meta-Analysis 2 reanalyzed an updated set of 71 studies with different bias-correcting meta-analytic methods. The overall effect size estimates ranged from a null effect to conventionally large effect sizes. Taking into account the specific strengths and limitations of each meta-analytic method, the results indicate that CBCA and RM distinguish between experience-based and fabricated statements with moderate to large effect sizes. In contrast, Scientific Content Analysis (SCAN), a third verbal tool for credibility assessment that was also tested on the updated data set of Meta-Analysis 2, did not discriminate between truth and lies and should thus not be used in practice.
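For readers unfamiliar with the random-effects meta-analysis (REMA) mentioned in the abstract, the following minimal Python sketch illustrates the general idea of pooling per-study effect sizes while allowing for between-study variance (here estimated with the DerSimonian-Laird method). It is an illustrative sketch under these assumptions, not the analysis code of the dissertation, and the example effect sizes are hypothetical.

import numpy as np

def random_effects_meta(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    effects   -- per-study effect size estimates (e.g., Cohen's d)
    variances -- their sampling variances
    Returns the pooled estimate, its 95% confidence interval, and tau^2.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)

    # Fixed-effect weights and pooled mean, needed for the heterogeneity statistic Q.
    w = 1.0 / v
    mean_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mean_fe) ** 2)

    # DerSimonian-Laird estimate of the between-study variance tau^2 (truncated at 0).
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects weights incorporate tau^2; they yield the pooled estimate and its CI.
    w_re = 1.0 / (v + tau2)
    mean_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = (mean_re - 1.96 * se_re, mean_re + 1.96 * se_re)
    return mean_re, ci, tau2

# Hypothetical effect sizes (Cohen's d) and sampling variances for five studies.
d = [0.8, 1.1, 0.5, 1.4, 0.9]
var = [0.04, 0.09, 0.06, 0.12, 0.05]
pooled, (ci_low, ci_high), tau2 = random_effects_meta(d, var)
print(f"pooled d = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], tau^2 = {tau2:.3f}")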
dc.language.iso: eng
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Inhaltsanalytische Glaubhaftigkeitsbeurteilung
dc.subject: CBCA
dc.subject: RM
dc.subject: SCAN
dc.subject: Meta-Analyse
dc.subject: content-based credibility assessment
dc.subject: meta-analysis
dc.subject.ddc: 150 Psychology
dc.title: Meta-Analyses on the Validity of Verbal Tools for Credibility Assessment
dc.type: Dissertation or Habilitation
dc.publisher.name: Universitäts- und Landesbibliothek Bonn
dc.publisher.location: Bonn
dc.rights.accessRights: openAccess
dc.identifier.urn: https://nbn-resolving.org/urn:nbn:de:hbz:5-56794
ulbbn.pubtype: First publication (Erstveröffentlichung)
ulbbnediss.affiliation.name: Rheinische Friedrich-Wilhelms-Universität Bonn
ulbbnediss.affiliation.location: Bonn
ulbbnediss.thesis.level: Dissertation
ulbbnediss.dissID: 5679
ulbbnediss.date.accepted: 13.09.2019
ulbbnediss.institute: Philosophische Fakultät : Institut für Psychologie
ulbbnediss.fakultaet: Philosophische Fakultät
dc.contributor.coReferee: Volbert, Renate

