Oberlader, Verena: Meta-Analyses on the Validity of Verbal Tools for Credibility Assessment. - Bonn, 2019. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-56794
@phdthesis{handle:20.500.11811/8161,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-56794},
author = {Oberlader, Verena},
title = {Meta-Analyses on the Validity of Verbal Tools for Credibility Assessment},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2019,
month = dec,

note = {Since ancient times, approaches to distinguishing between true and deceptive statements have been of particular importance in the context of court decisions. However, the applicability of most psychophysiological or behavioral measures of deception is critically discussed. Verbal tools for credibility assessment, nonetheless, are widely used. They rest on the assumption that the quality of experience-based statements differs from the quality of fabricated accounts. To test the validity of two prominent procedures, Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM), a random-effects meta-analysis (REMA) was conducted on 52 English- and German-language studies in Meta-Analysis 1. The REMA revealed a large point estimate with moderate to large effect sizes in the confidence interval. This finding applied to both CBCA and RM, although (1) there was a high level of heterogeneity between studies that could not be resolved by moderator analyses, and (2) it cannot be ruled out that the effect size estimates are biased and that verbal tools for credibility assessment therefore work only to a smaller extent. However, a recent simulation study cast doubt on these findings: it showed that the meta-analytic methods used in Meta-Analysis 1 lead to false-positive rates of up to 100% if data sets are biased. To test the robustness of the previous findings, a reanalysis with different bias-correcting meta-analytic methods was conducted on an updated set of 71 studies in Meta-Analysis 2. The overall effect size estimates ranged from a null effect to conventionally large effect sizes. Taking into account the specific strengths and limitations of each meta-analytic method, the results indicated that CBCA and RM distinguish between experience-based and fabricated statements with moderate to large effect sizes.
In contrast, the Scientific Content Analysis (SCAN), a third verbal tool for credibility assessment that was also tested on the updated data set of Meta-Analysis 2, did not discriminate between truths and lies and should thus not be used in practice.},
url = {https://hdl.handle.net/20.500.11811/8161}
}

License: InCopyright