Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. doi:10.1080/0163853X.2017.1319653
Abstract. As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at word, sentence, and text level with separate speeded subtests. Children from grades 1 to 6 completed either a test version on paper or via computer under time constraints. In general, children in the screen condition worked faster but at the expense of accuracy. This difference was more pronounced for younger children and at the word level. Based on our results, we suggest that remedial education and interventions for younger children using computer-based approaches should likewise foster speed and accuracy in a balanced way.
Reference. Schroeders, U., Bucholtz, N., Formazin, M., & Wilhelm, O. (2013). Modality specificity of comprehension abilities in the sciences. European Journal of Psychological Assessment, 29, 3–11. doi:10.1027/1015-5759/a000114
Abstract. The measurement of science achievement is often unnecessarily restricted to the presentation of reading comprehension items that are sometimes enriched with graphs, tables, and figures. In a newly developed viewing comprehension task, participants watched short videos covering different science topics and were subsequently asked several multiple-choice comprehension questions. Research questions were whether viewing comprehension (1) can be measured adequately, (2) is perfectly collinear with reading comprehension, and (3) can be regarded as a linear function of reasoning and acquired knowledge. High-school students (N = 216) worked on a paper-based reading comprehension task, a viewing comprehension task delivered on handheld devices, a science knowledge test, and three fluid intelligence measures. The data show that, first, the new viewing comprehension test worked psychometrically fine; second, performance in both comprehension tasks was essentially perfectly collinear; third, fluid intelligence and domain-specific knowledge fully accounted for the ability to comprehend texts and videos. We conclude that neither test medium (paper-pencil versus handheld device) nor test modality (reading versus viewing) is decisive for comprehension ability in the natural sciences. Fluid intelligence and, even more strongly, domain-specific knowledge turned out to be exhaustive predictors of comprehension performance.
Reference. Schroeders, U., & Wilhelm, O. (2011). Equivalence of reading and listening comprehension across test media. Educational and Psychological Measurement, 71, 849–869. doi:10.1177/0013164410391468
Abstract. Whether an ability test delivered on either paper or computer provides the same information is an important question in applied psychometrics. Besides the validity, it is also the fairness of a measure that is at stake if the test medium affects performance. This study provides a comprehensive review of existing equivalence research in the field of reading and listening comprehension in English as a foreign language and specifies factors that are likely to have an impact on equivalence. Taking into account these factors, comprehension measures were developed and tested with N = 442 high school students. Using multigroup confirmatory factor analysis, it is shown that reading and listening comprehension both were measurement invariant across test media. Nevertheless, it is argued that equivalence of data gathered on paper and computer depends on the specific measure or construct, the participants or the recruitment mechanisms, and the software and hardware realizations. Therefore, equivalence research is required for specific instantiations unless generalizable knowledge about factors affecting equivalence is available. Multigroup confirmatory factor analysis is an appropriate and effective tool for the assessment of the comparability of test scores across test media.
Reference. Schroeders, U., Wilhelm, O., & Bucholtz, N. (2010). Reading, listening, and viewing comprehension in English as a foreign language: One or more constructs? Intelligence, 38, 562–573. doi:10.1016/j.intell.2010.09.003
Abstract. Receptive foreign language proficiency is usually measured with reading and listening comprehension tasks. A novel approach to assess such proficiencies – viewing comprehension – is based on the presentation of short instructional videos followed by one or more comprehension questions concerning the preceding video stimulus. In order to evaluate a newly developed viewing comprehension test, 485 German high school students completed reading, listening, and viewing comprehension tests, all measuring the receptive proficiency in English as a foreign language. Fluid and crystallized intelligence were measured as predictors of performance. Relative to traditional comprehension tasks, the viewing comprehension task has similar psychometric qualities. The three comprehension tests are very highly but not perfectly correlated with each other. Relations with fluid and crystallized intelligence show systematic differences between the three comprehension tasks. The high overlap between foreign language comprehension measures and between crystallized intelligence and language comprehension ability can be taken as support for a uni-dimensional interpretation. Implications for the assessment of language proficiency are discussed.