Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. doi: 10.1080/0163853X.2017.1319653
Abstract. As reading and reading assessment are increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes at different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at the word, sentence, and text level with separate speeded subtests. Children from grades 1 to 6 completed the test either on paper or on computer under time constraints. In general, children in the screen condition worked faster but at the expense of accuracy. This difference was more pronounced for younger children and at the word level. Based on our results, we suggest that remedial education and interventions for younger children using computer-based approaches should foster speed and accuracy in a balanced way.
Reference. Schroeders, U., & Wilhelm, O. (2011). Equivalence of reading and listening comprehension across test media. Educational and Psychological Measurement, 71, 849–869. doi: 10.1177/0013164410391468
Abstract. Whether an ability test delivered on paper or on computer provides the same information is an important question in applied psychometrics. Besides validity, the fairness of a measure is also at stake if the test medium affects performance. This study provides a comprehensive review of existing equivalence research in the field of reading and listening comprehension in English as a foreign language and specifies factors that are likely to affect equivalence. Taking these factors into account, comprehension measures were developed and tested with N = 442 high school students. Using multigroup confirmatory factor analysis, it is shown that both reading and listening comprehension were measurement invariant across test media. Nevertheless, it is argued that the equivalence of data gathered on paper and computer depends on the specific measure or construct, the participants or recruitment mechanisms, and the software and hardware realizations. Therefore, equivalence research is required for specific instantiations unless generalizable knowledge about the factors affecting equivalence is available. Multigroup confirmatory factor analysis is an appropriate and effective tool for assessing the comparability of test scores across test media.
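The multigroup invariance logic behind this abstract can be illustrated with a toy chi-square difference test: fit a one-factor model to each test-medium group with loadings free, refit with loadings constrained equal, and compare fit. The following is a minimal hypothetical sketch (not the authors' analysis code; applied work would use dedicated SEM software), with simulated data in place of the study's measures:

```python
# Toy sketch of loading invariance across two "test media" groups,
# via a one-factor ML fit with free vs. equality-constrained loadings.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p = 4  # four hypothetical comprehension indicators


def simulate(n, loadings):
    """Simulate n respondents from a one-factor model."""
    eta = rng.normal(size=(n, 1))            # latent comprehension score
    eps = rng.normal(size=(n, p)) * 0.6      # unique (error) parts
    return eta @ loadings[None, :] + eps


def ml_discrepancy(S_list, n_list, shared_loadings):
    """Minimized ML discrepancy (~ chi-square) for a one-factor model
    fit to each group's covariance matrix; optionally constrain the
    loading vector to be equal across groups."""
    g = len(S_list)
    n_load = p if shared_loadings else p * g

    def f(theta):
        lams = theta[:n_load].reshape(-1, p)
        psis = np.exp(theta[n_load:]).reshape(g, p)  # log-parameterized uniquenesses
        total = 0.0
        for i, (S, n) in enumerate(zip(S_list, n_list)):
            lam = lams[0] if shared_loadings else lams[i]
            Sigma = np.outer(lam, lam) + np.diag(psis[i])
            _, logdet = np.linalg.slogdet(Sigma)
            total += n * (logdet + np.trace(S @ np.linalg.inv(Sigma))
                          - np.linalg.slogdet(S)[1] - p)
        return total

    x0 = np.concatenate([np.full(n_load, 0.7), np.zeros(p * g)])
    return minimize(f, x0, method="BFGS").fun


# Two groups generated with identical true loadings -> invariance should hold.
lam_true = np.array([0.8, 0.7, 0.75, 0.65])
groups = [simulate(400, lam_true), simulate(400, lam_true)]
S_list = [np.cov(x, rowvar=False) for x in groups]
n_list = [len(x) for x in groups]

chi2_free = ml_discrepancy(S_list, n_list, shared_loadings=False)
chi2_constrained = ml_discrepancy(S_list, n_list, shared_loadings=True)
delta_chi2 = chi2_constrained - chi2_free  # compare to chi-square with df = p
print(round(delta_chi2, 2))  # small relative to df -> loadings invariant
```

A small chi-square difference relative to its degrees of freedom (here, the p loadings constrained equal) is what supports the invariance conclusion; a large one would signal that the test medium changes what the indicators measure.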
Reference. Schroeders, U., & Wilhelm, O. (2010). Testing reasoning ability with handheld computers, notebooks, and paper and pencil. European Journal of Psychological Assessment, 26, 284–292. doi: 10.1027/1015-5759/a000038
Abstract. Electronic devices can be used to enhance or improve cognitive ability testing. We compared three reasoning-ability measures delivered on handheld computers, notebooks, and paper-and-pencil to test whether the same underlying abilities were measured irrespective of the test medium. Rational item-generative principles were used to generate parallel item samples for a verbal, a numerical, and a figural reasoning test. All participants, 157 high school students, completed the three measures on each test medium. Competing measurement models were tested with confirmatory factor analyses. Results show that the two test-medium factors for tests administered via notebooks and handheld computers, respectively, had small to negligible loadings and that the correlation between these factors was not substantial. Overall, test medium was not a critical source of individual differences. Perceptual and motor skills are discussed as potential causes for test-medium factors.