Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. doi:10.1080/0163853X.2017.1319653
Abstract. As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes at different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at the word, sentence, and text level with separate speeded subtests. Children from grades 1 to 6 completed a test version either on paper or on a computer under time constraints. In general, children in the screen condition worked faster but at the expense of accuracy. This difference was more pronounced for younger children and at the word level. Based on our results, we suggest that remedial education and interventions for younger children using computer-based approaches should likewise foster speed and accuracy in a balanced way.
Reference. Moehring, A., Schroeders, U., Leichtmann, B., & Wilhelm, O. (2016). Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage. Intelligence, 59, 170–180. doi:10.1016/j.intell.2016.10.003
Abstract. The ability to comprehend new information is closely related to the successful acquisition of new knowledge. With the ubiquitous availability of the Internet, the procurement of information online constitutes a key aspect in education, work, and our leisure time. In order to investigate individual differences in digital literacy, test takers were presented with health-related comprehension problems with task-specific time restrictions. Instead of reading a given text, they were instructed to search the Internet for the information required to answer the questions. We investigated the relationship between this newly developed test and fluid and crystallized intelligence, while controlling for computer usage, in two studies with adults (n1 = 120) and vocational high school students (n2 = 171). Structural equation modeling was used to investigate the amount of unique variance explained by each predictor. In both studies, about 85% of the variance in the digital literacy factor could be explained by reasoning and knowledge while computer usage did not add to the variance explained. In Study 2, prior health-related knowledge was included as a predictor instead of general knowledge. While the influence of fluid intelligence remained significant, prior knowledge strongly influenced digital literacy (β=.81). Together both predictor variables explained digital literacy exhaustively. These findings are in line with the view that knowledge is a major determinant of higher-level cognition. Further implications about the influence of the restrictiveness of the testing environment are discussed.
Reference. Schroeders, U., Bucholtz, N., Formazin, M., & Wilhelm, O. (2013). Modality specificity of comprehension abilities in the sciences. European Journal of Psychological Assessment, 29, 3–11. doi:10.1027/1015-5759/a000114
Abstract. The measurement of science achievement is often unnecessarily restricted to the presentation of reading comprehension items that are sometimes enriched with graphs, tables, and figures. In a newly developed viewing comprehension task, participants watched short videos covering different science topics and were subsequently asked several multiple-choice comprehension questions. Research questions were whether viewing comprehension (1) can be measured adequately, (2) is perfectly collinear with reading comprehension, and (3) can be regarded as a linear function of reasoning and acquired knowledge. High-school students (N = 216) worked on a paper-based reading comprehension task, a viewing comprehension task delivered on handheld devices, a science knowledge test, and three fluid intelligence measures. The data show that, first, the new viewing comprehension test performed well psychometrically; second, performance in both comprehension tasks was essentially perfectly collinear; third, fluid intelligence and domain-specific knowledge fully accounted for the ability to comprehend texts and videos. We conclude that neither test medium (paper-pencil versus handheld device) nor test modality (reading versus viewing) is decisive for comprehension ability in the natural sciences. Fluid intelligence and, even more strongly, domain-specific knowledge turned out to be exhaustive predictors of comprehension performance.
Reference. Schroeders, U., & Wilhelm, O. (2011). Equivalence of reading and listening comprehension across test media. Educational and Psychological Measurement, 71, 849–869. doi:10.1177/0013164410391468
Abstract. Whether an ability test delivered on either paper or computer provides the same information is an important question in applied psychometrics. Besides the validity, it is also the fairness of a measure that is at stake if the test medium affects performance. This study provides a comprehensive review of existing equivalence research in the field of reading and listening comprehension in English as a foreign language and specifies factors that are likely to have an impact on equivalence. Taking into account these factors, comprehension measures were developed and tested with N = 442 high school students. Using multigroup confirmatory factor analysis, it is shown that reading and listening comprehension both were measurement invariant across test media. Nevertheless, it is argued that equivalence of data gathered on paper and computer depends on the specific measure or construct, the participants or the recruitment mechanisms, and the software and hardware realizations. Therefore, equivalence research is required for specific instantiations unless generalizable knowledge about factors affecting equivalence is available. Multigroup confirmatory factor analysis is an appropriate and effective tool for the assessment of the comparability of test scores across test media.
Reference. Schroeders, U., & Wilhelm, O. (2011). Computer usage questionnaire: Structure, correlates, and gender differences. Computers in Human Behavior, 27, 899–904. doi:10.1016/j.chb.2010.11.015
Abstract. Computer usage, computer experience, computer familiarity, and computer anxiety are often discussed as constructs potentially compromising computer-based ability assessment. After presenting and discussing these constructs and associated measures, we introduce a brief new questionnaire assessing computer usage. The self-report measure consists of 18 questions asking for the frequency of different computer activities and software usage. Participants were N = 976 high school students who completed the questionnaire and several covariates. Based on theoretical considerations and data-driven adjustments, a model with a general computer usage factor and three nested content factors (Office, Internet, and Games) is established for a subsample (n = 379) and cross-validated with the remaining sample (n = 597). Weak measurement invariance across gender groups could be established using multi-group confirmatory factor analysis. Differential relations between the questionnaire factors and self-report scales of computer usage, self-concept, and evaluation are reported separately for females and males. It is concluded that computer usage is distinct from other behavior-oriented measurement approaches and that it shows a diverging, gender-specific pattern of relations with fluid and crystallized intelligence.