Reference. Schroeders, U., Bucholtz, N., Formazin, M., & Wilhelm, O. (2013). Modality specificity of comprehension abilities in the sciences. European Journal of Psychological Assessment, 29, 3–11. doi: 10.1027/1015-5759/a000114
Abstract. The measurement of science achievement is often unnecessarily restricted to the presentation of reading comprehension items that are sometimes enriched with graphs, tables, and figures. In a newly developed viewing comprehension task, participants watched short videos covering different science topics and were subsequently asked several multiple-choice comprehension questions. Research questions were whether viewing comprehension (1) can be measured adequately, (2) is perfectly collinear with reading comprehension, and (3) can be regarded as a linear function of reasoning and acquired knowledge. High-school students (N = 216) worked on a paper-based reading comprehension task, a viewing comprehension task delivered on handheld devices, a science knowledge test, and three fluid intelligence measures. The data show that, first, the new viewing comprehension test performed well psychometrically; second, performance in both comprehension tasks was essentially perfectly collinear; and third, fluid intelligence and domain-specific knowledge fully accounted for the ability to comprehend texts and videos. We conclude that neither test medium (paper-pencil versus handheld device) nor test modality (reading versus viewing) is decisive for comprehension ability in the natural sciences. Fluid intelligence and, even more strongly, domain-specific knowledge turned out to be exhaustive predictors of comprehension performance.
Reference. Schroeders, U., Wilhelm, O., & Bucholtz, N. (2010). Reading, listening, and viewing comprehension in English as a foreign language: One or more constructs? Intelligence, 38, 562–573. doi: 10.1016/j.intell.2010.09.003
Abstract. Receptive foreign language proficiency is usually measured with reading and listening comprehension tasks. A novel approach to assess such proficiencies – viewing comprehension – is based on the presentation of short instructional videos followed by one or more comprehension questions concerning the preceding video stimulus. In order to evaluate a newly developed viewing comprehension test, 485 German high school students completed reading, listening, and viewing comprehension tests, all measuring receptive proficiency in English as a foreign language. Fluid and crystallized intelligence were measured as predictors of performance. Relative to traditional comprehension tasks, the viewing comprehension task has similar psychometric qualities. The three comprehension tests are very highly but not perfectly correlated with each other. Relations with fluid and crystallized intelligence show systematic differences between the three comprehension tasks. The high overlap between foreign language comprehension measures and between crystallized intelligence and language comprehension ability can be taken as support for a uni-dimensional interpretation. Implications for the assessment of language proficiency are discussed.
Reference. Schroeders, U., & Wilhelm, O. (2010). Testing reasoning ability with handheld computers, notebooks, and paper and pencil. European Journal of Psychological Assessment, 26, 284–292. doi: 10.1027/1015-5759/a000038
Abstract. Electronic devices can be used to enhance or improve cognitive ability testing. We compared three reasoning-ability measures delivered on handheld computers, notebooks, and paper-and-pencil to test whether or not the same underlying abilities were measured irrespective of the test medium. Rational item-generative principles were used to generate parallel item samples for a verbal, a numerical, and a figural reasoning test, respectively. All participants, 157 high school students, completed the three measures on each test medium. Competing measurement models were tested with confirmatory factor analyses. Results show that two test-medium factors for tests administered via notebooks and handheld computers, respectively, had small to negligible loadings, and that the correlation between these factors was not substantial. Overall, test medium was not a critical source of individual differences. Perceptual and motor skills are discussed as potential causes for test-medium factors.
Reference. Schroeders, U., Wilhelm, O., & Schipolowski, S. (2010). Internet-based ability testing. In S. D. Gosling, & J. A. Johnson (Eds.), Advanced methods for behavioral research on the Internet (pp. 131–148). Washington, DC: American Psychological Association.
Abstract. Compared with traditional paper-and-pencil tests (PPTs), Internet-based ability testing (IBAT) seems to offer a plethora of advantages: cost-effective data collection 24/7 from all over the world, enriching static content by implementing audio and video, registering auxiliary data such as reaction times, and storing data at some central location in a digital format. All this seems like the fulfillment of a researcher's dream. However, despite these auspicious possibilities, the dream can rapidly dissolve if the accompanying constraints and limitations of testing via the Internet are neglected. This chapter focuses on ways to effectively use the Internet for ability testing and on procedures that might help overcome the inherent shortcomings of the medium. Wherever possible, we highlight how to minimize the adverse effects and provide practical advice to improve Web-based measures. We offer a detailed step-by-step guide that helps structure the entire testing process, from the definition of the construct being assessed through the presentation of the findings. At the end of each step, we present an example from our own ongoing research that illustrates how to implement the step. Specifically, we show that IBAT can be used as a method to quickly gather information about the characteristics of a measure at an early stage of development. We hope to provide information that helps you decide which kinds of ability tests can be administered via the Internet and for which purposes, and we give practical advice on how to implement your own test on the Internet. For an easy start, the provided code can easily be altered to meet specific needs. We also list links to detailed information on PHP, a powerful programming language that enables researchers to use the full functionality of a PC, for example, for video streaming (see Additional Resources A, C).
Drawbacks of IBAT might include a lack of experimental control with regard to individual differences in motivation and time allocation, as well as inscrutable distortions due to self-selection processes. Also, programming an online measure can be quite time-consuming. But once you know how to tackle these obstacles, IBAT is a powerful tool for collecting data independent of time and geographic constraints.