Tagged "measurement invariance"

Method-Toolbox

Sample size estimation in Item Response Theory

Although Item Response Theory (IRT) models offer well-established psychometric advantages over traditional scoring methods, they remain underutilized in practice. One reason for this is the challenge of meeting the (presumably) larger sample size requirements, especially in complex measurement designs. A priori sample size estimation is essential for obtaining accurate estimates of item and person parameters, effects, and model fit. As such, it serves as a key tool for effective study planning, especially for pre-registration and registered reports.
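One common way to approach such estimation is a Monte Carlo simulation: generate data under assumed item parameters, refit the model, and check how well the parameters are recovered at different sample sizes. Here is a minimal sketch using the mirt package; the item parameters, sample sizes, and number of replications are purely illustrative assumptions, not recommendations.

```r
# Minimal Monte Carlo sketch for a priori sample size planning in IRT.
# All parameter values below are illustrative assumptions.
library(mirt)

set.seed(42)
n_items <- 20
a <- matrix(rlnorm(n_items, 0.2, 0.2))   # assumed discrimination parameters
d <- matrix(rnorm(n_items, 0, 1))        # assumed intercept parameters

recovery <- function(N, replications = 20) {
  rmse <- replicate(replications, {
    dat <- simdata(a = a, d = d, N = N, itemtype = "dich")
    fit <- mirt(dat, 1, itemtype = "2PL", verbose = FALSE)
    est <- coef(fit, simplify = TRUE)$items
    sqrt(mean((est[, "a1"] - a)^2))      # RMSE of the slope estimates
  })
  mean(rmse)
}

# Inspect how parameter recovery improves with sample size
sapply(c(250, 500, 1000), recovery)
```

A sample size can then be chosen as the smallest N at which recovery (or power for the effect of interest) reaches a pre-specified threshold.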

Science self-concept – More than the sum of its parts?

The article “Science Self-Concept – More Than the Sum of Its Parts?” has now been published in The Journal of Experimental Education (which, by the way, has existed since 1932). The first 50 copies are free, in case you are interested. It is also my first preprint. 😀 Is a general science self-concept equivalent to an aggregated subject-specific science self-concept? The article deals with different modeling approaches, measurement invariance, and concepts of equivalence. Check it out! Comment if you like: https://t.

Testing for equivalence of test data across media

In 2009, I wrote a small chapter that was part of an EU conference book on the transition to computer-based assessment. Every now and then I come back to this piece of work in my teaching and my publications (e.g., the EJPA paper on testing reasoning ability across different devices). Now I want to make it publicly available, and hopefully it will be interesting to some of you. The chapter is the (unaltered) preprint version of the book chapter, so if you want to cite it, please use the following citation:

Pitfalls in measurement invariance testing

In a new paper in the European Journal of Psychological Assessment, Timo Gnambs and I examined the soundness of reporting measurement invariance (MI) testing in the context of multigroup confirmatory factor analysis (MGCFA). Of course, there are several good primers on MI testing (e.g., Cheung & Rensvold, 2002; Wicherts & Dolan, 2010) and textbooks that elaborate on its theoretical basis (e.g., Millsap, 2011), but a clearly written tutorial with example syntax showing how to implement MI testing in practice was still missing.
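For readers unfamiliar with the procedure, the typical MGCFA steps (configural, metric, scalar) look roughly as follows in lavaan. This is only a generic illustration, not the syntax from the paper; the one-factor model, the item names x1–x4, and the grouping variable are hypothetical.

```r
# Illustrative MGCFA measurement invariance steps in lavaan.
# Model, item names, and grouping variable are hypothetical.
library(lavaan)

model <- 'F =~ x1 + x2 + x3 + x4'

fit_configural <- cfa(model, data = dat, group = "group")
fit_metric     <- cfa(model, data = dat, group = "group",
                      group.equal = "loadings")
fit_scalar     <- cfa(model, data = dat, group = "group",
                      group.equal = c("loadings", "intercepts"))

# Compare the nested models via chi-square difference tests
lavTestLRT(fit_configural, fit_metric, fit_scalar)
```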

Longitudinal measurement invariance testing with categorical data

In a recent paper – Edossa, Schroeders, Weinert, & Artelt (2018) – we came across the issue of longitudinal measurement invariance testing with categorical data. There are quite good primers and textbooks on longitudinal measurement invariance testing with continuous data (e.g., Geiser, 2013). However, at the time of writing the manuscript, there was no published application of measurement invariance testing with longitudinal categorical data. In case you are interested in using such an invariance testing procedure, we uploaded the R syntax for all measurement invariance steps.
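The full step-by-step syntax accompanies the paper; as a condensed sketch of the general idea in lavaan, one compares a configural model against a model with loadings and thresholds constrained equal across waves via parameter labels. Everything below is illustrative: three binary items y1–y3 measured at two time points, with identification details (latent means, residual variances under the theta parameterization) omitted for brevity.

```r
# Condensed sketch of longitudinal invariance testing with
# ordered-categorical data. Item names, waves, and the constraint
# pattern are illustrative; see the paper's syntax for the full steps.
library(lavaan)

# Configural model: same structure at both waves, parameters free
configural <- '
  F1 =~ y1t1 + y2t1 + y3t1
  F2 =~ y1t2 + y2t2 + y3t2
  # residual correlations of identical items across time
  y1t1 ~~ y1t2
  y2t1 ~~ y2t2
  y3t1 ~~ y3t2
'

# Invariance model: equal loadings and thresholds via labels
invariant <- '
  F1 =~ l1*y1t1 + l2*y2t1 + l3*y3t1
  F2 =~ l1*y1t2 + l2*y2t2 + l3*y3t2
  y1t1 ~~ y1t2
  y2t1 ~~ y2t2
  y3t1 ~~ y3t2
  # equal thresholds across time (binary items: one threshold each)
  y1t1 | th1*t1
  y1t2 | th1*t1
  y2t1 | th2*t1
  y2t2 | th2*t1
  y3t1 | th3*t1
  y3t2 | th3*t1
'

ord <- c("y1t1", "y2t1", "y3t1", "y1t2", "y2t2", "y3t2")
fit_conf <- cfa(configural, data = dat, ordered = ord,
                parameterization = "theta")
fit_inv  <- cfa(invariant, data = dat, ordered = ord,
                parameterization = "theta")
lavTestLRT(fit_conf, fit_inv)
```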

Equivalence of screen versus print reading comprehension depends on task complexity and proficiency

Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. https://doi.org/10.1080/0163853X.2017.1319653

Abstract. As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at word, sentence, and text level with separate speeded subtests.