
Testing for equivalence of test data across media

In 2009, I wrote a small chapter for an EU conference book on the transition to computer-based assessment. Every now and then I come back to this piece of work in my teaching and my publications (e.g., the EJPA paper on testing reasoning ability across different devices). Now I want to make it publicly available; hopefully, it will be of interest to some of you. The chapter is the (unaltered) preprint version of the book chapter, so if you want to cite it, please use the following citation:

Pitfalls in measurement invariance testing

In a new paper in the European Journal of Psychological Assessment, Timo Gnambs and I examined the soundness of reporting measurement invariance (MI) testing in the context of multigroup confirmatory factor analysis (MGCFA). Of course, there are several good primers on MI testing (e.g., Cheung & Rensvold, 2002; Wicherts & Dolan, 2010) and textbooks that elaborate on its theoretical basis (e.g., Millsap, 2011), but a clearly written tutorial with example syntax showing how to implement MI testing in practice was still missing.

Recalculating df in MGCFA testing

Please cite as follows: Schroeders, U., & Gnambs, T. (2020). Degrees of freedom in multigroup confirmatory factor analyses: Are models of measurement invariance testing correctly specified? European Journal of Psychological Assessment, 36(1), 105–113. https://doi.org/10.1027/1015-5759/a000500

[Interactive df calculator with the following inputs: A. Number of indicators, B. Number of factors, C. Number of cross-loadings, D. Number of orthogonal factors, E. Number of residual covariances, F. Number of groups]

MI testing   constraints                       df   comparison        delta(df)
config.      (item:factor)                     0    -                 -
metric       (loadings)                        2    metric-config     2
scalar       (loadings+intercepts)             4    scalar-metric     2
residual     (loadings+residuals)              5    residual-metric   3
strict       (loadings+intercepts+residuals)   7    strict-scalar     3

Additional information: A indicates the number of indicators or items.
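The df gains in the table can be reproduced with simple bookkeeping. As a minimal sketch (not the paper's calculator), assume a single factor, marker-variable identification (first loading fixed to 1 in each group), a mean structure, and no cross-loadings or residual covariances; the function name and layout are my own:

```python
def mgcfa_delta_df(p, groups=2):
    """Df gained at each invariance step relative to the configural model,
    for p indicators loading on one factor across `groups` groups.
    Assumes marker-variable identification (one loading fixed per group)."""
    g = groups
    free_loadings = p - 1   # one marker loading is fixed, not estimated
    intercepts = p
    residuals = p

    # Equating a set of k free parameters across g groups removes
    # k * (g - 1) parameters, i.e., adds k * (g - 1) df.
    def constrain(k):
        return k * (g - 1)

    metric = constrain(free_loadings)
    # Scalar: intercepts equated, but latent means are freed in all
    # non-reference groups, which costs (g - 1) parameters again.
    scalar = metric + constrain(intercepts) - (g - 1)
    residual = metric + constrain(residuals)
    strict = scalar + constrain(residuals)
    return {"metric": metric, "scalar": scalar,
            "residual": residual, "strict": strict}

# Three indicators, two groups -- the scenario behind the table above:
print(mgcfa_delta_df(3))
# -> {'metric': 2, 'scalar': 4, 'residual': 5, 'strict': 7}
```

With p = 3 and two groups this reproduces the df column (0, 2, 4, 5, 7 relative to the configural model), and the deltas between adjacent rows follow directly. Under fixed-factor-variance identification instead of a marker loading, the metric step would gain p rather than p - 1 df per extra group.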