Tagged "MGCFA"

Testing for equivalence of test data across media

In 2009, I wrote a small chapter that was part of an EU conference book on the transition to computer-based assessment. Now and then I come back to this piece of work - in my teaching and in my publications (e.g., the EJPA paper on testing reasoning ability across different devices). Now I want to make it publicly available. Hopefully, it will be interesting to some of you. The chapter is the (unaltered) preprint version of the book chapter, so if you want to cite it, please use the following citation:

Pitfalls in measurement invariance testing

In a new paper in the European Journal of Psychological Assessment, Timo Gnambs and I examined the soundness of reporting measurement invariance (MI) testing in the context of multigroup confirmatory factor analysis (MGCFA). Of course, there are several good primers on MI testing (e.g., Cheung & Rensvold, 2002; Wicherts & Dolan, 2010) and textbooks that elaborate on the theoretical basis (e.g., Millsap, 2011), but a clearly written tutorial with example syntax showing how to implement MI testing in practice was still missing. In the first part of the paper, we demonstrate that a soberingly large number of reported degrees of freedom (df) do not match the df recalculated from the information given in the articles. More specifically, both of us reviewed 128 studies comprising 302 MGCFA measurement invariance testing procedures, drawn from six leading peer-reviewed journals that focus on psychological assessment and publish MI research on a regular basis. Overall, about a quarter of all articles included at least one discrepancy, with some systematic differences between the journals. Interestingly, the metric and scalar steps of invariance testing were affected more frequently.

Recalculating df in MGCFA testing


Please cite as follows:
Schroeders, U., & Gnambs, T. (2020). Degrees of freedom in multigroup confirmatory factor analyses: Are models of measurement invariance testing correctly specified? European Journal of Psychological Assessment, 36(1), 105–113. https://doi.org/10.1027/1015-5759/a000500

Inputs:

A. Number of indicators
B. Number of factors
C. Number of cross-loadings
D. Number of orthogonal factors
E. Number of residual covariances
F. Number of groups

Example output (df per invariance testing step):

MI testing   Constraints                             df   Comparison            Δdf
configural   (item:factor)                            0   -                     -
metric       (loadings)                               2   metric - configural   2
scalar       (loadings + intercepts)                  4   scalar - metric       2
residual     (loadings + residuals)                   5   residual - metric     3
strict       (loadings + intercepts + residuals)      7   strict - scalar       3

Additional information

A Indicates the number of indicators or items.
B Indicates the number of latent variables or factors.
C Indicates the number of cross-loadings. For example, in the case of a bifactor model the number equals twice the number of indicators (A).
D Indicates the number of orthogonal factors. For example, in the case of a nested factor model with six indicators loading on a common factor and three items additionally loading on a nested factor, you have to specify two factors (B) and one orthogonal factor (D).
E Indicates the number of residual covariances.
F Indicates the number of groups.
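
For a quick check without the calculator, the df of the common invariance steps can also be recalculated by hand. The following Python snippet is a minimal sketch of that arithmetic (it is not the code behind the calculator): it only uses inputs A, B, and F and assumes a simple-structure model with no cross-loadings (C), orthogonal factors (D), or residual covariances (E), marker-variable identification, and factor means fixed to zero in the configural model. The function name mgcfa_df is just for illustration.

    def mgcfa_df(n_indicators, n_factors, n_groups):
        """Recalculate df for common MGCFA measurement invariance steps.

        Simplifying assumptions: simple structure (each indicator loads on
        exactly one factor), correlated factors, no residual covariances,
        marker-variable identification, and factor means fixed to zero in
        the configural model.
        """
        p, m, g = n_indicators, n_factors, n_groups

        # Observed moments per group: variances/covariances plus means
        moments = p * (p + 1) // 2 + p

        # Free parameters per group in the configural model:
        # loadings (one marker per factor is fixed), factor variances,
        # factor covariances, intercepts, and residual variances
        params = (p - m) + m + m * (m - 1) // 2 + p + p

        df = {"configural": g * (moments - params)}
        # metric: loadings constrained equal across groups
        df["metric"] = df["configural"] + (p - m) * (g - 1)
        # scalar: intercepts additionally constrained,
        # factor means freed in all but the reference group
        df["scalar"] = df["metric"] + (p - m) * (g - 1)
        # residual: loadings and residual variances constrained
        df["residual"] = df["metric"] + p * (g - 1)
        # strict: loadings, intercepts, and residual variances constrained
        df["strict"] = df["scalar"] + p * (g - 1)
        return df

    # Example: 3 indicators, 1 factor, 2 groups reproduces the df
    # shown in the table above (0, 2, 4, 5, 7)
    print(mgcfa_df(n_indicators=3, n_factors=1, n_groups=2))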

Further reading

  • Beaujean, A. A. (2014). Latent variable modeling using R: A step-by-step guide. New York: Routledge/Taylor & Francis Group.
  • Millsap, R. E., & Olivera-Aguilar, M. (2012). Investigating measurement invariance using confirmatory factor analysis. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 380–392). New York: Guilford Press.
  • Kline, R. B. (2011). Principles and practice of structural equation modeling. New York: Guilford Press.