Ulrich Schroeders | Psychological Assessment

Science self-concept – More than the sum of its parts?

The article “Science Self-Concept – More Than the Sum of its Parts?” has now been published in “The Journal of Experimental Education” (which, btw, has been in existence since 1932). The first 50 copies are free, in case you are interested.

In comparison to the preprint version, some substantial changes have been made to the final version of the manuscript, especially in the research questions and in the presentation of the results. Due to word count restrictions, we also removed a section from the discussion in which we summarized differences and commonalities of bifactor vs. higher-order models. We also speculated about why the choice of model may depend on the study’s subject, that is, on conceptual differences between intelligence and self-concept research. The argumentation may be a bit wonky, but I find the idea persuasive enough to want to reproduce it in the following. If you have any comments, please feel free to drop me a line.
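For readers less familiar with the two models, one standard way to see their formal relationship (a textbook Schmid–Leiman sketch, not the removed passage itself) is that a higher-order model is a bifactor model with proportionality constraints on the loadings:

```latex
% Schmid-Leiman transformation of a standardized higher-order model:
% \lambda_{ij} = first-order loading of indicator i on factor j
% \gamma_{j}   = second-order loading of factor j on the general factor
\lambda^{\text{general}}_{i} = \lambda_{ij}\,\gamma_{j},
\qquad
\lambda^{\text{specific}}_{i} = \lambda_{ij}\,\sqrt{1-\gamma_{j}^{2}}
```

In the bifactor model, by contrast, general and specific loadings are estimated freely, which is one reason the two models can behave quite differently in practice.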

Testing for equivalence of test data across media

In 2009, I wrote a small chapter that was part of an EU conference book on the transition to computer-based assessment. Now and then I come back to this piece of work – in my teaching and my publications (e.g., the EJPA paper on testing reasoning ability across different devices). Now I want to make it publicly available. Hopefully, it will be interesting to some of you. This is the (unaltered) preprint version of the book chapter, so if you want to cite it, please use the following citation:

Pitfalls in measurement invariance testing

In a new paper in the European Journal of Psychological Assessment, Timo Gnambs and I examined the soundness of reporting measurement invariance (MI) testing in the context of multigroup confirmatory factor analysis (MGCFA). Of course, there are several good primers on MI testing (e.g., Cheung & Rensvold, 2002; Wicherts & Dolan, 2010) and textbooks that elaborate on the theoretical basis (e.g., Millsap, 2011), but a clearly written tutorial with example syntax showing how to implement MI testing in practice was still missing. In the first part of the paper, we demonstrate that a soberingly large number of reported degrees of freedom (df) do not match the df recalculated from the information given in the articles. More specifically, we reviewed 128 studies including 302 MGCFA measurement invariance testing procedures from six leading peer-reviewed journals that focus on psychological assessment. Overall, about a quarter of all articles included at least one discrepancy, with some systematic differences between the journals. Interestingly, the metric and scalar steps of invariance testing were more frequently affected.
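To give a concrete idea of what such a recalculation involves, here is a minimal sketch (not the checking procedure used in the paper) for a single-factor model, assuming marker-variable identification with a mean structure, latent means fixed to zero at the configural and metric steps, and latent means freed in all but the first group at the scalar step:

```python
def mgcfa_df(p, g):
    """Model df for configural, metric, and scalar invariance of a
    single-factor model with p indicators across g groups.

    Assumptions (one common convention; other identification choices
    change the numbers):
    - marker-variable identification (first loading fixed to 1),
    - mean structure included: p(p + 3)/2 observed statistics per group,
    - configural/metric: latent means fixed to 0 in every group,
    - scalar: latent means freed in all groups except the first.
    """
    stats = g * p * (p + 3) // 2          # variances, covariances, means

    # free parameters per group: (p - 1) loadings, p intercepts,
    # p residual variances, 1 latent variance
    configural = stats - g * 3 * p

    # metric: loadings equal across groups -> (g - 1) * (p - 1) fewer parameters
    metric = configural + (g - 1) * (p - 1)

    # scalar: intercepts equal across groups ((g - 1) * p fewer parameters),
    # but g - 1 latent means are freed
    scalar = metric + (g - 1) * p - (g - 1)

    return configural, metric, scalar


# Example: 4 indicators, 2 groups -> df = 4, 7, 10
print(mgcfa_df(p=4, g=2))
```

If the df reported for, say, the metric model cannot be reproduced by any plausible specification of this kind, something in the reported analysis or in its description does not add up.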

New methods and assessment approaches in intelligence research

Maybe you have seen my recent Tweet:

And this is the complete Call for Papers for the Special Issue in the Journal of Intelligence:

Dear Colleagues,
Our understanding of intelligence has been—and still is—significantly influenced by the development and application of new computational and statistical methods, as well as novel testing procedures. In science, methodological developments typically follow new theoretical ideas. In contrast, great breakthroughs in intelligence research followed the reverse order. For instance, the once-novel factor analytic tools preceded and facilitated new theoretical ideas such as the theory of multiple group factors of intelligence. Therefore, the way we assess and analyze intelligent behavior also shapes the way we think about intelligence.
We want to summarize recent and ongoing methodological advances inspiring intelligence research and facilitating thinking about new theoretical perspectives. This Special Issue will include contributions that:

Meta-analysis proctored vs. unproctored assessment

Our meta-analysis – Steger, Schroeders, & Gnambs (2018) – comparing test scores from proctored vs. unproctored assessment is now available as an online-first publication and will appear in a future issue of the European Journal of Psychological Assessment. In more detail, we examined mean score differences and correlations between both assessment contexts with a three-level random-effects meta-analysis based on 49 studies with 109 effect sizes. We think this is a timely topic, since web-based assessments are frequently compromised by a lack of control over the participants’ test-taking behavior, yet researchers nevertheless need to compare data obtained under unproctored test conditions with data from controlled settings. The inevitable question is to what extent such a comparison is feasible.
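For readers unfamiliar with the three-level approach, the basic idea (sketched here in generic notation, not taken verbatim from the paper) is that effect sizes are nested within studies, so heterogeneity is split into a within-study and a between-study component in addition to the known sampling error:

```latex
% y_{ij}: i-th effect size in study j;  v_{ij}: known sampling variance
y_{ij} = \mu + u_{j} + w_{ij} + e_{ij},
\qquad
u_{j} \sim N\!\left(0, \tau^{2}_{\text{between}}\right),\quad
w_{ij} \sim N\!\left(0, \tau^{2}_{\text{within}}\right),\quad
e_{ij} \sim N\!\left(0, v_{ij}\right)
```

Modeling the dependency this way avoids having to average effect sizes within studies or to discard all but one effect size per study.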
