Ulrich Schroeders | Psychological Assessment

Age-related nuances in knowledge assessment - A modeling perspective

This is the second post in a series on our recent paper entitled “Age-related nuances in knowledge assessment,” written with Luc Watrin and Oliver Wilhelm. The first post dealt with how we conceptualize the organization of knowledge as a hierarchy in a multidimensional knowledge space. This second post reflects on the way we measure, or model, knowledge. In textbooks, knowledge assessments have a special standing because they can be modeled from both a reflective and a formative perspective.
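For readers less familiar with that distinction, here is a minimal formal sketch in standard latent variable notation (my shorthand, not the paper's own formulas): in a reflective model the latent knowledge factor causes the item responses, whereas in a formative model the composite is defined by its indicators.

```latex
% Reflective: the latent factor \eta causes the observed indicators x_i
x_i = \lambda_i \, \eta + \varepsilon_i , \quad i = 1, \dots, p
% Formative: the composite \eta is defined as a weighted sum of its indicators
\eta = \sum_{i=1}^{p} \gamma_i \, x_i + \zeta
```

Under the reflective view, items are exchangeable indicators of the same construct; under the formative view, dropping an indicator changes the construct itself.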

Age-related nuances in knowledge assessment - A hierarchy of knowledge

We published a new paper entitled “Age-related nuances in knowledge assessment” in Intelligence. I really like this paper because it deals with the way we assess, model, and understand knowledge. And, btw, it employs machine learning methods. Thus, both in terms of content and methodology, it hopefully sets the stage for promising future research avenues. I would like to cover some of the key findings in a series of blog posts.

Science Self-Concept – More Than the Sum of its Parts?

The article “Science Self-Concept – More Than the Sum of its Parts?” has now been published in The Journal of Experimental Education (btw, in existence since 1932). The first 50 copies are free, in case you are interested. The core question: Is a general science self-concept equivalent to an aggregated subject-specific science self-concept? The paper is about different modeling approaches, measurement invariance, and concepts of equivalence. Check it out!

Testing for equivalence of test data across media

In 2009, I wrote a small chapter for an EU conference book on the transition to computer-based assessment. Now and then I come back to this piece of work in my teaching and my publications (e.g., the EJPA paper on testing reasoning ability across different devices). Now I want to make it publicly available. Hopefully, it will be interesting to some of you. What follows is the (unaltered) preprint version of the book chapter, so if you want to cite it, please use the following citation:

Pitfalls in measurement invariance testing

In a new paper in the European Journal of Psychological Assessment, Timo Gnambs and I examined the soundness of reporting on measurement invariance (MI) testing in the context of multigroup confirmatory factor analysis (MGCFA). Of course, there are several good primers on MI testing (e.g., Cheung & Rensvold, 2002; Wicherts & Dolan, 2010) and textbooks that elaborate on its theoretical basis (e.g., Millsap, 2011), but a clearly written tutorial with example syntax showing how to implement MI testing in practice was still missing.
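As a rough reminder of what is actually being tested (standard MGCFA notation, not taken from our paper), the usual sequence fits the same measurement model in each group g and constrains an increasing number of parameters to equality across groups:

```latex
% MGCFA measurement model in group g
x_{g} = \tau_{g} + \Lambda_{g} \, \xi_{g} + \delta_{g}
% Nested invariance hypotheses, each adding constraints to the previous one:
% configural: identical factor structure (pattern of loadings) across groups
% metric:     \Lambda_{g} = \Lambda          (equal loadings)
% scalar:     additionally \tau_{g} = \tau   (equal intercepts)
% strict:     additionally \Theta_{\delta,g} = \Theta_{\delta} (equal residual variances)
```

Each step is evaluated by comparing the fit of the more constrained model against the less constrained one.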
