Tagged "psychometrics"

Age-related nuances in knowledge assessment - Much ado about machine learning

This is the third post in a series on a paper — "Age-related nuances in knowledge assessment" — we recently published in Intelligence. The first post reflected on how knowledge is organized, the second post dealt with psychometric issues. This post is more mathematical (yes, there will be some formulae) and offers a cautionary note on the use of machine learning algorithms, which have positively influenced research in various scientific disciplines such as astrophysics, genetics, and medicine.

Age-related nuances in knowledge assessment - A modeling perspective

This is the second post in a series on a recent paper entitled “Age-related nuances in knowledge assessment” that we wrote with Luc Watrin and Oliver Wilhelm. The first post dealt with how we conceptualize the organization of knowledge as a hierarchy in a multidimensional knowledge space. This second post reflects on the way we measure or model knowledge. In textbooks, knowledge assessments have a special standing because they can be modeled from both a reflective and a formative perspective.
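To make the contrast concrete, here is a minimal lavaan sketch of the two specifications. The item names k1–k5, the outcome variable y, and the data frame dat are hypothetical placeholders, not the variables used in the paper.

```r
library(lavaan)

# Reflective perspective: the items are interchangeable indicators
# caused by the latent knowledge factor (a standard CFA)
reflective <- 'knowledge =~ k1 + k2 + k3 + k4 + k5'
fit_reflective <- cfa(reflective, data = dat)

# Formative perspective: knowledge is a composite defined by its
# indicators (lavaan's <~ operator); a purely formative model is not
# identified on its own, so at least one outcome of the composite
# (here the hypothetical y) is included
formative <- '
  knowledge <~ k1 + k2 + k3 + k4 + k5
  y ~ knowledge
'
fit_formative <- sem(formative, data = dat)
```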

Method-Toolbox

Multigroup measurement invariance testing (in R and Mplus) Measurement invariance (MI) is a key concept in psychological assessment and a fundamental prerequisite for meaningful comparisons across groups. In the prevalent approach, multi-group confirmatory factor analysis (MGCFA), specific measurement parameters are constrained to equality across groups to test for (a) configural MI, (b) metric MI, (c) scalar MI, and (d) strict MI. In the online supplement to Schroeders & Gnambs (2018), we provide example syntax for all steps of MI testing in lavaan and Mplus for different ways of scaling latent variables: identification by (a) marker variable, (b) reference group, and (c) effects coding.
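As a quick illustration of these steps (not the full supplement syntax), a minimal lavaan sketch could look as follows; the one-factor model, the item names x1–x4, the grouping variable group, and the data frame dat are placeholders.

```r
library(lavaan)

# One-factor measurement model; lavaan's default scaling is the
# marker-variable approach (first loading fixed to 1)
model <- 'f =~ x1 + x2 + x3 + x4'

# (a) configural MI: same factor structure, all parameters free
fit_configural <- cfa(model, data = dat, group = "group")

# (b) metric MI: factor loadings constrained equal across groups
fit_metric <- cfa(model, data = dat, group = "group",
                  group.equal = "loadings")

# (c) scalar MI: loadings and intercepts constrained equal
fit_scalar <- cfa(model, data = dat, group = "group",
                  group.equal = c("loadings", "intercepts"))

# (d) strict MI: loadings, intercepts, and residual variances equal
fit_strict <- cfa(model, data = dat, group = "group",
                  group.equal = c("loadings", "intercepts", "residuals"))

# Likelihood-ratio tests between adjacent invariance levels
anova(fit_configural, fit_metric, fit_scalar, fit_strict)
```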

Age-related nuances in knowledge assessment - A hierarchy of knowledge

We published a new paper entitled “Age-related nuances in knowledge assessment” in Intelligence. I really like this paper because it deals with the way we assess, model, and understand knowledge. And, by the way, it employs machine learning methods. Thus, both in terms of content and methodology, it hopefully sets the stage for promising future research avenues. I would like to cover some of the key findings in a series of blog posts.

Research

Intelligence Research Our understanding of intelligence has been — and still is — significantly influenced by the development and application of new testing procedures as well as novel computational and statistical methods. In science, methodological developments typically follow new theoretical ideas. In intelligence research, however, great breakthroughs often happened in the reverse order: the once-novel factor analytic tools, for instance, preceded and facilitated new theoretical ideas such as the theory of multiple group factors of intelligence.

Science self-concept – More than the sum of its parts?

The article “Science Self-Concept – More Than the Sum of its Parts?” has now been published in “The Journal of Experimental Education” (which, by the way, has been in existence since 1932). The first 50 copies are free, in case you are interested. My first preprint. 😀 Is a general science self-concept equivalent to an aggregate of subject-specific science self-concepts? It's about different modeling approaches, measurement invariance, and concepts of equivalence. Check it out! Comment if you like: https://t.

Meta-heuristics in short scale construction

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). Meta-heuristics in short scale construction: Ant Colony Optimization and Genetic Algorithm. PLOS ONE, 11, e0167110. doi:10.1371/journal.pone.0167110 Abstract. The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, has caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could (a) change the internal structure of the measure, (b) result in poorer reliability and measurement precision, (c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and (d) reduce test-criterion relations.
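To give a flavor of how such a meta-heuristic proceeds, here is a toy sketch of genetic-algorithm item selection in R. It runs on simulated data and uses Cronbach's alpha as the sole fitness criterion; the actual implementation in the paper optimizes a more elaborate set of psychometric criteria.

```r
set.seed(42)

# Toy item pool: 500 simulated respondents, 20 items
n_items <- 20
k <- 8                      # target length of the short scale
dat <- as.data.frame(matrix(rnorm(500 * n_items), 500, n_items))

# Cronbach's alpha as a simple fitness function
cronbach_alpha <- function(x) {
  v <- var(x)
  p <- ncol(x)
  p / (p - 1) * (1 - sum(diag(v)) / sum(v))
}
fitness <- function(items) cronbach_alpha(dat[, items])

# Start with a population of random k-item subsets
pop <- replicate(50, sample(n_items, k), simplify = FALSE)

for (generation in 1:100) {
  fit <- vapply(pop, fitness, numeric(1))
  parents <- pop[order(fit, decreasing = TRUE)[1:10]]     # selection
  pop <- lapply(seq_len(50), function(i) {
    pair <- sample(parents, 2)
    child <- sample(union(pair[[1]], pair[[2]]), k)       # crossover
    if (runif(1) < 0.2)                                   # mutation
      child[sample(k, 1)] <- sample(setdiff(seq_len(n_items), child), 1)
    child
  })
}

# Best short form found
best <- pop[[which.max(vapply(pop, fitness, numeric(1)))]]
sort(best)
```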