Tagged "open science"

Method-Toolbox

Sample size estimation in Item Response Theory

Although Item Response Theory (IRT) models offer well-established psychometric advantages over traditional scoring methods, they have largely been confined to specific areas of psychology, such as educational assessment and personnel selection, while their broader potential remains underused in practice. One reason for this is the challenge of meeting the (presumed) larger sample size requirements, especially in complex measurement designs. Reliable a priori sample size estimation is essential for obtaining accurate estimates of item and person parameters, effects, and model fit. As such, it is a key tool for effective study planning, especially for pre-registrations and registered reports.
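
The post does not prescribe a specific tool, but the general logic behind simulation-based sample size planning for IRT models can be sketched compactly: fix plausible item parameters, simulate responses for several candidate sample sizes, re-estimate the model, and check how precisely the parameters are recovered. The following Python sketch is purely illustrative, assuming a 2PL model, ten hypothetical items, and item-difficulty RMSE as the planning criterion; a real planning study would use many more replications, the intended measurement design, and dedicated IRT software.

```python
import numpy as np
from scipy.optimize import minimize
from numpy.polynomial.hermite_e import hermegauss

rng = np.random.default_rng(1)

def simulate_2pl(n_persons, a, b, rng):
    """Draw dichotomous responses from a 2PL model with theta ~ N(0, 1)."""
    theta = rng.standard_normal(n_persons)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(float)

def neg_marginal_loglik(params, data, nodes, weights):
    """Marginal likelihood of the 2PL: theta is integrated out by quadrature."""
    n_items = data.shape[1]
    a = np.exp(params[:n_items])          # log scale keeps discriminations positive
    b = params[n_items:]
    p = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))            # (quad, items)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    logl = data @ np.log(p).T + (1.0 - data) @ np.log(1.0 - p).T   # (persons, quad)
    return -np.sum(np.log(np.exp(logl) @ weights))

def fit_2pl(data, n_quad=21):
    """Crude marginal ML fit via L-BFGS-B; returns (a_hat, b_hat)."""
    n_items = data.shape[1]
    nodes, w = hermegauss(n_quad)               # Gauss-Hermite nodes for N(0, 1)
    weights = w / np.sqrt(2.0 * np.pi)
    start = np.zeros(2 * n_items)               # start at a = 1, b = 0
    res = minimize(neg_marginal_loglik, start, args=(data, nodes, weights),
                   method="L-BFGS-B")
    return np.exp(res.x[:n_items]), res.x[n_items:]

# Hypothetical planning grid: how precisely are item difficulties recovered per N?
true_a = rng.uniform(0.8, 2.0, size=10)
true_b = rng.uniform(-2.0, 2.0, size=10)
for n_persons in (250, 500, 1000):
    rmse = []
    for _ in range(10):                         # few replications; use far more in practice
        data = simulate_2pl(n_persons, true_a, true_b, rng)
        _, b_hat = fit_2pl(data)
        rmse.append(np.sqrt(np.mean((b_hat - true_b) ** 2)))
    print(f"N = {n_persons:4d}  RMSE(b) = {np.mean(rmse):.3f}")
```

The target sample size is then the smallest N at which the chosen precision criterion (here, difficulty RMSE) falls below a pre-specified threshold.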

Tests-Questionnaires

A 120-item gc test

This is a 120-item measure of crystallized intelligence (gc) or, more precisely, of declarative knowledge. Based on previous findings on the dimensionality of gc (Steger et al., 2019), we sampled items from four broad knowledge areas: humanities, life sciences, natural sciences, and social sciences. Each knowledge area contained three domains with ten items each, resulting in a total of 120 items. Items were selected to span a wide range of difficulty and to cover the content domain both broadly and deeply. The items are available in German and English. The development of the initial item pool is detailed in Steger et al. (2019). We used and described the 120-item gc measure in two recent publications (Schroeders et al., 2021; Watrin et al., 2021). The items can be found in the accompanying OSF project.

The Rosenberg Self-Esteem Scale - A Drosophila melanogaster of psychological assessment

I had the great opportunity to co-author two recent publications with Timo Gnambs, both dealing with the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965). As a reminder, the RSES is a popular ten-item self-report instrument measuring a respondent’s global self-worth and self-respect. However, neither paper is about the RSES per se; rather, both are applications of two recently introduced, powerful and flexible extensions of the Structural Equation Modeling (SEM) framework: Meta-Analytic Structural Equation Modeling (MASEM) and Locally Weighted Structural Equation Modeling (LSEM), which will be described in more detail later on.
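
As a rough illustration of the LSEM idea (a minimal sketch with made-up data, variable names, and bandwidth, not code from either paper): the model is re-estimated along a continuous moderator such as age, with every observation weighted by a kernel function of its distance to the current focal point. The snippet below demonstrates only this weighting step and the resulting local covariance matrices; in a real LSEM analysis these weighted samples would be passed to an SEM estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: a continuous moderator (age) and four indicators whose
# common-factor variance drifts with age.
n = 800
age = rng.uniform(18, 80, n)
factor = rng.standard_normal(n) * (0.8 + 0.004 * age)
indicators = factor[:, None] * np.array([0.7, 0.6, 0.8, 0.5]) \
    + rng.standard_normal((n, 4)) * 0.6

def kernel_weights(moderator, focal_point, bandwidth):
    """Gaussian kernel weights: observations near the focal point count most."""
    z = (moderator - focal_point) / bandwidth
    return np.exp(-0.5 * z ** 2)

def weighted_cov(x, w):
    """Weighted covariance matrix of x given observation weights w."""
    w = w / w.sum()
    xc = x - w @ x
    return (xc * w[:, None]).T @ xc / (1.0 - np.sum(w ** 2))

bandwidth = 6.0   # hypothetical; LSEM typically ties this to the moderator's spread
for focal_age in (25, 45, 65):
    w = kernel_weights(age, focal_age, bandwidth)
    cov = weighted_cov(indicators, w)
    # In LSEM, this local covariance matrix (or weighted sample) would now be
    # fed to an SEM package; here we just report one local variance.
    print(f"focal age {focal_age}: var(indicator 1) = {cov[0, 0]:.3f}")
```

MASEM follows a similar two-step logic at the level of studies rather than observations: correlation matrices from multiple samples are first pooled meta-analytically, and the pooled matrix is then analyzed with a structural equation model.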

Commitment to research transparency and open science

I signed the Commitment to Research Transparency and Open Science, which was originally formulated by Felix Schönbrodt, Markus Maier, Moritz Heene, and Michael Zehetleitner at LMU Munich. The first paragraph of this commitment summarizes its overall aim:


We embrace the values of openness and transparency in science. We believe that such research practices increase the informational value and impact of our research, as the data can be reanalyzed and synthesized in future studies. Furthermore, they increase the credibility of the results, as independent verification of the findings is possible.

Meta-heuristics in short scale construction

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). Meta-heuristics in short scale construction: Ant Colony Optimization and Genetic Algorithm. PLOS ONE, 11, e0167110. doi:10.1371/journal.pone.0167110

Abstract. The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise Confirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA-compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
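
To make the item-selection logic concrete, here is a deliberately simplified, hypothetical Python sketch of a genetic algorithm picking a six-item short scale from a simulated item pool. The fitness function (an equally weighted mix of Cronbach's alpha and criterion validity), the simulated data, and the mutation-only reproduction scheme are illustrative choices and do not reproduce the optimization functions or software used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated item pool: 30 items loading on one factor, plus an external criterion.
n_persons, n_items = 500, 30
loadings = rng.uniform(0.3, 0.8, n_items)
theta = rng.standard_normal(n_persons)
items = theta[:, None] * loadings \
    + rng.standard_normal((n_persons, n_items)) * np.sqrt(1 - loadings ** 2)
criterion = 0.5 * theta + rng.standard_normal(n_persons) * np.sqrt(0.75)

def alpha(x):
    """Cronbach's alpha of the item matrix x."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def fitness(subset):
    """Illustrative optimization function: balance reliability and validity."""
    x = items[:, subset]
    validity = np.corrcoef(x.sum(axis=1), criterion)[0, 1]
    return 0.5 * alpha(x) + 0.5 * validity

def mutate(subset, pool_size, rng):
    """Swap one selected item for a randomly chosen unselected item."""
    out = subset.copy()
    i = rng.integers(len(out))
    out[i] = rng.choice(np.setdiff1d(np.arange(pool_size), out))
    return out

# Simple evolutionary loop: keep the fittest half, refill with mutated copies.
# (A full GA would also recombine parents via crossover; omitted for brevity.)
k, pop_size, generations = 6, 40, 100
population = [rng.choice(n_items, size=k, replace=False) for _ in range(pop_size)]
for _ in range(generations):
    scores = np.array([fitness(s) for s in population])
    survivors = [population[i] for i in np.argsort(scores)[-pop_size // 2:]]
    population = survivors + [mutate(s, n_items, rng) for s in survivors]

best = max(population, key=fitness)
print("selected items:", np.sort(best), " fitness:", round(fitness(best), 3))
```

Replacing the random mutation step with pheromone-guided sampling of items (and updating the pheromone levels according to solution quality) would turn the same loop into a rudimentary Ant Colony Optimization.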