Tagged "vocabulary"

Tests-Questionnaires

NOVA - Next-Generation Open Vocabulary Assessment

NOVA (Next-Generation Open Vocabulary Assessment) comprises two openly available, parallel vocabulary tests designed to measure the receptive vocabulary of German-speaking adults. Given the scarcity of modern, non-proprietary instruments, NOVA was developed to fill this gap, using Ant Colony Optimization to ensure high reliability, appropriate item difficulty and discrimination, and close parallelism across forms. The tests showed high conditional reliability in the lower ability range, making them well suited for individual assessment in neuropsychological contexts, and correlated strongly with a test of declarative knowledge. The test development, the construction rationale, and the psychometric properties are described in detail in Schroeders & Achaa-Amankwaa (2025). The norms are based on a large, heterogeneous sample of adults (N = 1,052). A Shiny app is available for scoring, allowing users to compute IRT-based norm scores and percentile ranks from individual response patterns. The items are available in the OSF project.
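The Shiny app itself is not reproduced here; as a rough illustration of how an IRT-based score can be derived from an individual response pattern, the following is a minimal sketch of EAP ability estimation under a 2PL model. The item parameters, prior, and quadrature settings are made-up placeholders, not the published NOVA calibration.

```python
import numpy as np

# Hypothetical 2PL item parameters (discrimination a, difficulty b);
# NOT the published NOVA item parameters.
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.7, 1.2])

def eap_score(responses, a, b, n_quad=61):
    """EAP ability estimate for a 0/1 response pattern under a 2PL model
    with a standard-normal prior, using simple rectangular quadrature."""
    theta = np.linspace(-4, 4, n_quad)                  # quadrature nodes
    prior = np.exp(-0.5 * theta**2)                     # N(0, 1) prior (unnormalized)
    # P(correct | theta) for each item at each node
    p = 1 / (1 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    r = np.asarray(responses)[:, None]
    lik = np.prod(p**r * (1 - p)**(1 - r), axis=0)      # likelihood of the pattern
    post = lik * prior
    post /= post.sum()                                  # normalize posterior
    return float(np.sum(theta * post))                  # posterior mean = EAP estimate

theta_hat = eap_score([1, 1, 0, 1], a, b)
print(round(theta_hat, 2))
```

A norm score or percentile rank would then be obtained by locating such an ability estimate in the distribution of the norm sample, which is presumably what the app does with the actual calibrated item parameters.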

Meta-heuristics in short scale construction

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). Meta-heuristics in short scale construction: Ant Colony Optimization and Genetic Algorithm. PLOS ONE, 11, e0167110. doi:10.1371/journal.pone.0167110

Abstract. The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise Confirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA-compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
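The paper uses a tailored, user-defined optimization function that balances several psychometric criteria at once; the sketch below is only a minimal, self-contained illustration of the general ACO selection loop, using Cronbach's alpha on simulated data as a placeholder criterion. All names, settings, and the pheromone-update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def cronbach_alpha(data):
    """Cronbach's alpha for a persons-by-items data matrix."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def aco_select(data, k_short, n_ants=30, n_iter=50, evap=0.95):
    """Select k_short items from the full pool via a simple (elitist)
    Ant Colony Optimization, with alpha as a stand-in criterion."""
    n_items = data.shape[1]
    pheromone = np.ones(n_items)            # selection weight per item
    best_items, best_fit = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            prob = pheromone / pheromone.sum()
            items = rng.choice(n_items, size=k_short, replace=False, p=prob)
            fit = cronbach_alpha(data[:, items])
            if fit > best_fit:
                best_items, best_fit = items, fit
        pheromone *= evap                    # pheromone evaporation
        pheromone[best_items] += best_fit    # reinforce the best subset so far
    return np.sort(best_items), best_fit

# Simulated unidimensional item pool: 500 persons, 20 items
theta = rng.normal(size=(500, 1))
data = theta + rng.normal(scale=1.0, size=(500, 20))
items, fit = aco_select(data, k_short=6)
print(items, round(fit, 3))
```

In practice the criterion would be replaced by the kind of composite function described in the abstract (e.g., combining model fit, reliability, and test-criterion relations), which is what distinguishes the tailored ACO approach from an out-of-the-box cost function.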