
Meta-analysis of proctored vs. unproctored assessment

Our meta-analysis comparing test scores from proctored and unproctored assessments – Steger, Schroeders, & Gnambs (2018) – is now available as an online-first publication and will be published in the European Journal of Psychological Assessment. In more detail, we examined mean score differences and correlations between both assessment contexts with a three-level random-effects meta-analysis based on 49 studies with 109 effect sizes. We think this is a timely topic, since web-based assessments are frequently compromised by a lack of control over the participants’ test-taking behavior, but researchers nevertheless need to compare data obtained under unproctored test conditions with data from controlled settings.
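To give a flavor of how effect sizes are pooled in a random-effects meta-analysis, here is a minimal sketch of the classical two-level inverse-variance approach with a DerSimonian–Laird estimate of the between-study variance. This is a simplified illustration only – not the three-level model used in the paper – and the effect sizes and sampling variances below are made up.

```javascript
// Two-level random-effects pooling (DerSimonian-Laird); simplified illustration.
function randomEffectsPool(effects, variances) {
  var k = effects.length;
  // fixed-effect (inverse-variance) weights and pooled estimate
  var w = variances.map(function (v) { return 1 / v; });
  var sumW = w.reduce(function (a, b) { return a + b; }, 0);
  var fixed = effects.reduce(function (a, d, i) { return a + w[i] * d; }, 0) / sumW;
  // heterogeneity statistic Q and DerSimonian-Laird estimate of tau^2
  var Q = effects.reduce(function (a, d, i) { return a + w[i] * (d - fixed) * (d - fixed); }, 0);
  var sumW2 = w.reduce(function (a, b) { return a + b * b; }, 0);
  var tau2 = Math.max(0, (Q - (k - 1)) / (sumW - sumW2 / sumW));
  // random-effects weights add tau^2 to each sampling variance
  var wStar = variances.map(function (v) { return 1 / (v + tau2); });
  var sumWStar = wStar.reduce(function (a, b) { return a + b; }, 0);
  var pooled = effects.reduce(function (a, d, i) { return a + wStar[i] * d; }, 0) / sumWStar;
  return { pooled: pooled, tau2: tau2, se: Math.sqrt(1 / sumWStar) };
}

// hypothetical standardized mean differences (proctored minus unproctored)
// and their sampling variances
var res = randomEffectsPool([0.10, 0.25, -0.05, 0.30], [0.02, 0.03, 0.01, 0.04]);
console.log(res.pooled, res.tau2, res.se);
```

A three-level model additionally allows effect sizes nested within the same study to correlate, which is why it is the appropriate choice for 109 effect sizes drawn from only 49 studies.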

Longitudinal Measurement Invariance Testing With Categorical Data

In a recent paper – Edossa, Schroeders, Weinert, & Artelt (2018) – we came across the issue of longitudinal measurement invariance testing with categorical data. There are good primers and textbooks on longitudinal measurement invariance testing with continuous data (e.g., Geiser, 2013). However, at the time of writing the manuscript, there was no published application of longitudinal measurement invariance testing with categorical data. In case you are interested in using such an invariance testing procedure, we have uploaded the R syntax for all measurement invariance steps.

Recalculating df in MGCFA testing

The JavaScript behind the calculator reads the model specification from the input fields and recomputes the degrees of freedom (df) for each invariance model:

    function recalculate_df() {
        // model specification entered by the user
        var nind   = parseInt(document.getElementById('num1').value); // indicators
        var nlat   = parseInt(document.getElementById('num2').value); // latent variables
        var ncross = parseInt(document.getElementById('num3').value); // cross-loadings
        var northo = parseInt(document.getElementById('num4').value); // orthogonal factors
        var nres   = parseInt(document.getElementById('num5').value); // residual covariances
        var ngroup = parseInt(document.getElementById('num6').value); // groups

        // output fields: df of the configural, metric, scalar, residual, and
        // strict models, and the four df differences between adjacent models
        var answer1 = document.getElementById('df_conf');
        var answer2 = document.getElementById('df_metr');
        var answer3 = document.getElementById('df_scal');
        var answer4 = document.getElementById('df_resi');
        var answer5 = document.getElementById('df_stri');
        var answer6 = document.getElementById('delta_1');
        var answer7 = document.getElementById('delta_2');
        var answer8 = document.getElementById('delta_3');
        var answer9 = document.getElementById('delta_4');

        // observed statistics per group: nind*(nind+1)/2 (co)variances plus nind means
        var obs = ((nind * (nind + 1) / 2) + nind) * ngroup;
        // freely estimated parameters per group in the configural model:
        // intercepts and residual variances (2*nind), loadings (nind + ncross),
        // correlations among the non-orthogonal factors, and residual covariances
        var est = ((2 * nind + (nind + ncross) + ((nlat - northo) * ((nlat - northo) - 1) / 2)) + nres) * ngroup;

        // df of the configural model
        answer1.innerHTML = obs - est;
        // NOTE: the snippet breaks off here in the source; the remaining outputs
        // (answer2 to answer9) follow analogously from the additional equality
        // constraints of the more restrictive models.
    }
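Outside the browser, the same bookkeeping can be written as a pure function. The sketch below reuses the formulas from the calculator for the configural model; the meaning of the inputs is inferred from the element ids (indicators, latent variables, cross-loadings, orthogonal factors, residual covariances, groups) and may not match the original calculator in every detail.

```javascript
// Degrees of freedom of the configural MGCFA model (pure-function sketch).
function configuralDf(nind, nlat, ncross, northo, nres, ngroup) {
  // observed statistics per group: nind*(nind+1)/2 (co)variances plus nind means
  var obs = ((nind * (nind + 1) / 2) + nind) * ngroup;
  // estimated parameters per group: intercepts and residual variances (2*nind),
  // loadings (nind + ncross), correlations among non-orthogonal factors,
  // and nres residual covariances
  var est = ((2 * nind + (nind + ncross) + ((nlat - northo) * ((nlat - northo) - 1) / 2)) + nres) * ngroup;
  return obs - est;
}

// e.g., 10 indicators, 2 correlated factors, no cross-loadings,
// no residual covariances, 2 groups
console.log(configuralDf(10, 2, 0, 0, 0, 2)); // → 68
```

For this example each group contributes 65 observed statistics and 31 free parameters, so the configural model has (65 − 31) × 2 = 68 degrees of freedom.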

The Rosenberg Self-Esteem Scale – a Drosophila melanogaster of psychological assessment

I had the great opportunity to co-author two recent publications of Timo Gnambs, both dealing with the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965). As a reminder, the RSES is a popular ten-item self-report instrument measuring a respondent’s global self-worth and self-respect. Basically, however, both papers are not about the RSES per se; rather, they are applications of two recently introduced, powerful and flexible extensions of the Structural Equation Modeling (SEM) framework: Meta-Analytic Structural Equation Modeling (MASEM) and Local Weighted Structural Equation Modeling (LSEM), which will be described in more detail later on.
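To give a first flavor of the LSEM idea: instead of splitting the sample into discrete groups along a moderator such as age, the model is estimated repeatedly at focal points of the continuous moderator, with each observation weighted by its distance to the focal point. The following is a minimal sketch of such Gaussian kernel weights – a conceptual illustration only, not the exact weighting scheme used in the paper; the ages and bandwidth are made up.

```javascript
// Gaussian kernel weights around a focal point of a continuous moderator,
// as used conceptually in local structural equation modeling (LSEM).
function lsemWeights(moderator, focalPoint, bandwidth) {
  return moderator.map(function (m) {
    var z = (m - focalPoint) / bandwidth;
    // observations near the focal point get weight close to 1,
    // distant observations are downweighted smoothly
    return Math.exp(-0.5 * z * z);
  });
}

// e.g., respondents' ages, weighted around a focal age of 30 (bandwidth 5)
var w = lsemWeights([20, 25, 30, 35, 40], 30, 5);
console.log(w.map(function (x) { return x.toFixed(2); }));
// → [ '0.14', '0.61', '1.00', '0.61', '0.14' ]
```

Fitting the SEM with these case weights at many focal points yields parameter estimates as smooth functions of the moderator, rather than a handful of group-wise estimates.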