Tagged "assessment"

Tests-Questionnaires

120-item gc test

This is a 120-item measure of crystallized intelligence (gc) or, more precisely, of declarative knowledge. Based on previous findings on the dimensionality of gc (Steger et al., 2019), we sampled items from four broad knowledge areas: humanities, life sciences, natural sciences, and social sciences. Each knowledge area comprised three domains with ten items each, resulting in a total of 120 items. Items were selected to span a wide range of difficulty and to cover the content domain both broadly and deeply.
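In code form, the sampling scheme corresponds to a simple nested structure (a minimal sketch; the domain labels below are invented placeholders, not the actual domains of the test):

// Hypothetical blueprint: 4 knowledge areas x 3 domains x 10 items = 120 items.
// The domain names are placeholders, not the test's actual domains.
var blueprint = {
    'humanities':       ['domain A', 'domain B', 'domain C'],
    'life sciences':    ['domain D', 'domain E', 'domain F'],
    'natural sciences': ['domain G', 'domain H', 'domain I'],
    'social sciences':  ['domain J', 'domain K', 'domain L']
};
var itemsPerDomain = 10;
var totalItems = Object.keys(blueprint).reduce(function (n, area) {
    return n + blueprint[area].length * itemsPerDomain;
}, 0);
console.log(totalItems); // 120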

Research

Intelligence Research

Our understanding of intelligence has been, and still is, significantly influenced by the development and application of new testing procedures as well as novel computational and statistical methods. In science, methodological developments typically follow new theoretical ideas. In intelligence research, however, great breakthroughs have often come in the reverse order. For instance, the once-novel factor-analytic tools preceded and facilitated new theoretical ideas such as the theory of multiple group factors of intelligence.

New Methods and Assessment Approaches in Intelligence Research

Maybe you have seen my recent tweet:

Please share this call and contribute to a new Special Issue on "New Methods and Assessment Approaches in Intelligence Research" in the @Jintelligence1, which we are guest-editing together with Hülür, @HildePsych, and @pdoebler. More information: https://t.co/PevdPeyRgm (Ulrich Schroeders, @Navajoc0d3, November 11, 2018)

And this is the complete Call for the Special Issue in the Journal of Intelligence:

Dear Colleagues,

Our understanding of intelligence has been, and still is, significantly influenced by the development and application of new computational and statistical methods, as well as novel testing procedures.

Meta-analysis of proctored vs. unproctored assessment

Our meta-analysis (Steger, Schroeders, & Gnambs, 2018) comparing test scores from proctored and unproctored assessment is now available as an online-first publication and will appear in the European Journal of Psychological Assessment. In more detail, we examined mean score differences and correlations between both assessment contexts with a three-level random-effects meta-analysis based on 49 studies with 109 effect sizes. We think this is a timely topic: web-based assessments are frequently compromised by a lack of control over the participants' test-taking behavior, yet researchers nevertheless need to compare data obtained under unproctored test conditions with data from controlled settings.
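For readers unfamiliar with the approach, a generic sketch of the standard three-level random-effects model (textbook notation, not taken from the paper) decomposes the i-th effect size in study j as

y_{ij} = \mu + u_j + v_{ij} + e_{ij}, \qquad u_j \sim N(0, \tau^2_{\text{between}}), \quad v_{ij} \sim N(0, \tau^2_{\text{within}}), \quad e_{ij} \sim N(0, \sigma^2_{ij}),

which separates heterogeneity between studies (\tau^2_{\text{between}}) from heterogeneity among effect sizes within the same study (\tau^2_{\text{within}}), with \sigma^2_{ij} the known sampling variance of each effect size. This nesting is what allows all 109 effect sizes from the 49 studies to enter the analysis without treating them as independent.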

Recalculating df in MGCFA testing

function recalculate_df() {
    // Model specification, read from the input fields
    var nind   = parseInt(document.getElementById('num1').value); // observed indicators
    var nlat   = parseInt(document.getElementById('num2').value); // latent factors
    var ncross = parseInt(document.getElementById('num3').value); // cross-loadings
    var northo = parseInt(document.getElementById('num4').value); // orthogonal factors
    var nres   = parseInt(document.getElementById('num5').value); // residual covariances
    var ngroup = parseInt(document.getElementById('num6').value); // groups

    // Output fields: df of each invariance level and the df differences
    var answer1 = document.getElementById('df_conf');
    var answer2 = document.getElementById('df_metr');
    var answer3 = document.getElementById('df_scal');
    var answer4 = document.getElementById('df_resi');
    var answer5 = document.getElementById('df_stri');
    var answer6 = document.getElementById('delta_1');
    var answer7 = document.getElementById('delta_2');
    var answer8 = document.getElementById('delta_3');
    var answer9 = document.getElementById('delta_4');

    // Observed statistics: variances and covariances plus means, per group
    var obs = ((nind * (nind + 1) / 2) + nind) * ngroup;
    // Parameters of the configural model per group: intercepts and residual
    // variances (2*nind), loadings incl. cross-loadings (nind + ncross),
    // covariances among the correlated factors, and residual covariances
    var est = (2 * nind + (nind + ncross)
        + ((nlat - northo) * ((nlat - northo) - 1) / 2) + nres) * ngroup;
    var df_conf = obs - est;

    // df gained per invariance step, assuming the usual identification
    // (latent variances and means fixed in the first group and freed in the
    // other groups once loadings and intercepts are constrained equal)
    var delta1 = (nind + ncross - nlat) * (ngroup - 1); // metric: equal loadings
    var delta2 = (nind - nlat) * (ngroup - 1);          // scalar: equal intercepts
    var delta3 = nind * (ngroup - 1);                   // residual: equal residual variances
    var delta4 = nres * (ngroup - 1);                   // strict: equal residual covariances

    answer1.textContent = df_conf;
    answer2.textContent = df_conf + delta1;
    answer3.textContent = df_conf + delta1 + delta2;
    answer4.textContent = df_conf + delta1 + delta2 + delta3;
    answer5.textContent = df_conf + delta1 + delta2 + delta3 + delta4;
    answer6.textContent = delta1;
    answer7.textContent = delta2;
    answer8.textContent = delta3;
    answer9.textContent = delta4;
}
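As a quick plausibility check of the formulas for multi-group confirmatory factor analysis (a worked example under the identification convention noted in the code comments, i.e., latent variances and means freed once loadings and intercepts are constrained equal): for 9 indicators, 3 correlated factors, no cross-loadings, no residual covariances, and 2 groups, obs = (45 + 9) * 2 = 108 and est = (18 + 9 + 3) * 2 = 60, so the configural model has df = 48; the metric, scalar, residual, and strict steps then add 6, 6, 9, and 0 df, yielding 54, 60, 69, and 69.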

The Rosenberg Self-Esteem Scale - A Drosophila melanogaster of psychological assessment

I had the great chance to co-author two recent publications by Timo Gnambs, both dealing with the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965). As a reminder, the RSES is a popular ten-item self-report instrument measuring a respondent's global self-worth and self-respect. Basically, though, neither paper is about the RSES per se; rather, both are applications of two recently introduced, powerful and flexible extensions of the Structural Equation Modeling (SEM) framework: Meta-Analytic Structural Equation Modeling (MASEM) and Local Weighted Structural Equation Modeling (LSEM), which will be described in more detail later on.
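As a preview of the LSEM part (a generic sketch of the standard approach, not the exact settings of the paper): LSEM re-estimates the model at focal points of a continuous moderator such as age, weighting each observation by its distance to the focal point, typically with a Gaussian kernel,

w_i(a_0) = \phi\!\left(\frac{a_i - a_0}{h}\right),

where a_i is person i's moderator value, a_0 the focal point, h a bandwidth parameter, and \phi the standard normal density. Model parameters can thus be traced as smooth functions of the moderator instead of being compared across a few discrete groups. MASEM, in turn, proceeds in two stages: correlation matrices are first pooled meta-analytically across studies, and the structural equation model is then fitted to the pooled matrix.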

Equivalence of screen versus print reading comprehension depends on task complexity and proficiency

Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. https://doi.org/10.1080/0163853X.2017.1319653

Abstract. As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes at different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at the word, sentence, and text level with separate speeded subtests.