Tagged "assessment"

Research

Technology-Based Assessment

In recent decades, the digitalization of educational content, the integration of computers into different educational settings, and the opportunity to connect knowledge and people via the Internet have led to fundamental changes in the way we gather, process, and evaluate information. Moreover, more and more tablet PCs and notebooks are used in schools, and compared with traditional sources of information such as textbooks, the Internet seems more appealing, versatile, and accessible.

Bee Swarm Optimization (BSO)

“Bees are amazing, little creatures” (Richardson, 2017) – I agree. Bees have fascinated people since time immemorial, and yet even today there are still novel and surprising discoveries (see the PLOS collection for some mind-boggling facts). Although bees might seem like the prime example of state-building insects, highly social forms of community are the exception among them. The large majority of all bee species are solitary bees or cuckoo bees that do not form insect states.

Tests-Questionnaires

A 120-item gc test

This is a 120-item measure of crystallized intelligence (gc), more precisely, of declarative knowledge. Based on previous findings concerning the dimensionality of gc (Steger et al., 2019), we sampled items from four broad knowledge areas: humanities, life sciences, natural sciences, and social sciences. Each knowledge area contained three domains with ten items each, resulting in a total of 120 items. Items were selected to have a wide range of difficulty and to cover the content domain both broadly and deeply.
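Purely to make the sampling scheme explicit, here is a tiny Python sketch of the item blueprint (4 areas × 3 domains × 10 items = 120). The domain labels and item IDs are placeholders, since this excerpt names only the four broad knowledge areas.

```python
# Illustrative blueprint: 4 knowledge areas x 3 domains x 10 items = 120 items.
AREAS = ["humanities", "life_sciences", "natural_sciences", "social_sciences"]
DOMAINS_PER_AREA = 3
ITEMS_PER_DOMAIN = 10

blueprint = {
    area: {
        f"domain_{d + 1}": [f"{area}_{d + 1}_{i + 1:02d}" for i in range(ITEMS_PER_DOMAIN)]
        for d in range(DOMAINS_PER_AREA)
    }
    for area in AREAS
}

# Sanity check: the blueprint contains exactly 120 item slots.
n_items = sum(len(items) for domains in blueprint.values() for items in domains.values())
assert n_items == len(AREAS) * DOMAINS_PER_AREA * ITEMS_PER_DOMAIN == 120
print(n_items)  # 120
```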

New methods and assessment approaches in intelligence research

Maybe you have seen my recent tweet:

Please share this call and contribute to a new Special Issue on "New Methods and Assessment Approaches in Intelligence Research" in the @Jintelligence1, which we are guest-editing together with Hülür, @HildePsych, and @pdoebler. More information: https://t.co/PevdPeyRgm pic.twitter.com/Y6hRllQa8m (Ulrich Schroeders, @Navajoc0d3, November 11, 2018)

And this is the complete Call for the Special Issue in the Journal of Intelligence:

Dear Colleagues,

Our understanding of intelligence has been, and still is, significantly influenced by the development and application of new computational and statistical methods, as well as novel testing procedures.

Meta-analysis proctored vs. unproctored assessment

Our meta-analysis (Steger, Schroeders, & Gnambs, 2018) comparing test scores from proctored vs. unproctored assessment is now available as an online-first publication and will later appear in print in the European Journal of Psychological Assessment. In more detail, we examined mean score differences and correlations between both assessment contexts with a three-level random-effects meta-analysis based on 49 studies with 109 effect sizes. We think this is a timely topic, since web-based assessments are frequently compromised by a lack of control over the participants' test-taking behavior, yet researchers nevertheless need to compare data obtained under unproctored test conditions with data from controlled settings.
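For readers unfamiliar with the approach: a three-level random-effects meta-analysis accounts for the fact that several effect sizes (level 2) are nested within studies (level 3). A generic sketch of such a model, in my own notation rather than necessarily the exact specification used in the paper:

$$
y_{ij} = \mu + u_{j} + v_{ij} + e_{ij}, \qquad
u_{j} \sim N(0, \tau^{2}_{\text{between}}), \quad
v_{ij} \sim N(0, \tau^{2}_{\text{within}}), \quad
e_{ij} \sim N(0, \sigma^{2}_{ij}),
$$

where $y_{ij}$ is the $i$-th effect size in study $j$, $\mu$ the average effect across all studies, $u_{j}$ the between-study heterogeneity, $v_{ij}$ the within-study heterogeneity among effect sizes, and $\sigma^{2}_{ij}$ the known sampling variance of each effect size.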

Recalculating df in MGCFA testing

Please cite as follows: Schroeders, U., & Gnambs, T. (2020). Degrees of freedom in multigroup confirmatory factor analyses: Are models of measurement invariance testing correctly specified? European Journal of Psychological Assessment, 36(1), 105–113. https://doi.org/10.1027/1015-5759/a000500

Model characteristics:
A. Number of indicators
B. Number of factors
C. Number of cross-loadings
D. Number of ortho. factors
E. Number of resid. covar.
F. Number of groups

MI testing   constraints                        df   comparison        delta(df)
config.      (item:factor)                       0   -                 -
metric       (loadings)                          2   metric-config     2
scalar       (loadings+intercepts)               4   scalar-metric     2
residual     (loadings+residuals)                5   residual-metric   3
strict       (loadings+intercepts+residuals)     7   strict-scalar     3

Additional information: A indicates the number of indicators or items.
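To make the bookkeeping behind such numbers concrete, here is a minimal Python sketch (my own illustration, not the calculator from the post or the paper's formulas), assuming simple structure with a mean structure, marker-variable identification, and none of the optional cross-loadings, orthogonal factors, or residual covariances. With 3 indicators, 1 factor, and 2 groups it reproduces the df values shown in the table above.

```python
def mgcfa_df(p, m, G):
    """df for common measurement invariance models in a multigroup CFA
    (simple structure, mean structure, marker-variable identification)."""
    data = G * (p * (p + 1) // 2 + p)        # (co)variances + means per group

    def free_params(eq_loadings, eq_intercepts, eq_residuals):
        loadings   = (p - m) if eq_loadings else G * (p - m)
        intercepts = p if eq_intercepts else G * p
        residuals  = p if eq_residuals else G * p
        fac_var    = G * m                   # factor variances, free per group
        fac_cov    = G * m * (m - 1) // 2    # factor covariances, free per group
        # latent means are only identified once intercepts are invariant
        fac_means  = (G - 1) * m if eq_intercepts else 0
        return loadings + intercepts + residuals + fac_var + fac_cov + fac_means

    return {
        "configural": data - free_params(False, False, False),
        "metric":     data - free_params(True,  False, False),
        "scalar":     data - free_params(True,  True,  False),
        "residual":   data - free_params(True,  False, True),
        "strict":     data - free_params(True,  True,  True),
    }

# {'configural': 0, 'metric': 2, 'scalar': 4, 'residual': 5, 'strict': 7}
print(mgcfa_df(p=3, m=1, G=2))
```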

The Rosenberg Self-Esteem Scale - A Drosophila melanogaster of psychological assessment

I had the great chance to co-author two recent publications by Timo Gnambs, both dealing with the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965). As a reminder, the RSES is a popular ten-item self-report instrument measuring a respondent’s global self-worth and self-respect. Basically, though, both papers are not about the RSES per se; rather, they are applications of two recently introduced, powerful, and flexible extensions of the Structural Equation Modeling (SEM) framework: Meta-Analytic Structural Equation Modeling (MASEM) and Locally Weighted Structural Equation Modeling (LSEM), which will be described in more detail later on.
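As a quick preview of the LSEM idea: the same SEM is re-estimated at a series of focal points of a continuous moderator (e.g., age), with each observation weighted by its kernel distance to the focal point. Below is a minimal Python sketch of that weighting step only, assuming a Gaussian kernel; the function name, bandwidth, and example values are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def lsem_weights(moderator, focal_point, bandwidth):
    """Gaussian kernel weights for one focal point of the moderator.
    In LSEM the SEM is re-fitted at every focal point using such
    observation weights; the bandwidth here is a placeholder choice."""
    z = (moderator - focal_point) / bandwidth
    return np.exp(-0.5 * z**2)

# Illustrative example: weights for a focal age of 30 with bandwidth 2
ages = np.array([25.0, 28.0, 30.0, 33.0, 40.0])
print(np.round(lsem_weights(ages, focal_point=30.0, bandwidth=2.0), 3))
```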

Equivalence of screen versus print reading comprehension depends on task complexity and proficiency

Reference: Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. https://doi.org/10.1080/0163853X.2017.1319653

Abstract: As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at word, sentence, and text level with separate speeded subtests.