
Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage

Reference. Moehring, A., Schroeders, U., Leichtmann, B., & Wilhelm, O. (2016). Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage. Intelligence, 59, 170–180. https://doi.org/10.1016/j.intell.2016.10.003

Abstract. The ability to comprehend new information is closely related to the successful acquisition of new knowledge. With the ubiquitous availability of the Internet, the procurement of information online constitutes a key aspect in education, work, and our leisure time. In order to investigate individual differences in digital literacy, test takers were presented with health-related comprehension problems with task-specific time restrictions. Instead of reading a given text, they were instructed to search the Internet for the information required to answer the questions. We investigated the relationship between this newly developed test and fluid and crystallized intelligence, while controlling for computer usage, in two studies with adults (n1 = 120) and vocational high school students (n2 = 171). Structural equation modeling was used to investigate the amount of unique variance explained by each predictor. In both studies, about 85% of the variance in the digital literacy factor could be explained by reasoning and knowledge while computer usage did not add to the variance explained. In Study 2, prior health-related knowledge was included as a predictor instead of general knowledge. While the influence of fluid intelligence remained significant, prior knowledge strongly influenced digital literacy (β=.81). Together both predictor variables explained digital literacy exhaustively. These findings are in line with the view that knowledge is a major determinant of higher-level cognition. Further implications about the influence of the restrictiveness of the testing environment are discussed.

Do the smart get smarter? Development of fluid and crystallized intelligence in 3rd grade

Reference. Schroeders, U., Schipolowski, S., Zettler, I., Golle, J., & Wilhelm, O. (2016). Do the smart get smarter? Development of fluid and crystallized intelligence in 3rd grade. Intelligence, 59, 84–95. https://doi.org/10.1016/j.intell.2016.08.003

Abstract. There are conflicting theoretical assumptions about the development of general cognitive abilities in childhood: On the one hand, a higher initial level of abilities has been suggested to facilitate ability improvement, for example, prior knowledge fosters the acquisition of new knowledge (Matthew effect). On the other hand, it has been argued that school education with its special focus on promoting less able students results in a compensation effect. A third hypothesis is that the development of cognitive abilities is—as an outcome of the opposing effects—overall independent of the initial state. In this study, 1,102 elementary students in 3rd Grade worked on two versions of the Berlin Test of Fluid and Crystallized Intelligence at two time points with an interval of five months. Besides the question of how initial state and growth are related (Matthew vs. compensation effect), we considered performance gains in fluid intelligence (gf) and crystallized intelligence (gc) as well as cross-lagged effects in a bivariate latent change score model. For both gf and gc there was a strong compensation effect. Mean change was more pronounced in gf than in gc. We considered student characteristics (interest and self-concept), family background (socio-economic status, parental education), and classroom characteristics (teaching styles) in a series of prediction models to explain these changes in gf and gc. Although several predictors were included, only a few made a significant contribution. Several methodological and content-related reasons are discussed to account for the unexpectedly negligible effects found for most of the covariates.

The influence of item sampling on sex differences in knowledge tests

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). The influence of item sampling on sex differences in knowledge tests. Intelligence, 58, 22–32. https://doi.org/10.1016/j.intell.2016.06.003

Abstract. Few topics in psychology have generated as much controversy as sex differences in intelligence. For fluid intelligence, researchers emphasize the high overlap between the ability distributions of males and females, whereas research on sex differences in declarative knowledge often uncovers a male advantage. However, on the level of knowledge domains, a more nuanced picture emerged: while females perform better in health-related topics (e.g., aging, medicine), males outperform females in domains of natural sciences (e.g., engineering, physics). In this paper we show that sex differences vary substantially depending on item sampling. Analyses were based on a sample of n = 3,306 German high-school students (Grades 9 and 10) who worked on the 64 declarative knowledge items of the Berlin Test of Fluid and Crystallized Intelligence (BEFKI) assessing knowledge within three broad content domains (science, humanities, social studies). Using two strategies of item sampling — stepwise confirmatory factor analysis and an ant colony optimization algorithm — we deliberately manipulated sex differences in multi-group structural equation models. Results show that sex differences vary considerably depending on the indicators drawn from the item pool. Furthermore, ant colony optimization outperforms the simple stepwise selection strategy since it can optimize several criteria simultaneously (model fit, reliability, and preset sex differences). Taken together, studies reporting sex differences in declarative knowledge fail to acknowledge item sampling issues. More generally, handling item sampling hinges on careful consideration of the content of the measures.
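To make the ant colony optimization strategy more concrete, here is a minimal, simplified sketch in Python: ants repeatedly sample item subsets with probabilities biased by pheromone trails, the best subset reinforces the trails, and old trails evaporate. This toy version optimizes only a single criterion (internal consistency on simulated binary data), not the multi-criterion variant described in the paper; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary responses: 500 test takers x 64 items (1PL-style data)
n_persons, n_items, n_select = 500, 64, 16
ability = rng.normal(size=(n_persons, 1))
difficulty = rng.normal(size=(1, n_items))
responses = (rng.random((n_persons, n_items)) <
             1 / (1 + np.exp(-(ability - difficulty)))).astype(float)

def cronbach_alpha(data):
    """Internal consistency of an item subset (equals KR-20 for binary items)."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Ant colony optimization: pheromone levels bias item sampling toward
# subsets that scored well in earlier iterations.
pheromone = np.ones(n_items)
evaporation, n_ants, n_iter = 0.9, 30, 50
best_subset, best_alpha = None, -np.inf

for _ in range(n_iter):
    for _ in range(n_ants):
        probs = pheromone / pheromone.sum()
        subset = rng.choice(n_items, size=n_select, replace=False, p=probs)
        alpha = cronbach_alpha(responses[:, subset])
        if alpha > best_alpha:
            best_subset, best_alpha = subset, alpha
    pheromone *= evaporation              # evaporate old trails
    pheromone[best_subset] += best_alpha  # reinforce the best subset found

print(sorted(best_subset.tolist()), round(best_alpha, 3))
```

In the actual application, the fitness function would combine several targets (model fit, reliability, preset sex differences) instead of a single reliability coefficient.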

Comment. I presented the central findings of this paper at the 50th Conference of the German Society for Psychology in a symposium on "Current methods in intelligence assessment: invariance, item sampling and scoring", which I organized together with Philipp Doebler. For everyone who is interested but didn't make it to the session, here are the slides.
itemsampling_dgps16

Pitfalls and challenges in constructing short forms of cognitive ability measures

Reference. Schipolowski, S., Schroeders, U., & Wilhelm, O. (2014). Pitfalls and challenges in constructing short forms of cognitive ability measures. Journal of Individual Differences, 35, 190–200. https://doi.org/10.1027/1614-0001/a000134

Abstract. Especially in survey research and large-scale assessment there is a growing interest in short scales for the cost-efficient measurement of psychological constructs. However, only relatively few standardized short forms are available for the measurement of cognitive abilities. In this article we point out pitfalls and challenges typically encountered in the construction of cognitive short forms. First we discuss item selection strategies, the analysis of binary response data, the problem of floor and ceiling effects, and issues related to measurement precision and validity. We subsequently illustrate these challenges and how to deal with them based on an empirical example, the development of short forms for the measurement of crystallized intelligence. Scale shortening had only small effects on associations with covariates. Even for an ultra-short six-item scale, a unidimensional measurement model showed excellent fit and yielded acceptable reliability. However, measurement precision on the individual level was very low and the short forms were more likely to produce skewed score distributions in ability-restricted subpopulations. We conclude that short scales may serve as proxies for cognitive abilities in typical research settings, but their use for decisions on the individual level should be discouraged in most cases.
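The trade-off between test length and reliability discussed here is classically approximated by the Spearman–Brown prophecy formula. A minimal sketch (the numbers are purely illustrative, not figures from the article):

```python
def spearman_brown(reliability, old_length, new_length):
    """Predicted reliability after changing test length by factor k = new/old."""
    k = new_length / old_length
    return k * reliability / (1 + (k - 1) * reliability)

# Illustrative: shortening a 24-item scale with reliability .90 to 6 items
print(round(spearman_brown(0.90, 24, 6), 3))  # → 0.692
```

The formula assumes the dropped items are parallel to the retained ones; as the article notes, actual short forms can deviate from this prediction, for example through floor and ceiling effects in ability-restricted subpopulations.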

Age-related changes in the mean and covariance structure of fluid and crystallized intelligence in childhood and adolescence

Reference. Schroeders, U., Schipolowski, S., & Wilhelm, O. (2015). Age-related changes in the mean and covariance structure of fluid and crystallized intelligence in childhood and adolescence. Intelligence, 48, 15–29. https://doi.org/10.1016/j.intell.2014.10.006

Abstract. Evidence on age-related differentiation in the structure of cognitive abilities in childhood and adolescence is still inconclusive. Previous studies often focused on the interrelations or the g-saturation of broad ability constructs, neglecting abilities on lower strata. In contrast, we investigated differentiation in the internal structure of fluid intelligence/gf (with verbal, numeric, and figural reasoning) and crystallized intelligence/gc (with knowledge in the natural sciences, humanities, and social studies). To better understand the development of reasoning and knowledge during secondary education, we analyzed data from 11,756 students attending Grades 5 to 12. Changes in both the mean structure and the covariance structure were estimated with locally-weighted structural equation models that allow handling age as a continuous context variable. To substantiate a potential influence of school tracking (i.e., different learning environments), analyses were additionally conducted separately by school track (academic vs. nonacademic). Mean changes in gf and gc were approximately linear in the total sample, with a steeper slope for the latter. There was little indication of age-related differentiation for the different reasoning facets and knowledge domains. The results suggest that the relatively homogeneous scholastic learning environment in secondary education prevents the development of more pronounced ability or knowledge profiles.
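The core idea behind local weighting can be illustrated with a kernel function: the model is refit at a series of focal ages, and each observation contributes with a weight that decays with its distance from the focal age. A minimal sketch using a Gaussian kernel (the kernel choice and bandwidth here are illustrative assumptions, not necessarily the specification used in the paper):

```python
import numpy as np

def kernel_weights(ages, focal_age, bandwidth):
    """Gaussian kernel weights: observations near the focal age count most;
    more distant observations are smoothly down-weighted."""
    z = (ages - focal_age) / bandwidth
    return np.exp(-0.5 * z ** 2)

ages = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
w = kernel_weights(ages, focal_age=12.0, bandwidth=1.0)
print(np.round(w, 3))  # weights peak at age 12 and fall off symmetrically
```

Sliding the focal age across the observed range yields a sequence of locally fitted models, which is what allows age to be treated as a continuous context variable rather than being binned into discrete grade groups.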