Recent posts



Equivalence of screen versus print reading comprehension depends on task complexity and proficiency

Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. doi: 10.1080/0163853X.2017.1319653

Abstract. As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at word, sentence, and text level with separate speeded subtests. Children from grades 1 to 6 completed either a test version on paper or via computer under time constraints. In general, children in the screen condition worked faster but at the expense of accuracy. This difference was more pronounced for younger children and at the word level. Based on our results, we suggest that remedial education and interventions for younger children using computer-based approaches should likewise foster speed and accuracy in a balanced way.


Commitment to Research Transparency and Open Science

I signed the Commitment to Research Transparency and Open Science, which was initially drafted by Felix Schönbrodt, Markus Maier, Moritz Heene, and Michael Zehetleitner from the LMU Munich. The first paragraph of this commitment summarizes the overall aim:

„We embrace the values of openness and transparency in science. We believe that such research practices increase the informational value and impact of our research, as the data can be reanalyzed and synthesized in future studies. Furthermore, they increase the credibility of the results, as independent verification of the findings is possible.“

Of course, this has multiple consequences for all parts of my work:

  • For my own empirical research, this includes 1. providing Open Data, 2. providing reproducible scripts, and 3. adhering to the 21-word solution (Simmons, Nelson, & Simonsohn, 2011, 2012) to prevent false-positive psychology: „We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study“. 4. I also try to convince the first author(s) of any co-authored publication to act accordingly.
  • As a reviewer, I will ask the authors to add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes, which is also known as the Standard Reviewer Statement for Disclosure of Sample, Conditions, Measures, and Exclusions. I am really interested to see how this will turn out.
  • Commitments for the supervision of dissertations are substantial: 1. I will teach and emphasize „methods that enhance the informational value and the replicability of studies“, 2. Open Data, Open Materials, and reproducible scripts given to me as supervisor, 3. potential publications are expected to follow the commitments mentioned above, 4. in case of experimental research in a confirmatory manner, at least one pre-registered study has to be conducted „with a justifiable a priori power analysis (in the frequentist case), or a strong evidence threshold (e.g., if a sequential Bayes factor design is implemented)“, 5. grading is independent of successful publication or statistical significance.
  • Finally, as member of any committee or editorial board, I will promote the values of open science.

tl;dr
I embrace the current methodological revolution in psychology/science and signed the Commitment to Research Transparency and Open Science.

References

  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. doi: 10.1177/0956797611417632
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 word solution. Retrieved from http://ssrn.com/abstract=2160588 or http://dx.doi.org/10.2139/ssrn.2160588
  • The „Commitment to Research Transparency“ logo is licensed by Tobias Kächele, Lena Schiestel and Felix Schönbrodt under a Creative Commons Attribution 4.0 International License.


Meta-heuristics in short scale construction

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). Meta-heuristics in short scale construction: Ant Colony Optimization and Genetic Algorithm. PLoS ONE, 11, e0167110. doi: 10.1371/journal.pone.0167110

Abstract. The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA-compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.

Comment. This is my first Open Access publication, funded by the University of Bamberg. With respect to Open Materials, all syntax used is published in my GitHub repository. Finally, this paper uses data from the National Educational Panel Study (NEPS): Starting Cohort 4–9th Grade, doi: 10.5157/NEPS:SC4:4.0.0, that is, Open Data. Thus, a hat trick: Open Access, Open Materials, and Open Data.
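To give a rough idea of how ACO-based item selection works, here is a minimal sketch (not the implementation from the paper; the pool size, parameters, and sum-of-loadings score below are purely hypothetical stand-ins): ants repeatedly draw k-item subsets with probabilities proportional to per-item pheromone levels, and items belonging to well-scoring subsets accumulate pheromone, biasing later draws.

```python
import random

def aco_select(n_items, k, score, n_ants=20, n_iter=40, rho=0.1, seed=1):
    """Toy Ant Colony Optimization for short-scale item selection.

    Each ant samples a k-item subset with probabilities proportional to
    per-item pheromone; subsets are scored (higher = better, assumed
    nonnegative), and items of the iteration's best subset receive
    additional pheromone after evaporation.
    """
    rng = random.Random(seed)
    pheromone = [1.0] * n_items
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        solutions = []
        for _ in range(n_ants):
            items = set()
            while len(items) < k:
                pool = [i for i in range(n_items) if i not in items]
                weights = [pheromone[i] for i in pool]
                items.add(rng.choices(pool, weights=weights)[0])
            s = score(frozenset(items))
            solutions.append((s, items))
            if s > best_score:
                best, best_score = set(items), s
        it_score, it_items = max(solutions, key=lambda t: t[0])
        pheromone = [p * (1 - rho) for p in pheromone]  # evaporation
        for i in it_items:
            pheromone[i] += it_score                    # reinforcement
    return sorted(best), best_score

# Toy usage: pick the 3 "items" with the highest loadings.
loadings = [0.2, 0.9, 0.1, 0.8, 0.3, 0.7]
chosen, value = aco_select(6, 3, lambda s: sum(loadings[i] for i in s))
```

In the paper, the optimization function would instead combine psychometric criteria such as model fit, reliability, and validity; the sum-of-loadings score here is only a placeholder to keep the sketch self-contained.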


Wortschatztest

Dear participant in the vocabulary test (Wortschatztest),

Thank you for taking part in our study. You can now view your results using your code number.

Please enter your code:


Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage

Reference. Moehring, A., Schroeders, U., Leichtmann, B., & Wilhelm, O. (2016). Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage. Intelligence, 59, 170–180. doi: 10.1016/j.intell.2016.10.003

Abstract. The ability to comprehend new information is closely related to the successful acquisition of new knowledge. With the ubiquitous availability of the Internet, the procurement of information online constitutes a key aspect in education, work, and our leisure time. In order to investigate individual differences in digital literacy, test takers were presented with health-related comprehension problems with task-specific time restrictions. Instead of reading a given text, they were instructed to search the Internet for the information required to answer the questions. We investigated the relationship between this newly developed test and fluid and crystallized intelligence, while controlling for computer usage, in two studies with adults (n1 = 120) and vocational high school students (n2 = 171). Structural equation modeling was used to investigate the amount of unique variance explained by each predictor. In both studies, about 85% of the variance in the digital literacy factor could be explained by reasoning and knowledge while computer usage did not add to the variance explained. In Study 2, prior health-related knowledge was included as a predictor instead of general knowledge. While the influence of fluid intelligence remained significant, prior knowledge strongly influenced digital literacy (β=.81). Together both predictor variables explained digital literacy exhaustively. These findings are in line with the view that knowledge is a major determinant of higher-level cognition. Further implications about the influence of the restrictiveness of the testing environment are discussed.


Do the smart get smarter? Development of fluid and crystallized intelligence in 3rd grade

Reference. Schroeders, U., Schipolowski, S., Zettler, I., Golle, J., & Wilhelm, O. (2016). Do the smart get smarter? Development of fluid and crystallized intelligence in 3rd grade. Intelligence, 59, 84–95. doi: 10.1016/j.intell.2016.08.003

Abstract. There are conflicting theoretical assumptions about the development of general cognitive abilities in childhood: On the one hand, a higher initial level of abilities has been suggested to facilitate ability improvement, for example, prior knowledge fosters the acquisition of new knowledge (Matthew effect). On the other hand, it has been argued that school education with its special focus on promoting less able students results in a compensation effect. A third hypothesis is that the development of cognitive abilities is—as an outcome of the opposing effects—overall independent of the initial state. In this study, 1,102 elementary students in 3rd Grade worked on two versions of the Berlin Test of Fluid and Crystallized Intelligence at two time points with an interval of five months. Besides the question of how initial state and growth are related (Matthew vs. compensation effect), we considered performance gains in fluid intelligence (gf) and crystallized intelligence (gc) as well as cross-lagged effects in a bivariate latent change score model. For both gf and gc there was a strong compensation effect. Mean change was more pronounced in gf than in gc. We considered student characteristics (interest and self-concept), family background (socio-economic status, parental education) and classroom characteristics (teaching styles) in a series of prediction models to explain these changes in gf and gc. Although several predictors were included, only few had a significant contribution. Several methodological and content-related reasons are discussed to account for the unexpectedly negligible effects found for most of the covariates.


The influence of item sampling on sex differences in knowledge tests

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). The influence of item sampling on sex differences in knowledge tests. Intelligence, 58, 22–32. doi: 10.1016/j.intell.2016.06.003

Abstract. Few topics in psychology have generated as much controversy as sex differences in intelligence. For fluid intelligence, researchers emphasize the high overlap between the ability distributions of males and females, whereas research on sex differences in declarative knowledge often uncovers a male advantage. However, on the level of knowledge domains, a more nuanced picture emerged: while females perform better in health-related topics (e.g., aging, medicine), males outperform females in domains of natural sciences (e.g., engineering, physics). In this paper we show that sex differences vary substantially depending on item sampling. Analyses were based on a sample of n = 3,306 German high-school students (Grades 9 and 10) who worked on the 64 declarative knowledge items of the Berlin Test of Fluid and Crystallized Intelligence (BEFKI) assessing knowledge within three broad content domains (science, humanities, social studies). Using two strategies of item sampling — stepwise confirmatory factor analysis and an ant colony optimization algorithm — we deliberately manipulated sex differences in multi-group structural equation models. Results show that sex differences vary considerably depending on the indicators drawn from the item pool. Furthermore, ant colony optimization outperforms the simple stepwise selection strategy since it can optimize several criteria simultaneously (model fit, reliability, and preset sex differences). Taken together, studies reporting sex differences in declarative knowledge fail to acknowledge item sampling issues. More generally, handling item sampling hinges on profound considerations of the content of measures.

Comment. I presented the central findings of this paper at the 50th Conference of the German Society for Psychology in a symposium on Current methods in intelligence assessment: invariance, item sampling and scoring, which I organized together with Philipp Doebler. For everyone who is interested but didn’t make it to the session, here are the slides.
itemsampling_dgps16
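To illustrate what "optimizing several criteria simultaneously" could look like, one might score each candidate item subset with a weighted composite that rewards fit and reliability while penalizing deviation from a preset group difference. The weights, scale, and penalty form below are hypothetical, not the function used in the paper:

```python
def composite_score(fit, reliability, sex_diff, target_diff=0.0,
                    w_fit=1.0, w_rel=1.0, w_diff=1.0):
    """Toy multi-criteria optimization function: higher is better.

    Rewards model fit and reliability, and penalizes the absolute
    deviation of the observed sex difference from a preset target.
    """
    return (w_fit * fit + w_rel * reliability
            - w_diff * abs(sex_diff - target_diff))

# A subset that hits the targeted sex difference scores higher
# than an otherwise identical subset that misses it.
on_target = composite_score(0.95, 0.85, 0.20, target_diff=0.20)
off_target = composite_score(0.95, 0.85, 0.60, target_diff=0.20)
```

Feeding such a composite into a metaheuristic like ACO would let the search trade off all three criteria at once, which is exactly what a single-criterion stepwise selection cannot do.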


Typical intellectual engagement and achievement in math and the sciences in secondary education

Reference. Schroeders, U., Schipolowski, S., & Böhme, K. (2015). Typical intellectual engagement and achievement in math and the sciences in secondary education. Learning and Individual Differences, 43, 31–38. doi: 10.1016/j.lindif.2015.08.030

Abstract. Typical Intellectual Engagement (TIE) is considered a key trait in explaining individual differences in educational achievement in advanced academic or professional settings. Research in secondary education, however, has focused on cognitive and conative factors rather than personality. In the present large-scale study, we investigated the relation between TIE and achievement tests in math and science in Grade 9. A three-dimensional model (reading, contemplation, intellectual curiosity) provided high theoretical plausibility and satisfactory model fit. We quantified the predictive power of TIE with hierarchical regression models. After controlling for gender, migration background, and socioeconomic status, TIE contributed substantially to the explanation of math and science achievement. However, this effect almost disappeared after fluid intelligence and interest were added into the model. Thus, we found only limited support for the significance of TIE on educational achievement, at least for subjects more strongly relying on fluid abilities such as math and science.

Comment. You can also see the slides of a talk I will give on 14 September at the „Fachgruppentagung Pädagogische Psychologie“ in Kassel:
TIE-PAEPS15-2015-09-04