
Equivalence of screen versus print reading comprehension depends on task complexity and proficiency

Reference. Lenhard, W., Schroeders, U., & Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. Discourse Processes, 54(5-6), 427–445. doi: 10.1080/0163853X.2017.1319653

Abstract. As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (n = 2,807), which assesses reading at word, sentence, and text level with separate speeded subtests. Children from grades 1 to 6 completed either a test version on paper or via computer under time constraints. In general, children in the screen condition worked faster but at the expense of accuracy. This difference was more pronounced for younger children and at the word level. Based on our results, we suggest that remedial education and interventions for younger children using computer-based approaches should likewise foster speed and accuracy in a balanced way.

Commitment to Research Transparency and Open Science

I signed the Commitment to Research Transparency and Open Science, which was originally drafted by Felix Schönbrodt, Markus Maier, Moritz Heene, and Michael Zehetleitner at LMU Munich. The first paragraph of this commitment summarizes the overall aim:

„We embrace the values of openness and transparency in science. We believe that such research practices increase the informational value and impact of our research, as the data can be reanalyzed and synthesized in future studies. Furthermore, they increase the credibility of the results, as independent verification of the findings is possible.“

Of course, this has multiple consequences for all parts of my work:

  • For my own empirical research, this includes: 1. providing Open Data, 2. providing reproducible scripts, and 3. adhering to the 21-word solution (Simmons, Nelson, & Simonsohn, 2011, 2012) to prevent false-positive psychology: „We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study“. 4. I try to convince the first author(s) of any co-authored publication to act accordingly.
  • As a reviewer, I will ask the authors to add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes, which is also known as the Standard Reviewer Statement for Disclosure of Sample, Conditions, Measures, and Exclusions. I'm really interested in how this will turn out.
  • Commitments for the supervision of dissertations are substantial: 1. I will teach and emphasize „methods that enhance the informational value and the replicability of studies“, 2. Open Data, Open Materials, and reproducible scripts are given to me as supervisor, 3. potential publications are expected to follow the commitments mentioned above, 4. in the case of confirmatory experimental research, at least one pre-registered study has to be conducted „with a justifiable a priori power analysis (in the frequentist case), or a strong evidence threshold (e.g., if a sequential Bayes factor design is implemented)“ (see the sketch after this list), 5. grading is independent of successful publication or statistical significance.
  • Finally, as a member of any committee or editorial board, I will promote the values of open science.
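To make the a priori power analysis mentioned in point 4 above concrete, here is a minimal sketch for a simple two-group comparison using statsmodels. The effect size, alpha, and power values are placeholder assumptions chosen purely for illustration; they do not come from any of the commitments or papers discussed here.

```python
# A priori power analysis for a two-group comparison (frequentist case).
# The values below (d = 0.5, alpha = .05, power = .80) are placeholder
# assumptions, not figures from the commitment or any study on this page.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # expected standardized mean difference (Cohen's d)
    alpha=0.05,               # Type I error rate
    power=0.80,               # desired power (1 - beta)
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # about 64
```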

tl;dr
I embrace the current methodological revolution in psychology/science and signed the Commitment to Research Transparency and Open Science.

References

  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. doi: 10.1177/0956797611417632
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 word solution. Retrieved from http://ssrn.com/abstract=2160588 or http://dx.doi.org/10.2139/ssrn.2160588
  • The „Commitment to Research Transparency“ logo is licensed by Tobias Kächele, Lena Schiestel and Felix Schönbrodt under a Creative Commons Attribution 4.0 International License.

Meta-heuristics in short scale construction

Reference. Schroeders, U., Wilhelm, O., & Olaru, G. (2016). Meta-heuristics in short scale construction: Ant Colony Optimization and Genetic Algorithm. PLoS ONE, 11, e0167110. doi: 10.1371/journal.pone.0167110

Abstract. The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two meta-heuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA-compiled short versions were highly reliable, but had poor validity. In contrast, both meta-heuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a meta-heuristic with an unspecific out-of-the-box optimization function.

Comment. This is my first Open Access publication, funded by the University of Bamberg. With respect to Open Materials, all syntax used is published in my GitHub repository. Finally, this paper uses data from the National Educational Panel Study (NEPS): Starting Cohort 4-9th Grade, doi: 10.5157/NEPS:SC4:4.0.0, that is, Open Data. Thus, a hat trick: Open Access, Open Materials, and Open Data.
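For readers unfamiliar with meta-heuristics, here is a toy sketch of a Genetic Algorithm for item selection. It is not the published syntax from the GitHub repository: it runs on simulated data with made-up parameters and optimizes only Cronbach's alpha, whereas the paper's optimization functions combine several psychometric criteria.

```python
# Toy Genetic Algorithm for short-scale construction: pick k of n items that
# maximize Cronbach's alpha. All data and parameters are simulated/invented.
import numpy as np

rng = np.random.default_rng(1)

# Simulate item scores from a single latent factor with varying loadings.
n_persons, n_items, k = 500, 20, 8
loadings = rng.uniform(0.3, 0.9, n_items)
theta = rng.normal(size=(n_persons, 1))
scores = theta * loadings + rng.normal(size=(n_persons, n_items)) * np.sqrt(1 - loadings**2)

def cronbach_alpha(items):
    """Cronbach's alpha for the item subset given by index array `items`."""
    x = scores[:, items]
    m = x.shape[1]
    return m / (m - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def crossover(a, b):
    """Pool both parents' items and draw a fixed-length child of k items."""
    return rng.choice(np.union1d(a, b), size=k, replace=False)

def mutate(child, rate=0.1):
    """Swap each selected item for a currently unused one with probability `rate`."""
    child = child.copy()
    for i in range(k):
        if rng.random() < rate:
            child[i] = rng.choice(np.setdiff1d(np.arange(n_items), child))
    return child

# Evolve a population of candidate short scales.
pop = [rng.choice(n_items, size=k, replace=False) for _ in range(50)]
for generation in range(100):
    pop.sort(key=cronbach_alpha, reverse=True)
    elite = pop[:10]                     # truncation selection: keep the best
    offspring = []
    while len(offspring) < 40:
        i, j = rng.choice(10, size=2, replace=False)
        offspring.append(mutate(crossover(elite[i], elite[j])))
    pop = elite + offspring

best = max(pop, key=cronbach_alpha)
print("Selected items:", sorted(best.tolist()), "alpha:", round(cronbach_alpha(best), 3))
```

The GA tends to pick the items with the highest loadings here because alpha is the only criterion; the appeal of the tailored ACO function discussed in the paper is precisely that richer criteria (dimensionality, validity, sensitivity) can be traded off in one fitness function.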

Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage

Reference. Moehring, A., Schroeders, U., Leichtmann, B., & Wilhelm, O. (2016). Ecological momentary assessment of digital literacy: Influence of fluid and crystallized intelligence, domain-specific knowledge, and computer usage. Intelligence, 59, 170–180. doi: 10.1016/j.intell.2016.10.003

Abstract. The ability to comprehend new information is closely related to the successful acquisition of new knowledge. With the ubiquitous availability of the Internet, the procurement of information online constitutes a key aspect in education, work, and our leisure time. In order to investigate individual differences in digital literacy, test takers were presented with health-related comprehension problems with task-specific time restrictions. Instead of reading a given text, they were instructed to search the Internet for the information required to answer the questions. We investigated the relationship between this newly developed test and fluid and crystallized intelligence, while controlling for computer usage, in two studies with adults (n1 = 120) and vocational high school students (n2 = 171). Structural equation modeling was used to investigate the amount of unique variance explained by each predictor. In both studies, about 85% of the variance in the digital literacy factor could be explained by reasoning and knowledge, while computer usage did not add to the variance explained. In Study 2, prior health-related knowledge was included as a predictor instead of general knowledge. While the influence of fluid intelligence remained significant, prior knowledge strongly influenced digital literacy (β = .81). Together, both predictor variables explained digital literacy exhaustively. These findings are in line with the view that knowledge is a major determinant of higher-level cognition. Further implications about the influence of the restrictiveness of the testing environment are discussed.
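To unpack what „unique variance explained by each predictor“ means in the abstract, here is a minimal manifest-variable sketch: plain regression rather than the latent-variable SEM used in the paper, computing each predictor's increment in R² over the other two. All weights and correlations below are invented for illustration only.

```python
# Simplified manifest-variable analogue of the unique-variance question:
# how much R^2 does each predictor add over the others (Delta R^2)?
# Data are simulated; the paper itself models latent variables via SEM.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 300

# Correlated predictors: reasoning (gf), knowledge (gc), computer usage.
gf = rng.normal(size=n)
gc = 0.5 * gf + rng.normal(size=n) * np.sqrt(0.75)        # var(gc) = 1
usage = 0.3 * gc + rng.normal(size=n) * np.sqrt(0.91)     # var(usage) = 1

# Criterion: digital literacy driven mainly by gf and gc (made-up weights).
literacy = 0.4 * gf + 0.6 * gc + rng.normal(size=n) * 0.5

X = np.column_stack([gf, gc, usage])

def r2(cols):
    """R^2 of a regression of literacy on the selected predictor columns."""
    model = LinearRegression().fit(X[:, cols], literacy)
    return r2_score(literacy, model.predict(X[:, cols]))

full = r2([0, 1, 2])
for name, col in [("gf", 0), ("gc", 1), ("usage", 2)]:
    rest = [c for c in [0, 1, 2] if c != col]
    print(f"unique R^2 of {name}: {full - r2(rest):.3f}")
print(f"total R^2: {full:.3f}")
```

With these invented parameters, usage adds essentially nothing beyond gf and gc, mirroring the pattern reported in the abstract, while the latent-variable SEM in the paper additionally corrects the estimates for measurement error.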