<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Measurement Invariance</title>
    <link>https://ulrich-schroeders.de/tags/measurement-invariance/</link>
    <description>Recent content in Measurement Invariance</description>
    <generator>Hugo</generator>
    <language>en</language>
    <copyright>Ulrich Schroeders — All rights reserved.</copyright>
    <lastBuildDate>Wed, 05 Nov 2025 17:32:00 +0200</lastBuildDate>
    <atom:link href="https://ulrich-schroeders.de/tags/measurement-invariance/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Method-Toolbox</title>
      <link>https://ulrich-schroeders.de/fixed/method-toolbox/</link>
      <pubDate>Wed, 05 Nov 2025 17:32:00 +0200</pubDate>
      <guid>https://ulrich-schroeders.de/fixed/method-toolbox/</guid>
      <description>&lt;h2 id=&#34;sample-size-estimation-in-item-response-theory&#34;&gt;Sample size estimation in Item Response Theory&lt;/h2&gt;&#xA;&#xD;&#xA;    &lt;img class=&#34;article-image&#34; width=&#34;300px&#34; height=&#34;300px&#34; src=&#34;https://ulrich-schroeders.de/img/staircase_yellow.jpg&#34; alt=&#34;&#34; style=&#34;float: right; margin: 7px 5px 10px 15px; border-radius: 5px&#34;&gt;&#xD;&#xA;    &#xD;&#xA;&#xD;&#xA;&#xA;&lt;p&gt;Although Item Response Theory (IRT) models offer well-established psychometric advantages over traditional scoring methods, they have been largely confined to specific areas of psychology, such as educational assessment and personnel selection, while their broader potential remains underutilized in practice. One reason for this is the challenge of meeting the (presumed) larger sample size requirements, especially in complex measurement designs. Accurate a priori sample size estimation is necessary for obtaining precise estimates of item and person parameters, effect sizes, and model fit. As such, it is an essential tool for effective study planning, especially for pre-registrations and registered reports.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Science self-concept – More than the sum of its parts?</title>
      <link>https://ulrich-schroeders.de/2020/03/science-self-concepts/</link>
      <pubDate>Sat, 21 Mar 2020 22:26:02 +0000</pubDate>
      <guid>https://ulrich-schroeders.de/2020/03/science-self-concepts/</guid>
      <description>&lt;p&gt;The article &amp;ldquo;Science Self-Concept – More Than the Sum of its Parts?&amp;rdquo; has now been published in &amp;ldquo;The Journal of Experimental Education&amp;rdquo; (btw in existence since 1932). The first 50 &lt;a href=&#34;https://www.tandfonline.com/eprint/TUMRQUZW6WNBCKNGPU5H/full?target=10.1080/00220973.2020.1740967&#34;&gt;copies are free&lt;/a&gt;, in case you are interested.&lt;/p&gt;&#xA;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;My first preprint. 😀Is a general science self-concept equivalent to an aggregated subject-specific science concept? It&amp;#39;s about different modeling approaches, measurement invariance and concepts of equivalence. Check it out! Comment if you like: &lt;a href=&#34;https://t.co/3STwiTV0Up&#34;&gt;https://t.co/3STwiTV0Up&lt;/a&gt; &lt;a href=&#34;https://t.co/SfbYxuHfse&#34;&gt;pic.twitter.com/SfbYxuHfse&lt;/a&gt;&lt;/p&gt;&amp;mdash; Ulrich Schroeders (@Navajoc0d3) &lt;a href=&#34;https://twitter.com/Navajoc0d3/status/1192091948657631232?ref_src=twsrc%5Etfw&#34;&gt;November 6, 2019&lt;/a&gt;&lt;/blockquote&gt;&#xA;&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&#xA;&#xA;&#xA;&lt;p&gt;In comparison to the preprint version, some substantial changes have been made to the final version of the manuscript, especially in the research questions and in the presentation of the results. Due to word restrictions, we also removed a section from the discussion in which we summarized the differences and commonalities of bifactor vs. higher-order models. We also speculated about why the choice of model may depend on the study&amp;rsquo;s subject, that is, on conceptual differences between intelligence and self-concept research. The argument may be a bit wonky, but I find the idea persuasive enough to reproduce it in the following. If you have any comments, please feel free to drop me a line.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Testing for equivalence of test data across media</title>
      <link>https://ulrich-schroeders.de/2019/04/equivalence/</link>
      <pubDate>Sun, 28 Apr 2019 19:26:02 +0000</pubDate>
      <guid>https://ulrich-schroeders.de/2019/04/equivalence/</guid>
      <description>&lt;p&gt;&#xD;&#xA;    &lt;img class=&#34;article-image&#34; src=&#34;https://ulrich-schroeders.de/img/road.jpg&#34; alt=&#34;&#34; style=&#34;border-radius: 5px&#34;&gt;&#xD;&#xA;    &#xD;&#xA;&#xD;&#xA;&#xA;In 2009, I wrote a small chapter that was part of an EU conference book on the transition to computer-based assessment. Now and then I come back to this piece of work in my teaching and my publications (e.g., the &lt;a href=&#34;http://ulrich-schroeders.de/2010/10/smartphone-testing/&#34;&gt;EJPA paper&lt;/a&gt; on testing reasoning ability across different devices). Now I want to make it publicly available; hopefully, it will be interesting to some of you. What follows is the (unaltered) preprint version of the book chapter, so if you want to cite it, please use the following citation:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Pitfalls in measurement invariance testing</title>
      <link>https://ulrich-schroeders.de/2019/01/df-mgcfa/</link>
      <pubDate>Fri, 04 Jan 2019 09:26:02 +0000</pubDate>
      <guid>https://ulrich-schroeders.de/2019/01/df-mgcfa/</guid>
      <description>&lt;img class=&#34;article-image&#34; src=&#34;https://ulrich-schroeders.de/img/building.jpg&#34; alt=&#34;&#34; style=&#34;border-radius: 5px&#34;&gt;&#xD;&#xA;    &#xD;&#xD;&#xA;&#xA;&lt;p&gt;In a &lt;a href=&#34;https://doi.org/10.1027/1015-5759/a000500&#34;&gt;new paper&lt;/a&gt; in the &lt;em&gt;European Journal of Psychological Assessment&lt;/em&gt;, Timo Gnambs and I examined the soundness of reporting measurement invariance (MI) testing in the context of multigroup confirmatory factor analysis (MGCFA). Of course, there are several good primers on MI testing (e.g., &lt;a href=&#34;https://doi.org/10.1207/S15328007SEM0902_5&#34;&gt;Cheung &amp;amp; Rensvold, 2002&lt;/a&gt;; &lt;a href=&#34;https://doi.org/10.1111/j.1745-3992.2010.00182.x&#34;&gt;Wicherts &amp;amp; Dolan, 2010&lt;/a&gt;) and textbooks that elaborate on the theoretical basis (e.g., &lt;a href=&#34;https://www.amazon.com/Statistical-Approaches-Measurement-Invariance-Millsap/dp/1848728190&#34;&gt;Millsap, 2011&lt;/a&gt;), but a clearly written tutorial with example syntax showing how to implement MI testing in practice was still missing. In the first part of the paper, we demonstrate that a soberingly large proportion of reported degrees of freedom &lt;em&gt;(df)&lt;/em&gt; do not match the &lt;em&gt;df&lt;/em&gt; recalculated from the information given in the articles. More specifically, we both reviewed 128 studies comprising 302 MGCFA measurement invariance testing procedures from six leading peer-reviewed journals that regularly publish psychological assessment research. Overall, about a quarter of all articles included at least one discrepancy, with some systematic differences between the journals. Interestingly, the metric and scalar steps of invariance testing were more frequently affected.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Longitudinal measurement invariance testing with categorical data</title>
      <link>https://ulrich-schroeders.de/2018/09/long-MI-categorical/</link>
      <pubDate>Sun, 30 Sep 2018 12:26:02 +0000</pubDate>
      <guid>https://ulrich-schroeders.de/2018/09/long-MI-categorical/</guid>
      <description>&lt;p&gt;&#xD;&#xA;    &lt;img class=&#34;article-image&#34; src=&#34;https://ulrich-schroeders.de/img/staircase.jpg&#34; alt=&#34;&#34; style=&#34;border-radius: 5px&#34;&gt;&#xD;&#xA;    &#xD;&#xD;&#xA;&#xA;In a recent paper – &lt;a href=&#34;https://doi.org/10.1177/0165025416687412&#34;&gt;Edossa, Schroeders, Weinert, &amp;amp; Artelt, 2018&lt;/a&gt; – we came across the issue of longitudinal measurement invariance testing with categorical data. There are good primers and textbooks on longitudinal measurement invariance testing with continuous data (e.g., &lt;a href=&#34;https://www.amazon.de/Data-Analysis-Mplus-Methodology-Sciences-ebook/dp/B00FP531ME&#34;&gt;Geiser, 2013&lt;/a&gt;). However, at the time of writing the manuscript, there was no published application of longitudinal measurement invariance testing with categorical data. In case you are interested in using such an invariance testing procedure, we uploaded the &lt;a href=&#34;https://github.com/ulrich-schroeders/syntax-publications/blob/master/2018_IJBD_long_MI.r&#34;&gt;&lt;strong&gt;R syntax&lt;/strong&gt;&lt;/a&gt; for all measurement invariance steps.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Equivalence of screen versus print reading comprehension depends on task complexity and proficiency</title>
      <link>https://ulrich-schroeders.de/2017/08/pc-vs-pp-reading/</link>
      <pubDate>Sat, 12 Aug 2017 07:01:14 +0000</pubDate>
      <guid>https://ulrich-schroeders.de/2017/08/pc-vs-pp-reading/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Reference.&lt;/strong&gt; Lenhard, W., Schroeders, U., &amp;amp; Lenhard, A. (2017). Equivalence of screen versus print reading comprehension depends on task complexity and proficiency. &lt;em&gt;Discourse Processes, 54(5-6)&lt;/em&gt;, 427–445. doi: &lt;a href=&#34;https://doi.org/10.1080/0163853X.2017.1319653&#34;&gt;https://doi.org/10.1080/0163853X.2017.1319653&lt;/a&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Abstract.&lt;/strong&gt; As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the German reading comprehension test ELFE II (&lt;em&gt;n&lt;/em&gt; = 2,807), which assesses reading at word, sentence, and text level with separate speeded subtests. Children from grades 1 to 6 completed either a test version on paper or via computer under time constraints. In general, children in the screen condition worked faster but at the expense of accuracy. This difference was more pronounced for younger children and at the word level. Based on our results, we suggest that remedial education and interventions for younger children using computer-based approaches should likewise foster speed and accuracy in a balanced way.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
