Meta-analysis of proctored vs. unproctored assessment

Our meta-analysis – Steger, Schroeders, & Gnambs (2018) – comparing test scores of proctored vs. unproctored assessment is now available as an online-first publication and will appear in the European Journal of Psychological Assessment. In more detail, we examined mean score differences and correlations between both assessment contexts with a three-level random-effects meta-analysis based on 49 studies with 109 effect sizes. We think this is a timely topic: web-based assessments are frequently compromised by a lack of control over the participants’ test-taking behavior, yet researchers nevertheless need to compare data obtained under unproctored test conditions with data from controlled settings. The inevitable question is to what extent such a comparison is feasible.
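As a quick orientation for readers unfamiliar with the method: a three-level model separates the known sampling error of each effect size (level 1) from heterogeneity within studies (level 2, since a single study can contribute several effect sizes) and heterogeneity between studies (level 3). Here is a minimal sketch in generic multilevel notation – our shorthand summary, not the exact specification used in the paper:

```latex
% Three-level random-effects meta-analysis, generic notation (requires amsmath)
% y_{ij}: i-th effect size reported in study j
\begin{align*}
  y_{ij} &= \mu + u_j + w_{ij} + e_{ij} \\
  u_j    &\sim \mathcal{N}(0,\, \tau^2_{\text{between}}) \\ % level 3: between-study heterogeneity
  w_{ij} &\sim \mathcal{N}(0,\, \tau^2_{\text{within}})  \\ % level 2: within-study heterogeneity
  e_{ij} &\sim \mathcal{N}(0,\, v_{ij})                     % level 1: sampling variance, treated as known
\end{align*}
```

Moderator analyses – such as the analysis of countermeasures discussed below – extend this model by adding study characteristics as covariates to the fixed part, μ.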

To prevent participants from cheating, test administrators have invented a ragbag of countermeasures, such as honesty contracts or follow-up verification tests, which also made it into general testing guidelines (International Test Commission, 2006). For example, before the actual assessment takes place, test-takers have to sign honesty contracts that threaten to punish cheating, which is meant to raise participants’ commitment and conscientiousness. So, does this strategy pan out? Unfortunately, our moderator analyses showed no significant effect of implementing countermeasures against cheating at all. Despite the vast body of research that advocates such countermeasures, we found no empirical evidence for their effectiveness. It is disheartening that they are still treated as a cure for cheating in prominent testing guidelines.

However, this post is less about the findings of the meta-analysis than a request to contribute your own research. Although we did our best to cover the complete range of studies – for example, by sending a request to various mailing lists (DGPs, AIR, GIR-L), ResearchGate, and the Psych Meth Discussion Group on Facebook – we think there still might be some studies out there that fell through the cracks. And there are some interesting new developments in interactive meta-analysis, such as MetaLab, that we want to try out. Thus, we reached out to our fellow researchers on ResearchGate, kindly asking for their assistance. And I want to reiterate the call on my website:

Dear colleagues,
It may seem like we can—now that the paper is done and published—file away this project and move on to new adventures. But truth be told, we’re still curious! So, in case you have a study (published, forthcoming, or in your file drawer) that fits our inclusion criteria, please let us know!
Here are our two main inclusion criteria again:
The study…
(a) reported a comparison of test scores obtained in a (remotely) proctored setting (e.g., lab, supervised classroom, or test center) versus an unproctored setting (e.g., web-based, unsupervised assessment at home),
and
(b) administered cognitive ability measures.
If you think your study meets these criteria, give us a hint. We are looking forward to hearing from you!
We’ll keep you posted on any new developments and insights!
Diana Steger, Ulrich Schroeders, & Timo Gnambs

You can contact any author you like.

In case of doubt, please write to Diana. 🙂


References

International Test Commission. (2006). International guidelines on computer-based and Internet-delivered testing. International Journal of Testing, 6(2), 143–171.

Steger, D., Schroeders, U., & Gnambs, T. (2018). A meta-analysis of test scores in proctored and unproctored ability assessments. European Journal of Psychological Assessment. Advance online publication.