The (non)Impact of Misfitting Items in Computerized Adaptive Testing

Authors

  • Christine E. DeMars, James Madison University

Abstract

To assess the potential impact of misfitting items, simulated examinees were administered computerized adaptive tests containing varying percentages of misfitting items. Item fit was manipulated to be poor near what would otherwise be the point of maximum information. With 30% misfitting items, ability estimates tended to have more positive bias in the middle ability range, and more negative bias in the high and low ability ranges, than they did with 0% or 10% misfitting items. However, the magnitude of this effect was small. For most abilities and test lengths, the empirical standard error did not vary with the percentage of misfitting items. The standard error estimated from the information function tended to underestimate the empirical standard error for the shortest test length and to overestimate it for the longer test lengths, regardless of the percentage of misfitting items. Overall, the misfit had little practical impact.
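For context, the "standard error estimated from the information function" referenced above is conventionally obtained from the test information evaluated at the ability estimate. The following is a minimal sketch assuming a 2PL logistic model with discrimination a_j and difficulty b_j; the article's operational model and scaling constant are not restated in this abstract, so treat the specific form as illustrative.

```latex
% Sketch only: 2PL item response function and item information (assumption).
\[
  P_j(\theta) = \frac{1}{1 + e^{-a_j(\theta - b_j)}}, \qquad
  I_j(\theta) = a_j^{2}\, P_j(\theta)\,\bigl(1 - P_j(\theta)\bigr)
\]
% Test information sums over the n administered items, and the
% information-based standard error of the ability estimate is its
% inverse square root.
\[
  I(\theta) = \sum_{j=1}^{n} I_j(\theta), \qquad
  \mathrm{SE}\bigl(\hat{\theta}\bigr) \approx \frac{1}{\sqrt{I\bigl(\hat{\theta}\bigr)}}
\]
```

In an adaptive test, items are selected to maximize I_j near the current ability estimate, which is why manipulating fit near the point of maximum information targets the items most likely to be administered.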

Author Biography

  • Christine E. DeMars, James Madison University

    Christine E. DeMars is a professor in the Department of Graduate Psychology at James Madison University and a senior assessment specialist in the Center for Assessment and Research Studies. She teaches courses in item response theory, classical test theory, and generalizability theory, and supervises Ph.D. students. Her research interests include applied and theoretical topics in item response theory, differential item functioning, test-taking motivation, and other issues in operational testing.

Published

2022-11-02