The (non)Impact of Misfitting Items in Computerized Adaptive Testing
Abstract
To assess the potential impact of item misfit, simulated examinees received varying percentages of misfitting items. The fit was manipulated to be poor near what would otherwise be the point of maximum information. With 30% misfitting items, ability estimates tended to have more positive bias in the middle ability range, and more negative bias in the high and low ability ranges, than they did with 0% or 10% misfitting items. However, the magnitude of this effect was small. For most abilities and test lengths, the empirical standard error did not vary with the percentage of misfitting items. The standard error estimated from the information function tended to underestimate the empirical standard error for the shortest test length and overestimate it for the longer test lengths, regardless of the percentage of misfitting items. Overall, the misfit had little practical impact.
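The comparison at the heart of the abstract can be illustrated with a minimal sketch. This is not the authors' simulation; it assumes a simple 2PL item response model with hypothetical item parameters, estimates ability by grid-search maximum likelihood, and contrasts the empirical standard error (the spread of estimates across replications) with the model-based standard error derived from the test information function.

```python
import math
import random

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information contributed by one 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def mle_theta(responses, items, lo=-4.0, hi=4.0, steps=200):
    """Grid-search maximum-likelihood estimate of ability."""
    best_t, best_ll = lo, float("-inf")
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        ll = 0.0
        for u, (a, b) in zip(responses, items):
            p = p_2pl(t, a, b)
            ll += math.log(p) if u else math.log(1.0 - p)
        if ll > best_ll:
            best_ll, best_t = ll, t
    return best_t

random.seed(0)
# Hypothetical 30-item fixed test: difficulties spread around zero.
items = [(1.0, b) for b in (-1.5, -0.5, 0.0, 0.5, 1.5)] * 6
true_theta = 0.0

# Replicate the test many times for one examinee of known ability.
estimates = []
for _ in range(500):
    resp = [random.random() < p_2pl(true_theta, a, b) for a, b in items]
    estimates.append(mle_theta(resp, items))

mean_est = sum(estimates) / len(estimates)
# Empirical SE: standard deviation of the ability estimates.
emp_se = (sum((e - mean_est) ** 2 for e in estimates)
          / (len(estimates) - 1)) ** 0.5
# Information-based SE: 1 / sqrt(test information at the estimate).
info_se = 1.0 / math.sqrt(sum(item_info(mean_est, a, b) for a, b in items))

print(f"empirical SE = {emp_se:.3f}, information-based SE = {info_se:.3f}")
```

When the model fits, the two standard errors agree closely; the study's finding is that even deliberately misfitting items perturb this agreement only slightly.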
License
Authors who publish in JCAT agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication.
- Articles may be copied and reproduced for academic and research purposes at no cost.
- All other reproduction requires permission of the author(s).
- If the authors cannot be contacted, permission can be requested from IACAT.
- Authors may enter into separate contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., publish it in a book), with an acknowledgement of its initial publication in JCAT.