Termination Criteria in Computerized Adaptive Tests: Do Variable-Length CATs Provide Efficient and Effective Measurement?
DOI: https://doi.org/10.7333/jcat.v1i1.16

Abstract
This simulation study examined a number of computerized adaptive testing (CAT) termination rules within the item response theory framework. Results showed that longer CATs yielded more accurate trait estimation, but with diminishing returns at very large numbers of items. Standard error termination performed well both in administering few items and in accurately estimating the trait when the standard error threshold was sufficiently low, but it was sensitive to the item bank's information structure. Termination based on change in estimated theta performed comparably to standard error termination while being less sensitive to the bank's information structure. Fixed-length CATs performed comparably to, or slightly worse than, their variable-length counterparts; previous findings that variable-length CATs are biased were the result of artifacts, which are discussed. Recommendations for CAT termination are provided.
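The termination rules compared in the abstract can be illustrated with a minimal variable-length CAT loop. This is a hedged sketch, not the authors' simulation code: it assumes a hypothetical 2PL item bank, maximum-information item selection, EAP scoring over a standard-normal prior, and a standard error stopping rule with a fixed-length ceiling. All names and parameter values are illustrative.

```python
import math
import random

random.seed(1)

# Hypothetical 2PL item bank: (discrimination a, difficulty b) pairs
BANK = [(random.uniform(0.8, 2.0), random.uniform(-3, 3)) for _ in range(200)]

GRID = [g / 10 for g in range(-40, 41)]  # quadrature points for EAP

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of one 2PL item at theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def eap(responses):
    """EAP theta estimate and posterior SD under a N(0,1) prior."""
    post = []
    for t in GRID:
        w = math.exp(-0.5 * t * t)  # unnormalized standard-normal prior
        for (a, b), u in responses:
            p = p_correct(t, a, b)
            w *= p if u else (1.0 - p)
        post.append(w)
    z = sum(post)
    mean = sum(t * w for t, w in zip(GRID, post)) / z
    var = sum((t - mean) ** 2 * w for t, w in zip(GRID, post)) / z
    return mean, math.sqrt(var)

def run_cat(true_theta, se_stop=0.30, max_items=50):
    """Variable-length CAT: stop when SE(theta) < se_stop or max_items is hit."""
    available = list(range(len(BANK)))
    responses, theta, se = [], 0.0, float("inf")
    while available and len(responses) < max_items and se >= se_stop:
        # maximum-information item selection at the current theta estimate
        i = max(available, key=lambda j: item_info(theta, *BANK[j]))
        available.remove(i)
        u = random.random() < p_correct(true_theta, *BANK[i])
        responses.append((BANK[i], u))
        theta, se = eap(responses)
    return theta, se, len(responses)

theta_hat, se, n = run_cat(true_theta=1.0)
print(f"theta_hat={theta_hat:.2f}, SE={se:.2f}, items={n}")
```

Lowering `se_stop` trades a longer test for a more precise estimate, which is the efficiency/accuracy trade-off the study evaluates; replacing the SE check with a threshold on the change in successive theta estimates yields the change-in-theta rule.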
License
Authors who publish in JCAT agree to the following terms:
- Authors retain copyright and grant the journal right of first publication.
- Articles may be copied and reproduced for academic and research purposes at no cost.
- All other reproduction requires permission of the author(s).
- If the authors cannot be contacted, permission can be requested from IACAT.
- Authors may enter into separate contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., publish it in a book), with an acknowledgement of its initial publication in JCAT.