Estimating Measurement Precision in Reduced-Length Multistage-Adaptive Testing

Authors

  • Katrina M. Crotts, University of Massachusetts Amherst
  • April L. Zenisky, University of Massachusetts Amherst
  • Stephen G. Sireci, University of Massachusetts Amherst
  • Xueming Li, University of Massachusetts Amherst

DOI:

https://doi.org/10.7333/jcat.v1i0.19

Abstract

This study evaluated the extent to which reducing the number of items in a multi-stage adaptive test (MST) affected measurement precision. Using the Massachusetts Adult Proficiency Test for Reading (MAPT), a low-stakes MST used in adult education, reliability, decision consistency, and decision accuracy estimates were compared for the original 40-item tests and reduced-length 35-item tests. Four approaches were used: (1) applying the Spearman-Brown formula, (2) eliminating one item of average discrimination from consecutive stages, (3) completely reassembling new panels, and (4) simulating item responses to the original and shortened MSTs and comparing the standard errors of measurement for simulated examinees. Overall, the results suggested that comparable levels of measurement precision, improved content representation, and reduced testing time were achievable with the reduced-length tests. The Spearman-Brown estimates were surprisingly close to the estimates based on assembling new panels. Methods for assembling an MST to maintain measurement precision, and practical lessons that could generalize to MSTs in other contexts, are discussed.
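
As a rough illustration of the logic behind the first and fourth approaches (a sketch, not code from the study), the Spearman-Brown prophecy formula predicts the reliability of a 35-item form from that of a 40-item form, and in an IRT framework the standard error of measurement at a given ability level is the reciprocal of the square root of the test information. The reliability value and 2PL item parameters below are hypothetical, not MAPT estimates.

```python
import math


def spearman_brown(rho_old: float, n_old: int, n_new: int) -> float:
    """Predicted reliability after changing test length from n_old to n_new items,
    using the Spearman-Brown prophecy formula with length factor k = n_new / n_old."""
    k = n_new / n_old
    return (k * rho_old) / (1 + (k - 1) * rho_old)


def sem_2pl(theta: float, a: list, b: list) -> float:
    """IRT-based standard error of measurement at ability theta for 2PL items with
    discriminations a and difficulties b: 1 / sqrt(test information at theta)."""
    info = 0.0
    for a_i, b_i in zip(a, b):
        p = 1.0 / (1.0 + math.exp(-a_i * (theta - b_i)))  # 2PL response probability
        info += a_i ** 2 * p * (1.0 - p)                  # item information
    return 1.0 / math.sqrt(info)


# Hypothetical values for illustration only (not MAPT estimates):
print(round(spearman_brown(0.90, n_old=40, n_new=35), 3))   # ~0.887
print(round(sem_2pl(0.0, a=[1.0] * 35, b=[0.0] * 35), 3))   # SEM at theta = 0 for a 35-item form
```

In this hypothetical case, shortening a test with reliability .90 from 40 to 35 items lowers the predicted reliability only slightly (to about .89), which illustrates the kind of comparison the abstract describes.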

Keywords: multi-stage adaptive testing, reliability, decision consistency, decision accuracy, response time, test development, validity.  

 

Author Biographies

  • Katrina M. Crotts, University of Massachusetts Amherst
    Katrina Crotts is a Doctoral Candidate at the University of Massachusetts Amherst in the Psychometric Methods concentration. Some of Katrina's research interests include validity theory, applications of item response theory, computer-based testing, and test accommodations and fairness issues for English language learners (ELLs) and students with disabilities.
  • April L. Zenisky, University of Massachusetts Amherst
    April L. Zenisky is Senior Research Fellow and Director of Computer-Based Testing Initiatives for the Center for Educational Assessment at the University of Massachusetts Amherst. Her research has been presented at national and international conferences, and her work has been published in multiple measurement journals and in several books and encyclopedias related to measurement and educational research, including Cross-Cultural Research Methods (2011), Elements of Adaptive Testing (2010), and the Handbook of Test Development (2006). She currently serves as the Associate Editor of the International Journal of Testing. Her research interests include computerized test designs, score reporting, and innovative item types for computer-based testing.
  • Stephen G. Sireci, University of Massachusetts Amherst
    Stephen G. Sireci is Professor of Educational Policy, Research, and Administration; Chair of the Psychometrics Program; and Director of the Center for Educational Assessment at the University of Massachusetts Amherst. He is also the Co-Editor of the International Journal of Testing.
  • Xueming Li, University of Massachusetts Amherst
    Xueming Li is a Doctoral Candidate at the University of Massachusetts Amherst in the Psychometric Methods concentration. Some of Xueming's research interests include test validity, equating, computer-based testing, scale development, and international assessment.

Published

2013-09-11

Section

Articles