A Literature Review of Through-Course Summative Assessment Models: The Case for an Adaptive Through-Year Assessment

Authors

  • Garron Gianopulos, NWEA

Abstract

This review describes approaches to and expected benefits of through-course summative assessment (TCSA), examines the challenges associated with TCSA models, and concludes with a case for why a through-year computerized adaptive test (TY-CAT) would address many of those challenges. The central feature of TCSA models is that they combine scores from tests administered at different points in the school year (U.S. Department of Education, 2010). The expected benefits of TCSAs are numerous: finer-grained feedback due to the larger cumulative number of items (Preston & Moore, 2010); more time to administer and score performance tasks, which is expected to increase the content validity of summative scores (Bennett et al., 2011); greater coherence between curriculum and assessment (Wilson & Sloane, 2000); more timely feedback (Wise, 2011); and potentially reduced measurement error (Wise, 2011). The question of whether a set of assessments administered throughout the school year could be combined to replace a single end-of-year summative test used for accountability has been considered before. The Partnership for Assessment of Readiness for College and Careers (PARCC) initially proposed a through-course summative design (Jerald et al., 2011). Although the proposed design generated considerable interest, it posed technical challenges, and PARCC ultimately adopted a more traditional summative assessment. This literature review evaluates TCSA designs to determine whether alternative designs, especially computerized adaptive testing (CAT) designs, might overcome some of those technical challenges. Three blueprint designs are discussed: distributed, cumulative, and repeated comprehensive. The advantages and limitations of each blueprint and its associated score aggregation methods are considered, and both technical challenges and possible solutions are reviewed. The paper concludes by considering how an interim-summative hybrid CAT addresses many of the technical challenges of TCSAs.

Author Biography

  • Garron Gianopulos, NWEA

    Garron Gianopulos is a learning and assessment engineer at NWEA. Dr. Gianopulos has a broad interest in the practical application of item response theory (IRT) to the development of formative, interim, and summative assessments, and his recent research has focused on the use of explanatory IRT, structural equation modeling, and data visualization techniques to validate theories of learning. Prior to joining NWEA in 2018, he was a psychometrician at North Carolina State University, the North Carolina Department of Public Instruction (NCDPI), Professional Testing Inc., and the University of South Florida. While at NCDPI, Dr. Gianopulos led the development of end-of-year and end-of-course summative assessments in mathematics and interim assessments in multiple subjects. As a psychometrician at North Carolina State, he supported the development of formative diagnostic mathematics assessments centered on learning trajectories. Dr. Gianopulos also served on the North Carolina TAC until his transition to NWEA. He holds a doctorate in curriculum and instruction with an emphasis in educational measurement and evaluation and a cognate in psychometrics from the University of South Florida.

Published

2025-03-04