Journal of Computerized Adaptive Testing

About the Journal

JCAT is a peer-reviewed electronic journal designed to advance the science and practice of computerized adaptive testing (CAT). JCAT publishes two types of manuscripts:

  1. Empirical research reports, theoretical papers, and integrative critical reviews on topics directly related to CAT (e.g., item selection algorithms, security algorithms, multistage designs, examinee reactions to CAT, DIF in CAT, item bank development, the psychometrics of CAT) and on important ancillary topics (e.g., innovative item types, assessment engineering, psychometric models, issues surrounding the technology of adaptive testing, validity studies).
  2. Applications and implementations of CAT. These articles include descriptions of specific decisions made for a particular purpose, required by the nature of the adaptive test being developed, including (but not limited to) the nature of the testing population, the type of decisions being made with the information from the test, the size of the available item bank, the changing nature of item styles, approaches to field testing, and complex item selection procedures.

JCAT is the official journal of the International Association for Computerized Adaptive Testing.

To submit a manuscript, select the green "Information -- For Authors" link on the bottom right, and follow the instructions.

To subscribe to JCAT, select the "Information -- For Readers" link on the bottom right.  Subscriptions are free.

To access articles published in the current year, select CURRENT on the top line of any page.  To access articles from previous years, select ARCHIVES.



Editor

Dr. Duanli Yan, Director of Data Analysis and Computational Research, Educational Testing Service, U.S.A.

Consulting Editors

  • John Barnard, EPEC, Australia
  • Kirk A. Becker, Pearson VUE, United States
  • Hua-hua Chang, University of Illinois Urbana-Champaign, United States
  • Theo Eggen, Cito and University of Twente, Netherlands
  • Andreas Frey, Friedrich Schiller University Jena, Germany
  • Kyung T. Han, Graduate Management Admission Council, United States
  • G. Gage Kingsbury, Psychometric Consultant, United States
  • Alan D. Mead, Talent Algorithms Inc., United States
  • Mark D. Reckase, Michigan State University, United States
  • Daniel O. Segall, PMC, United States
  • Wim van der Linden
  • Bernard P. Veldkamp, University of Twente, Netherlands
  • Alina von Davier, Duolingo, United States
  • Steven L. Wise, Northwest Evaluation Association, United States


New Article in the Journal of Computerized Adaptive Testing (JCAT)


How Do Trait Change Patterns Affect the Performance of Adaptive Measurement of Change?
Ming Him Tai, Allison W. Cooperman, Joseph N. DeWeese, and David J. Weiss

Partial Abstract

Adaptive measurement of change (AMC) is a psychometric procedure to detect intra-individual change in trait levels across multiple testing occasions. However, in studying how AMC performs as a function of change, most previous studies did not specify change patterns systematically. Inspired by Cronbach and Gleser (1953), a quantitative framework was proposed that systematically decomposes a change pattern into three components: magnitude, scatter, and shape. Shape was further decomposed into direction and order. Using Monte Carlo simulations, a series of analyses of variance were performed to investigate how each of these components affected the false positive rates (FPRs) and true positive rates (TPRs) for detecting true change, and a change recovery index (CRI) ...

You can view the complete abstract and download the article at


Current Issue

Vol. 10 No. 3 (2023): How Do Trait Change Patterns Affect the Performance of Adaptive Measurement of Change?
Published: 2023-07-18


View All Issues