Diagnostic classification models (DCMs) have grown in popularity over the past decade. However, their adoption in applied settings, especially operational assessment programs, has been slow. One potential barrier to adoption is the technical evidence recommended for all assessments in the Standards for Educational and Psychological Testing. Many of the methods widely used to provide evidence for these recommendations carry implicit or explicit assumptions of a continuous unidimensional scale, such as those found in classical test theory and item response theory. In this paper, we describe how the use of a DCM affects both the type of technical evidence that should be provided for an assessment system and the methods for providing that evidence. An applied example from an operational assessment program that uses a DCM for reporting demonstrates how technical evidence can be provided for DCM-based assessments. We conclude with recommendations for other programs seeking to adopt a diagnostic assessment.
This presentation is part of a coordinated session, Diagnostic Assessments: Moving from Theory to Practice.
In recent years, there has been a call for assessments to provide increasingly detailed and actionable scores, while simultaneously decreasing overall testing time. This demand poses a considerable challenge for the educational assessment community, but one that is answerable through the use of diagnostic assessments and diagnostic classification models (DCMs). Despite these potential benefits, DCMs have not been widely adopted in operational settings. This session ties together four papers that describe, in practical terms, how to design, implement, and support the use of DCM-based diagnostic assessments for operational use. The first presentation illustrates how assessments and items can be designed to elicit fine-grained diagnostic information about students, rather than assessing a single latent trait. The second presentation discusses the decision-making process involved in DCM model building, model selection, and practical model fit considerations. The third presentation illustrates how the scores from a diagnostic assessment can be reported in a meaningful way to support actionable next steps. The fourth presentation describes how traditional psychometric methods can be revised to provide the technical documentation required of any operational assessment. The session ends with commentary from a national expert on diagnostic models and their use in applied settings.