Learning progressions and learning map structures are increasingly being used as the basis for the design of large-scale assessments. Of critical importance to these designs is the validity of the map structure used to build the assessments. Most commonly, support for the validity of a map structure comes from procedural evidence gathered during the learning map creation process (e.g., research literature, external reviews). However, it is also important to support the validity of the map structure with empirical evidence using data gathered from the assessment. In this paper, we propose a framework for the empirical validation of learning maps and progressions using diagnostic classification models. Three methods are proposed within this framework, each involving different model assumptions and supporting different types of inferences. The framework is then applied to the Dynamic Learning Maps® (DLM®) alternate assessment system to illustrate the utility and limitations of each method. Results show that each of the proposed methods has some limitations but is able to provide complementary information for evaluating the proposed structure of content standards (Essential Elements) in the DLM assessment.
This presentation is part of a coordinated session, Beyond Learning Progressions: Maps as Assessment Architecture.
Learning progressions (LPs) are commonly used in educational assessments to identify interim steps on a pathway toward a grade-level target. LPs describe typical expected pathways but may not represent the multiple pathways by which students develop knowledge in a domain. Another type of cognitive model, the learning map, is better suited to describing heterogeneous pathways that support learning for all students, including those with the most significant cognitive disabilities. This session ties together four presentations on different facets of a project involving the creation and use of maps as cognitive learning models to support the design of large-scale assessments. The first presentation illustrates how an assessment’s theory of action and validity argument are grounded in the maps as models of the content domains. The second presentation describes the map creation process, including intentional design decisions and the application of universal design for learning principles. The third presentation describes the iterative design process and how stakeholder evaluations were used to review the maps for content and accessibility. The fourth presentation describes empirical methods for map validation. The session ends with a discussion of lessons learned and future directions, along with commentary from a national expert in cognitive learning models and large-scale assessment.