Learning progressions and learning map structures are increasingly used as the basis for the design of large-scale assessments. Of critical importance to these designs is the validity of the map structure used to build the assessments. Most commonly, evidence for the validity of a map structure comes from procedural evidence gathered during the map creation process (e.g., reviews of the research literature and external expert reviews). However, it is also important to support the validity of the map structure with empirical evidence, using data gathered from the assessment itself. In this paper, we propose a framework for the empirical validation of learning maps and progressions using diagnostic classification models. Within this framework, we propose three methods that differ in the strength of their model assumptions and the types of inferences they support. We then apply the framework to the Dynamic Learning Maps® (DLM®) alternate assessment system to illustrate the utility and limitations of each method. Results show that although each of the proposed methods has limitations, together they provide complementary information for evaluating the proposed structure of content standards (Essential Elements) in the DLM assessment.