The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Track override frequency as diagnostic data—when multiple children override the same parent property, restructure the hierarchy rather than accumulating exceptions.
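One way to collect this diagnostic is to count child overrides per parent property and flag the parents that accumulate exceptions. A minimal Python sketch; the function name, data shape, and threshold are illustrative assumptions, not part of the curriculum:

```python
from collections import Counter

def find_restructure_candidates(overrides, threshold=3):
    """Flag parent properties overridden by many children.

    `overrides` is an iterable of (parent, property) pairs, one entry
    per child override observed. When a pair reaches `threshold`,
    the hierarchy around that parent is a restructuring candidate.
    """
    counts = Counter(overrides)
    return {pair: n for pair, n in counts.items() if n >= threshold}

# Three children overriding Animal.locomotion signals a bad split;
# a single Vehicle.fuel override is just a local exception.
overrides = [
    ("Animal", "locomotion"), ("Animal", "locomotion"),
    ("Animal", "locomotion"), ("Vehicle", "fuel"),
]
print(find_restructure_candidates(overrides))
# {('Animal', 'locomotion'): 3}
```

Running the audit periodically keeps the signal fresh without requiring a proactive system-wide review.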
Restructure hierarchies at the point of friction rather than proactively across the entire system to keep costs small and signals fresh.
Flatten hierarchies before deepening them—add nesting levels only when flat organization with rich cross-references proves insufficient.
When cross-cutting concerns dominate your hierarchy, reorganize along a different axis rather than forcing items into categories they only partially fit.
Design your primary hierarchy to optimize for the access pattern you use most frequently, then use tags, links, or AI search to surface alternative organizations on demand.
Generate alternative hierarchical organizations of the same data periodically to identify blind spots in your default structure.
Recognize that choosing a primary hierarchy is a priority decision that reveals what you are optimizing for by making certain retrievals easy and others difficult.
Design every information artifact with explicit compression layers where each level provides sufficient context to decide whether to drill deeper, following the pattern: executive summary (30 seconds), section abstracts (5 minutes), full detail (on demand).
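The three compression layers can be represented directly as a data structure, so each layer is addressable on its own. A sketch under assumed names; the class and field layout are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class LayeredDoc:
    """Progressive-disclosure artifact: each layer gives the reader
    enough context to decide whether to drill deeper."""
    summary: str                                   # ~30-second read
    abstracts: dict = field(default_factory=dict)  # section -> ~5-minute abstract
    detail: dict = field(default_factory=dict)     # section -> full text, on demand

    def read(self, depth="summary", section=None):
        """Return the content at the requested compression level."""
        if depth == "summary":
            return self.summary
        if depth == "abstract":
            return self.abstracts[section]
        return self.detail[section]
```

A reader starts at `read()` and only pays for depth they explicitly request.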
When presenting complex information to diverse audiences, create multiple entry points at different abstraction levels rather than a single linear document, allowing each reader to start at their appropriate depth.
Contain information within a document when the reader must understand it to grasp the parent context; reference information when it serves multiple contexts, changes independently, or would bloat the parent beyond usability.
When building retrieval systems, limit each hierarchical level to 5-8 choices and total depth to 3-4 levels, as exceeding these bounds degrades both retrieval performance and cognitive processing regardless of content quality.
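These bounds are mechanically checkable. The sketch below walks a hierarchy represented as nested dicts and reports violations; the representation and default bounds are illustrative assumptions taken from the guideline above:

```python
def audit_hierarchy(tree, max_branching=8, max_depth=4, depth=1):
    """Report nodes in a nested-dict hierarchy that exceed the
    branching (choices per level) or depth bounds."""
    issues = []
    if depth > max_depth:
        issues.append(f"nesting at depth {depth} exceeds max depth {max_depth}")
        return issues
    for name, children in tree.items():
        if isinstance(children, dict) and children:
            if len(children) > max_branching:
                issues.append(
                    f"{name}: {len(children)} choices exceeds {max_branching}"
                )
            issues.extend(
                audit_hierarchy(children, max_branching, max_depth, depth + 1)
            )
    return issues
```

An empty result means the structure stays within both bounds; anything reported marks where retrieval and cognition begin to degrade.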
When multiple valid hierarchies exist for the same data, choose the one that surfaces what you need to act on most frequently at the top level, not what seems most 'logical' or academically correct.
Articulate schemas as falsifiable claims with specified conditions, measurements, and thresholds before exposing them to evidence.
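The condition/measurement/threshold triple can be made explicit as a record, so a schema cannot reach evidence without stating what would refute it. A minimal sketch; the class name, fields, and example claim are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FalsifiableClaim:
    """A schema stated so that evidence can refute it."""
    condition: str     # when the claim applies
    measurement: str   # what is observed
    threshold: float   # the value that decides
    direction: str     # "above" or "below": which side falsifies

    def falsified_by(self, observed: float) -> bool:
        """True when the observation refutes the claim."""
        if self.direction == "above":
            return observed > self.threshold
        return observed < self.threshold

claim = FalsifiableClaim(
    condition="after four weeks of morning workouts",
    measurement="average weekly sessions completed",
    threshold=3.0,
    direction="below",  # fewer than 3 sessions/week falsifies the schema
)
print(claim.falsified_by(2.0))
# True
```

Writing the threshold and direction down before seeing data is the point: afterward, there is no room to reinterpret the claim.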
Seek disconfirming evidence first when testing schemas, because the cognitive default is confirmation-seeking and only disconfirming evidence can distinguish valid from invalid models.
Restate personal identity schemas as behavioral predictions with specified conditions to convert unfalsifiable beliefs into testable hypotheses.
When auxiliary hypotheses are adjusted after prediction failure, require the adjustment to generate new testable predictions rather than merely explain away the original failure.
Design experiments where you specify the falsification criteria and record predictions before running the test, because memory reconstruction will otherwise align past beliefs with known outcomes.
Record predictions with specified confidence levels before outcomes are known, then track calibration across predictions to identify domains of systematic over- or under-confidence.
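A calibration log for this practice can be very small: record (confidence, outcome) pairs per domain, then compare stated confidence with the observed hit rate. A sketch with illustrative names; the domain labels and gap interpretation are assumptions:

```python
from collections import defaultdict

class CalibrationLog:
    """Track stated confidence against observed accuracy per domain."""

    def __init__(self):
        self.records = defaultdict(list)

    def record(self, domain, confidence, correct):
        """Log one resolved prediction: confidence in [0, 1], correct as bool."""
        self.records[domain].append((confidence, correct))

    def report(self):
        """Per-domain mean stated confidence vs. actual hit rate.
        A positive gap indicates systematic overconfidence."""
        out = {}
        for domain, rows in self.records.items():
            stated = sum(c for c, _ in rows) / len(rows)
            actual = sum(ok for _, ok in rows) / len(rows)
            out[domain] = {"stated": stated, "actual": actual,
                           "gap": stated - actual}
        return out
```

For example, predictions logged at 0.9 confidence that resolve correctly only half the time show a gap of 0.4, marking that domain as one of systematic overconfidence.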
Diagnose which component of your schema failed by examining the pattern of prediction errors rather than treating each failure in isolation.
Require schemas to make specific, falsifiable predictions about future observations rather than merely explaining past events post-hoc.
Treat prediction errors as diagnostic information about schema structure rather than evidence about personal capability, to maintain the psychological conditions that allow learning from failure.
Document the schema that generated a failed prediction along with the prediction itself, so you can identify which specific assumptions or variables were wrong rather than just noting the error.
When adjusting schemas after prediction failures, distinguish between calibration adjustments (schema correct in structure, wrong in parameters) and structural revisions (missing variables or wrong causal model).
Design validation tests to target the boundaries where your schema is most likely to break rather than the center where it is most likely to hold.