The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Before deploying a new schema, verify it correctly handles every case the old schema handled successfully—a schema update that drops coverage of previous successes is a regression, not an upgrade.
Inventory specific situations where your old schema produced accurate predictions, then test whether your new schema also handles those cases before replacing the old model.
Archive old schemas with documentation of what they handled well rather than deleting them, because they encode information about what works under specific conditions that new schemas should incorporate.
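The regression-testing discipline in the three points above can be sketched as a minimal check. This is an illustrative sketch, not a prescribed implementation: the schemas are stand-in functions mapping a situation to a prediction, and `regression_test`, `old`, `new`, and `history` are hypothetical names.

```python
# Sketch of a schema regression check: before adopting a new predictive
# model, verify it still handles every case the old one got right.
# Schemas here are plain functions: inputs -> prediction.

def regression_test(old_schema, new_schema, cases):
    """Return cases the old schema predicted correctly but the new one misses."""
    regressions = []
    for inputs, observed in cases:
        if old_schema(inputs) == observed and new_schema(inputs) != observed:
            regressions.append((inputs, observed))
    return regressions

# Toy inventory of past situations: old rule "rain iff overcast",
# candidate rule "rain iff humid".
old = lambda x: x["overcast"]
new = lambda x: x["humid"]
history = [
    ({"overcast": True, "humid": True}, True),
    ({"overcast": True, "humid": False}, True),   # new schema misses this
    ({"overcast": False, "humid": False}, False),
]
missed = regression_test(old, new, history)
# A non-empty `missed` list means the update drops coverage of a previous
# success: a regression, not an upgrade. Archive the old schema's record
# of what it handled well instead of deleting it.
```

If `missed` is non-empty, the new schema should incorporate what the old one encoded before replacing it.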
Interpret the physical discomfort of encountering evidence that contradicts a deeply held schema as the cost of cognitive growth rather than as proof the evidence is flawed.
Separate the documentation of disconfirming evidence from your emotional response to it by externalizing each in parallel columns, creating objects you can examine rather than a fused defensive reaction.
Set explicit tolerance windows when encountering schema-threatening evidence—start with sixty seconds of sitting with discomfort before rushing to resolution—because premature resolution favors existing schemas over new evidence.
Expect a second wave of discomfort after initial schema update when you realize the full implications for past decisions—this delayed recognition is where most people abandon updates and revert to old schemas.
Define observable trigger conditions for schema review before anomalies accumulate rather than waiting for subjective feelings of uncertainty to prompt reconsideration.
Track prediction accuracy against pre-defined thresholds rather than relying on intuitive assessment of model quality, because humans systematically underweight evidence that contradicts their existing beliefs.
Set calendar-based schema review schedules independent of observed performance, because temporal drift occurs whether or not conscious errors are detected.
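The two trigger types above, a pre-defined accuracy floor and a calendar cadence independent of performance, can be combined in one small monitor. The 0.7 floor, the 10-observation minimum, and the 90-day cadence are illustrative assumptions, not values from the text, and `SchemaMonitor` is a hypothetical name.

```python
# Sketch of threshold- and calendar-based schema review triggers.
from datetime import date, timedelta

class SchemaMonitor:
    def __init__(self, accuracy_floor=0.7, review_every=timedelta(days=90)):
        self.accuracy_floor = accuracy_floor
        self.review_every = review_every
        self.hits = 0
        self.total = 0
        self.last_review = date.today()

    def record(self, predicted, observed):
        """Log one prediction against its observed outcome."""
        self.total += 1
        self.hits += predicted == observed

    def review_due(self, today=None):
        today = today or date.today()
        # Trigger on a pre-defined accuracy floor, not on intuitive
        # assessment, which underweights disconfirming evidence.
        below_floor = self.total >= 10 and self.hits / self.total < self.accuracy_floor
        # Also trigger on the calendar regardless of observed performance,
        # because temporal drift occurs whether or not errors are noticed.
        overdue = today - self.last_review >= self.review_every
        return below_floor or overdue
```

The point of the `or` is that either condition alone forces a review: good recent accuracy does not cancel the calendar trigger, and a fresh review date does not excuse a breached accuracy floor.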
Accumulate and threshold anomalies rather than evaluate each contradiction individually, because isolated anomalies can be dismissed while clusters reveal systematic model failure.
Create anomaly logs separate from regular notes, because anomalies require dedicated collection systems to prevent dismissal through rationalization or burial in routine information.
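The accumulate-and-threshold idea in the two points above can be sketched as a dedicated anomaly log, kept apart from routine notes, that flags a schema only when contradictions cluster. The threshold of three anomalies and the `AnomalyLog` name are illustrative assumptions.

```python
# Sketch of a dedicated anomaly log: isolated surprises can be dismissed
# one by one, but a cluster crossing the threshold reveals systematic
# model failure and forces a review.
from collections import defaultdict

class AnomalyLog:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.entries = defaultdict(list)  # schema name -> logged observations

    def log(self, schema, observation):
        """Record a contradiction in its own collection, not in regular notes."""
        self.entries[schema].append(observation)

    def schemas_needing_review(self):
        """Schemas whose accumulated anomalies have crossed the threshold."""
        return [s for s, obs in self.entries.items() if len(obs) >= self.threshold]
```

Because entries are evaluated as accumulated counts rather than one at a time, no single anomaly has to win an argument against the schema on its own.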
Assign schemas in complex or chaotic domains faster review cadences than their pace-layer position alone would suggest, because unpredictability generates invalidating evidence more frequently.
Review high-dependency schemas more slowly and deliberately than peripheral schemas, because revising foundational schemas requires updating all dependent schemas in a cascading revision process.
Change shared schemas through shared experiences rather than shared information presentations, because behavioral coordination requires updated local models built from practice, not stated agreement.
Create boundary objects that embody new schemas in visible artifacts, because people adopt schemas they can see and use rather than schemas they were told about.
Map what a shared schema currently supports before proposing changes, because schemas embedded in coordination patterns create dependencies that must be managed during transitions.
Distinguish between increasing complexity that improves explanatory power and complexity that merely preserves a failing model, because the latter signals the need for architectural replacement rather than parameter tuning.
Replace schemas when the framework cannot formulate questions that reality demands, because inability to express necessary questions reveals incommensurability between model and domain.
Generate multiple alternative frameworks rather than jumping to a single replacement, because competitive testing between candidate models produces better selection than loyalty contests.
Write forward-looking predictions from each candidate schema to create testable competition, because prospective prediction testing reveals model quality better than retrospective fit evaluation.
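The competitive, forward-looking testing described in the two points above can be sketched as follows: each candidate schema commits to predictions before outcomes arrive, and selection goes to the best prospective scorer rather than the best retrospective story. `score_candidates` and the toy candidates are hypothetical names for illustration.

```python
# Sketch of prospective prediction competition between candidate schemas.
# Each candidate is a function inputs -> prediction; predictions are
# committed before the outcome is scored, so no candidate can fit itself
# to the result after the fact.

def score_candidates(candidates, trials):
    """candidates: {name: predict_fn}; trials: [(inputs, later_outcome)].
    Returns (winning name, per-candidate hit counts)."""
    scores = {name: 0 for name in candidates}
    for inputs, outcome in trials:
        # Commit every candidate's prediction before revealing the outcome.
        committed = {name: fn(inputs) for name, fn in candidates.items()}
        for name, prediction in committed.items():
            scores[name] += prediction == outcome
    return max(scores, key=scores.get), scores
```

Running several candidates through the same trials turns model selection into a testable competition instead of a loyalty contest with the incumbent.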
Delayed schema updates impose compounding costs because the gap between your model and a changing reality widens incrementally over time, making each decision progressively less optimal.
When expertise becomes locked into a familiar schema, attentional resources get captured by schema-consistent features, making genuinely better alternatives perceptually invisible even when they are objectively present.
Apply the zero-base test to detect schema rigidity: if you would not adopt the same belief as a newcomer encountering the situation fresh today, the schema persists through inertia rather than current validity.