The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When a schema triggers defensiveness at the suggestion of testing it, treat that emotional response as a diagnostic signal of high psychological investment requiring especially rigorous validation.
Before acting on a schema in any consequential way, test it through concrete action at the smallest possible scale first and observe actual results against pre-stated predictions.
When an edge case breaks your schema, extract the implicit boundary condition that the edge case revealed rather than dismissing the edge case as an irrelevant exception.
Before explaining your schema to another person, frame your request as 'tell me where this breaks' rather than 'do you agree' to shift the conversation from validation theater to genuine testing.
Validate each atomic component of a compound schema independently before trusting the complete structure, because compound failures provide no diagnostic information about which component broke.
After accumulating multiple validation records, analyze them as a set to identify recurring patterns in how your schemas fail—same blind spot, same overconfidence direction, same boundary condition—that no single test reveals.
Reformulate validated schemas with explicit boundary clauses that specify the conditions under which they were tested and the conditions under which they remain untested.
Schedule schema reviews at cadences matched to environmental volatility: weekly/biweekly for high-change contexts, monthly/quarterly for stable contexts, plus triggered reviews when surprises occur.
Treat surprising outcomes as automatic triggers for schema review rather than waiting for scheduled validation cycles, as surprise signals that at least one schema in your stack has drifted from reality.
When validating schemas about personal capability or performance, include external observer ratings alongside self-assessment to detect systematic overconfidence blind spots that introspection cannot reveal.
Run a pre-mortem on each critical schema by specifying what the early warning signs would look like if that model is becoming obsolete, then check whether you have already seen some of those signs.
Before observing any prediction outcomes, set a numeric failure threshold for each schema (X failures out of Y recent predictions), then trigger review when that threshold is crossed.
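The ratio-threshold rule can be sketched as a small tracker over a sliding window of outcomes. The class name and the specific numbers (3 failures out of the last 10) are illustrative assumptions, not part of the curriculum:

```python
from collections import deque

class FailureThreshold:
    """Track the last `window` prediction outcomes for one schema and
    flag review when failures reach a pre-committed count."""

    def __init__(self, max_failures: int, window: int):
        # The threshold is fixed *before* outcomes are observed.
        self.max_failures = max_failures
        self.outcomes = deque(maxlen=window)  # True = prediction held

    def record(self, prediction_held: bool) -> None:
        self.outcomes.append(prediction_held)

    def review_triggered(self) -> bool:
        failures = sum(1 for ok in self.outcomes if not ok)
        return failures >= self.max_failures

# Example: trigger review after 3 failures in the last 10 predictions.
tracker = FailureThreshold(max_failures=3, window=10)
for held in [True, False, True, False, True, False]:
    tracker.record(held)
```

The deque's `maxlen` keeps only the most recent Y outcomes, so old failures age out of the ratio automatically.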
Define environmental change triggers by listing key assumptions underlying each schema and specifying observable indicators that each assumption no longer holds.
Accumulate anomalies (observations that don't fit the schema) in a running list and trigger full schema review when the count reaches a pre-defined threshold, rather than treating each anomaly as requiring immediate action.
Create a dedicated anomaly log separate from regular notes where each entry records what you expected, what happened, and which schema generated the expectation.
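The anomaly log's three-field entries and the count-then-review trigger can be sketched as a minimal in-memory structure. The class names and the default threshold of five anomalies are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Anomaly:
    expected: str   # what you expected to happen
    observed: str   # what actually happened
    schema: str     # which schema generated the expectation

@dataclass
class AnomalyLog:
    threshold: int = 5  # illustrative trigger count, not a prescription
    entries: List[Anomaly] = field(default_factory=list)

    def record(self, expected: str, observed: str, schema: str) -> None:
        self.entries.append(Anomaly(expected, observed, schema))

    def schemas_due_for_review(self) -> List[str]:
        # Trigger a full review only for schemas whose accumulated
        # anomaly count has reached the pre-defined threshold.
        counts: Dict[str, int] = {}
        for a in self.entries:
            counts[a.schema] = counts.get(a.schema, 0) + 1
        return [s for s, n in counts.items() if n >= self.threshold]
```

Recording an anomaly is cheap and requires no immediate action; only `schemas_due_for_review` demands attention, and only once the count crosses the line.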
Assign review cadences to schemas based on their pace layer—weekly to monthly for fashion/commerce layers in complex domains, quarterly for infrastructure layer, annually for governance layer, and only on anomaly for culture/nature layers with high dependency depth.
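The pace-layer cadences above can be expressed as a simple lookup, with anomalies overriding the schedule. The exact day counts are illustrative defaults, not fixed rules:

```python
from typing import Optional

# Days between scheduled reviews, keyed by pace layer.
# None means no calendar cadence: review only when an anomaly appears.
REVIEW_CADENCE_DAYS = {
    "fashion": 7,            # weekly in complex domains
    "commerce": 30,          # up to monthly
    "infrastructure": 90,    # quarterly
    "governance": 365,       # annually
    "culture": None,         # anomaly-driven only
    "nature": None,          # anomaly-driven only
}

def next_review_in_days(layer: str, anomaly_seen: bool = False) -> Optional[int]:
    """Days until the next review; 0 if an anomaly forces one now;
    None if this layer has no scheduled cadence."""
    if anomaly_seen:
        return 0
    return REVIEW_CADENCE_DAYS[layer]
```

Note that an anomaly returns 0 regardless of layer, matching the rule that surprise overrides the calendar.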
In complex or chaotic Cynefin domains, increase schema review frequency beyond what the pace layer suggests because unpredictability generates more frequent anomalies requiring evaluation.
Schedule schema reviews on actual calendars at the assigned cadence rather than relying on subjective feelings of uncertainty to prompt reconsideration.
Treat any schema that has gone six months without deliberate review like a software dependency that has not been updated for six months: not necessarily broken, but requiring verification before continued reliance.
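The six-month staleness rule reduces to a date comparison. The 183-day cutoff is an assumed approximation of "six months":

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=183)  # roughly six months (assumption)

def needs_verification(last_reviewed: date, today: date) -> bool:
    """A schema unreviewed for ~six months is flagged for verification,
    not assumed broken, like a long-unupdated dependency."""
    return today - last_reviewed > STALE_AFTER
```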
When a schema cannot specify any observation that would falsify it, classify it as a belief system rather than a testable model and flag it for replacement or constraint.
In your schema inventory, require behavioral proof by identifying three decisions from the last month that each schema governed—if you cannot find three, reclassify the schema as aspirational rather than operational.
For each important schema, map both its prerequisites (what it depends on) and its dependents (what depends on it), then flag schemas appearing most frequently as dependencies for regular review.
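Counting how often each schema appears as a prerequisite identifies the load-bearing ones. The schema names in this sketch are hypothetical examples:

```python
from collections import Counter
from typing import Dict, List

# Hypothetical schema graph: each schema -> the schemas it depends on.
prerequisites: Dict[str, List[str]] = {
    "quarterly-planning": ["market-is-stable", "team-velocity"],
    "hiring-bar": ["team-velocity"],
    "roadmap": ["market-is-stable", "quarterly-planning"],
}

def most_depended_on(prereqs: Dict[str, List[str]], top: int = 2) -> List[str]:
    """Count how often each schema appears as a prerequisite;
    the heaviest dependencies deserve the most frequent review."""
    counts = Counter(dep for deps in prereqs.values() for dep in deps)
    return [schema for schema, _ in counts.most_common(top)]
```

Inverting the same dict gives the dependents view: for any schema, which decisions break if it turns out to be wrong.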
For each schema you operate on, document source provenance in a single field—specific person, book, cultural norm, direct experience, or unknown—then prioritize verification effort by source weakness.
When AI assistants suggest frameworks or schemas, respond by asking for original research sources, boundary conditions, and strongest counterarguments rather than accepting or rejecting the claim directly.