The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When practicing observation at any difficulty level, stay at that level until the trigger becomes boring rather than advancing on a fixed schedule, because boredom signals that automatic judgment has been replaced by automatic observation at that complexity level.
When conducting week-long observation audits of high-stakes domains, structure daily entries with physically separated sections for 'what I observed' and 'what my mind wanted to conclude,' keeping both sections visible simultaneously to train recognition of the observation-evaluation gap.
When a behavior, reaction, or outcome recurs three or more times across a 30-day period, classify it as a pattern candidate requiring structural analysis rather than treating it as coincidence.
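The recurrence threshold above (three or more occurrences inside a 30-day window) can be sketched as a small check. This is a minimal illustration; the function name and the example dates are hypothetical, not part of the curriculum.

```python
from datetime import date, timedelta

def is_pattern_candidate(occurrence_dates, window_days=30, threshold=3):
    """Return True if at least `threshold` occurrences fall within any
    `window_days`-day span -- the rule above: 3+ recurrences in 30 days."""
    dates = sorted(occurrence_dates)
    for i, start in enumerate(dates):
        window_end = start + timedelta(days=window_days)
        if sum(1 for d in dates[i:] if d <= window_end) >= threshold:
            return True
    return False

hits = [date(2024, 3, 1), date(2024, 3, 10), date(2024, 3, 22)]
print(is_pattern_candidate(hits))  # three occurrences inside 30 days -> True
```

Sliding the window from each occurrence (rather than using calendar months) ensures a cluster straddling a month boundary still qualifies as a pattern candidate.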
Name behavioral patterns using 2-4 word descriptors that compress the full trigger-response sequence into a recognizable label, enabling real-time pattern recognition under cognitive load.
For each named pattern in your Pattern Dictionary, document three required fields: the pattern name, its observable trigger conditions, and the default behavioral response it produces.
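A Pattern Dictionary entry with the three required fields can be represented as a simple record; the validation also enforces the 2-4 word naming rule from the previous atom. The class name and example content are illustrative assumptions, not prescribed by the curriculum.

```python
from dataclasses import dataclass

@dataclass
class PatternEntry:
    """One Pattern Dictionary record with the three required fields."""
    name: str      # 2-4 word label compressing the trigger-response sequence
    trigger: str   # observable trigger conditions
    response: str  # default behavioral response the pattern produces

    def __post_init__(self):
        # Enforce the 2-4 word naming rule for real-time recognizability.
        if not 2 <= len(self.name.split()) <= 4:
            raise ValueError("pattern name should compress to 2-4 words")

entry = PatternEntry(
    name="deadline panic spiral",
    trigger="task due within 24h and scope still ambiguous",
    response="switch to low-value busywork instead of clarifying scope",
)
print(entry.name)  # deadline panic spiral
```

Keeping the three fields mandatory (no optional fields, no free-form notes) preserves the "atom" quality: every entry stays comparable across the dictionary.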
When identifying meta-patterns, require each second-order claim to ground in at least three documented first-order pattern instances to distinguish genuine meta-patterns from intellectual speculation.
Prioritize second-order interventions (changing how patterns form) over first-order fixes (changing individual patterns) when three or more first-order patterns share formation or dissolution characteristics.
When reviewing notes for patterns, read through all entries without editing or organizing first, then extract recurring themes on a separate page, to prevent premature categorization from filtering out emergent structures.
Before claiming to understand a complex topic, generate at least three concrete examples from different domains that instantiate the concept, and if you cannot produce varied examples, treat this as evidence that you have acquired vocabulary without understanding.
After accumulating 15-20 judgments in the same domain, analyze whether errors cluster directionally (bias requiring correction factor) or scatter randomly (noise requiring aggregation).
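The bias-versus-noise distinction above amounts to asking whether signed errors share a direction or cancel out. A minimal sketch, assuming errors are recorded as (estimate - actual); the 2.0 cutoff is an illustrative t-like threshold, not a calibrated statistical test.

```python
from statistics import mean, stdev

def classify_errors(errors, min_n=15):
    """Classify signed judgment errors from one domain as directional
    bias (apply a correction factor) or random noise (aggregate instead)."""
    if len(errors) < min_n:
        return "insufficient data"
    m, s = mean(errors), stdev(errors)
    if s == 0:
        return "bias" if m != 0 else "noise"
    standard_error = s / len(errors) ** 0.5
    # Mean error large relative to its standard error -> errors cluster
    # directionally; otherwise they scatter around zero.
    return "bias" if abs(m) / standard_error > 2.0 else "noise"

overestimates = [3, 4, 2, 5, 3, 4, 6, 2, 3, 4, 5, 3, 4, 2, 3]
print(classify_errors(overestimates))  # bias
```

The `min_n` floor mirrors the 15-20 judgment accumulation rule: with fewer observations, bias and noise are statistically indistinguishable.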
Conduct a two-week bias journal recording significant judgments with confidence levels, then categorize errors by direction and type to build your personal bias profile.
For each identified bias in your profile, write a specific pre-correction question or procedure to execute before acting on judgments in that domain.
After four weeks of belief tracking, examine whether beliefs barely moved despite evidence (conservatism) or swung dramatically on single data points (base rate neglect) to identify domain-specific updating patterns.
Before interpreting any piece of information—a message, a metric, a statement, a data point—run a five-question context scan: What environment am I in? What role am I occupying? What just happened that might color my perception? What are the goals (mine and others')? What assumptions am I importing from a different context?
Set the threshold for decision context documentation at any choice where you deliberated between options for more than sixty seconds, because if you considered alternatives consciously, the reasoning is worth preserving against memory reconstruction.
When recall of studied material fails, mentally reinstate the original encoding context—room, time of day, task being done, emotional state—before concluding the information wasn't learned or needs re-studying.
When a reasoning chain contains no surprises or pauses during construction—no moments where the next link was weaker than expected—you have transcribed conclusions rather than constructed reasoning and should restart with genuine step-by-step building.
Test AI integration by verifying whether interactions increase your independent understanding—if you cannot reconstruct the reasoning without the AI, the tool is replacing cognition rather than extending it.
Before attempting to design better schemas, inventory your current operating schemas by writing what you actually do (not what you should do) across professional, relational, and self-concept domains.
Perform schema inspection through a five-step audit: (1) list actual operating rules in a domain, (2) source each rule's origin, (3) identify one success and one failure case per rule, (4) rate confidence in each rule, (5) compare confidence to evidence quality.
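The five-step audit above can be captured as one row per rule, which makes the final step (comparing confidence to evidence quality) mechanical. The record structure, scales, and 0.2 margin are assumptions for illustration, not specified by the curriculum.

```python
from dataclasses import dataclass

@dataclass
class SchemaAuditRow:
    """One operating rule run through the five-step schema audit."""
    rule: str                # step 1: the rule as actually practiced
    origin: str              # step 2: where the rule came from
    success_case: str        # step 3a: one case where it worked
    failure_case: str        # step 3b: one case where it failed
    confidence: float        # step 4: self-rated confidence, 0.0-1.0
    evidence_quality: float  # step 5 input: evidence strength, 0.0-1.0

    def overconfident(self, margin=0.2):
        """Step 5: flag rules whose confidence outruns their evidence."""
        return self.confidence - self.evidence_quality > margin

row = SchemaAuditRow(
    rule="always answer email within an hour",
    origin="first manager's expectations, 2015",
    success_case="fast reply unblocked a stalled deal",
    failure_case="deep work fragmented across an entire week",
    confidence=0.9,
    evidence_quality=0.4,
)
print(row.overconfident())  # True: 0.9 - 0.4 exceeds the 0.2 margin
```

Requiring both a success and a failure case per rule keeps the audit honest: a rule with no recallable failure case usually signals an untested schema, not a perfect one.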
When using AI to assist schema inspection, first externalize your thinking in writing, then request assumption extraction, pattern detection across entries, or adversarial questioning—because AI can only inspect articulated schemas, not internal ones.
After accumulating multiple validation records, analyze them as a set to identify recurring patterns in how your schemas fail—same blind spot, same overconfidence direction, same boundary condition—that no single test reveals.
Schedule schema reviews at cadences matched to environmental volatility: weekly/biweekly for high-change contexts, monthly/quarterly for stable contexts, plus triggered reviews when surprises occur.
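The volatility-matched cadence can be sketched as a lookup plus a surprise override (anticipating the next atom). The interval values chosen here are one reasonable reading of "weekly/biweekly" and "monthly/quarterly," not canonical numbers.

```python
from datetime import date, timedelta

# Illustrative cadences matched to environmental volatility.
REVIEW_INTERVALS = {
    "high": timedelta(days=14),    # weekly/biweekly for high-change contexts
    "stable": timedelta(days=90),  # monthly/quarterly for stable contexts
}

def next_review(last_review, volatility, surprise_date=None):
    """Next scheduled schema review; a surprising outcome triggers an
    immediate review regardless of where the cycle stands."""
    if surprise_date is not None:
        return surprise_date
    return last_review + REVIEW_INTERVALS[volatility]

print(next_review(date(2024, 6, 1), "stable"))  # 2024-08-30
print(next_review(date(2024, 6, 1), "high", surprise_date=date(2024, 6, 3)))  # 2024-06-03
```

Treating surprise as an override rather than a separate track keeps one review calendar: the surprise review simply resets the cycle.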
Treat surprising outcomes as automatic triggers for schema review rather than waiting for scheduled validation cycles, as surprise signals that at least one schema in your stack has drifted from reality.