The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Treat explanations of your schemas to others as active validation tests rather than passive information transfer, using objections as diagnostic data about hidden assumptions.
When validating schemas, seek interlocutors with experiential bases and cognitive frameworks different from your own, rather than those who share your priors.
Design action-based schema tests with falsifiable predictions, defined actions, observation protocols, and pre-specified revision triggers to distinguish genuine testing from confirmation theater.
Decompose complex schemas into independently testable atomic claims and validate each atom before trusting the compound structure.
Validate schemas at progressively larger scales, holding constant what has already been tested while introducing one new variable or context at a time.
Make small, affordable tests of schema components where failure produces information rather than catastrophe, treating each test as an option to continue, revise, or abandon.
Actively search for conditions under which your schema would fail and test those conditions specifically, rather than testing only where success is likely.
Define in advance what evidence would falsify your schema and commit to that standard before collecting data, preventing post-hoc rationalization of ambiguous results.
Structure your knowledge system to preserve and surface contradictions, counterarguments, and disconfirming evidence rather than curating only supporting material.
Attack your highest-confidence schemas most aggressively, because confidence indicates attachment that blinds you to flaws.
Shift validation questions from future-conditional ('what could go wrong?') to past-definite ('it failed—why?') to bypass motivated reasoning.
Allocate validation effort by multiplying probability of being wrong by impact of being wrong, not by treating all uncertainties equally.
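The allocation rule above can be sketched as a small triage function: rank each uncertainty by its expected cost, P(wrong) × impact. The claims, probabilities, and impact scores below are hypothetical illustrations, not part of the curriculum.

```python
# A minimal sketch of impact-weighted validation triage.
# Claims, probabilities, and impact scores are illustrative assumptions.

def validation_priority(uncertainties):
    """Rank uncertainties by expected cost: P(wrong) * impact if wrong."""
    return sorted(
        uncertainties,
        key=lambda u: u["p_wrong"] * u["impact"],
        reverse=True,
    )

items = [
    {"claim": "users want feature X", "p_wrong": 0.5, "impact": 9},
    {"claim": "the API rate limit holds", "p_wrong": 0.1, "impact": 2},
    {"claim": "the vendor ships on time", "p_wrong": 0.3, "impact": 8},
]

for u in validation_priority(items):
    print(f"{u['claim']}: expected cost {u['p_wrong'] * u['impact']:.1f}")
```

Note how a moderately likely, high-impact error outranks a near-certain but trivial one: treating all uncertainties equally would waste validation effort on the cheap mistakes.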
Generate multiple independent observable consequences of a schema, then evaluate whether those consequences converge or diverge across different evidence types.
When direct testing is impossible, look for the schema that explains the most diverse types of observations under a single mechanism.
Validate important self-schemas using at least three independent evidence types: behavioral observation, external feedback, and outcome tracking.
When one piece of evidence contradicts four confirming pieces, treat the divergence as diagnostic of either measurement error or a boundary condition, not as noise to dismiss.
Choose schema reviewers who think differently from you, not those who share your framework, because divergent perspectives surface blind spots that similar perspectives cannot see.
Present schemas for review using three diagnostic questions: What assumption am I missing? What would disconfirm this? What alternative explains the same observations?
After receiving schema feedback, wait at least 24 hours before responding in its defense, because immediate response activates argumentation mode rather than evaluation mode.
Document validation results contemporaneously, before hindsight bias rewrites your memory of what you predicted or expected.
Record five components for each validation: the schema before testing, the test performed, what you predicted, what actually happened, and what it means for the schema.
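The five-component record above can be sketched as a simple data structure; the field names and example values are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch of the five-component validation record.
# Field names and the sample entry are illustrative, not canonical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ValidationRecord:
    schema_before: str   # the schema as stated before testing
    test_performed: str  # what you actually did
    prediction: str      # what you expected to happen
    observation: str     # what actually happened, in observational language
    implication: str     # what the result means for the schema
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = ValidationRecord(
    schema_before="Stakeholders prefer written proposals to live demos.",
    test_performed="Sent the proposal doc to three stakeholders before the demo.",
    prediction="At least two will comment on the doc before the meeting.",
    observation="Zero of three commented; all three asked demo questions.",
    implication="Schema disconfirmed for this group; revise or narrow scope.",
)
print(record.observation)
```

Capturing the prediction and the observation as separate, pre-structured fields is what makes the log resistant to retroactive reinterpretation.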
Write validation records in specific observational language ('she pushed back on three of five points') not interpretive language ('she mostly agreed'), to prevent retroactive interpretation drift.
Review validation logs periodically to identify meta-patterns—systematic tendencies in how your schemas fail—rather than treating each validation as an isolated event.
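A periodic log review like the one above can be sketched as a tally of failure modes across entries; the log entries and failure-mode tags are hypothetical.

```python
# A minimal sketch of meta-pattern review over a validation log:
# tally how schemas failed, rather than reading each entry in isolation.
# The log entries and failure-mode tags are illustrative assumptions.
from collections import Counter

log = [
    {"schema": "A", "outcome": "fail", "failure_mode": "missing boundary condition"},
    {"schema": "B", "outcome": "pass", "failure_mode": None},
    {"schema": "C", "outcome": "fail", "failure_mode": "overgeneralized from one case"},
    {"schema": "D", "outcome": "fail", "failure_mode": "missing boundary condition"},
]

failure_modes = Counter(
    entry["failure_mode"] for entry in log if entry["outcome"] == "fail"
)
for mode, count in failure_modes.most_common():
    print(f"{mode}: {count}")
```

A recurring tag, here "missing boundary condition" appearing twice, is the meta-pattern: a systematic tendency in how your schemas fail, invisible to any single validation.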
Document disconfirmations more carefully than confirmations, because confirmations are naturally retained in memory while disconfirming evidence is filtered out.