The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When you feel you have 'thoroughly considered' a decision, treat that feeling as a warning signal requiring additional externalized inventory, because WYSIATI (What You See Is All There Is) creates confidence from narrative coherence rather than completeness.
When someone challenges one part of your compound plan and you defend the whole thing, treat this as a diagnostic signal that you're still operating on fused ideas rather than independent assumptions.
When presenting compound statements to AI systems, explicitly ask for assumption enumeration rather than direct answers, then critically verify the decomposition's completeness since the AI may introduce its own hidden assumptions.
Store evidence with full methodological metadata (sample size, control conditions, limitations) as independent nodes rather than as decorative citations on claims, to enable proportionality assessment and multi-argument reuse.
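The evidence-as-independent-node idea above can be sketched as a small data structure. This is a minimal illustration, not a prescribed schema: the class name `EvidenceNode`, the field names (`sample_size`, `controls`, `limitations`), and the crude `weight()` thresholds are all assumptions introduced here for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceNode:
    # Claims this evidence supports; a list enables multi-argument reuse.
    claim_ids: list
    source: str
    # Methodological metadata stored on the node itself, not as decoration.
    sample_size: int
    controls: list = field(default_factory=list)
    limitations: list = field(default_factory=list)

    def weight(self) -> str:
        """Crude proportionality check (illustrative thresholds):
        call evidence 'strong' only with controls and a non-trivial sample."""
        if self.sample_size >= 100 and self.controls:
            return "strong"
        return "weak"

node = EvidenceNode(
    claim_ids=["standups-reduce-blockers"],
    source="internal survey, 2023",
    sample_size=12,
    limitations=["self-reported", "single team"],
)
print(node.weight())  # small, uncontrolled sample -> weak
```

Because the metadata travels with the node, any argument that reuses it can reassess proportionality instead of inheriting a bare citation.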
Before forcing resolution of contradictory observations or beliefs, accumulate multiple instances in a contradiction log to enable detection of patterns invisible in any single contradiction.
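A contradiction log can be as simple as an append-only list plus a recurrence query. This is a hypothetical sketch: the class name, entry fields, and the `min_count=3` threshold are assumptions chosen for illustration.

```python
from collections import defaultdict
from datetime import date

class ContradictionLog:
    def __init__(self):
        self.entries = []

    def record(self, topic, observation_a, observation_b, when=None):
        # Append without resolving; resolution is deferred until patterns emerge.
        self.entries.append({
            "topic": topic,
            "a": observation_a,
            "b": observation_b,
            "date": when or date.today(),
        })

    def recurring(self, min_count=3):
        """Topics whose contradictions recur: candidates for pattern analysis."""
        counts = defaultdict(int)
        for e in self.entries:
            counts[e["topic"]] += 1
        return [t for t, n in counts.items() if n >= min_count]
```

The point of the structure is that `recurring()` only becomes informative after several entries, which is exactly why individual contradictions should not be force-resolved on arrival.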
For every high-stakes term in your reasoning (quality, success, productive, fair), write an operational definition specifying observable conditions that must be true for the term to apply, then store that definition as a canonical reference atom in your knowledge system.
When two people or two parts of your own thinking use the same term with persistent conflict, pause the debate and conduct a definition audit: have each party write their operational definition independently, then compare—if definitions diverge, the conflict is definitional not factual and should be resolved at the definition level.
When two schemas of the same situation diverge between people, treat the divergence itself as information: the territory contains complexity that neither schema fully captured.
When splitting a compound note during refactoring, make explicit decisions about which idea is the core claim, what was supporting evidence versus separate argument, and how the pieces causally relate before completing the split.
When debugging with strong initial hypotheses about root cause, deliberately search logs for evidence that would falsify the hypothesis rather than confirm it, to counteract confirmation bias in data collection.
Before committing to a hypothesis about a bug's cause, write one sentence completing 'What would I expect to see if I were wrong?' then specifically search for that evidence before continuing the investigation.
When confidence in a technical conclusion exceeds 8/10, treat that high confidence as a trigger to increase scrutiny and deliberately search for disconfirming evidence rather than reducing verification effort.
Before any recurring meeting or code review, spend 5 minutes writing down which topics never get discussed, which people never speak, and which failure modes are never mentioned, listing at least five absences.
Before sending difficult emails or presenting challenging conclusions, run your draft through fact-story filtering by asking which statements would survive if you had to prove them with timestamps, screenshots, or measurements, because this prevents narrative from masquerading as evidence.
When you encounter the same judgment arising across three or more different contexts, treat it as a structural cognitive habit requiring explicit examination rather than a series of independent assessments, because cross-context repetition indicates the judgment is executing from pattern-matching rather than situation-specific analysis.
Validate cross-domain pattern candidates by verifying that the relational structure (not surface similarity) matches across domains—two patterns share structure when the causal relationships between elements are preserved even when the elements themselves differ completely.
Before optimizing around a perceived positive pattern, verify through deliberate removal tests whether the pattern persists when suspected causal factors are absent.
Before building optimization systems around a personal correlation, test whether the correlation survives when you control for potential confounding variables through deliberate experimental variation.
When a pattern appears to reverse across subgroups in your data, disaggregate by relevant context variables (sleep, stress, social setting) before drawing conclusions from the aggregate pattern.
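The reversal described above is the classic Simpson's paradox shape, and it can be demonstrated with a few lines of grouping code. The data below is fabricated purely for illustration: in the aggregate, caffeine looks associated with worse focus, but within each sleep subgroup it is associated with better focus.

```python
# Each row: (sleep level, caffeine taken, focus score). Fabricated toy data.
rows = [
    ("low", 1, 3), ("low", 1, 3), ("low", 1, 3), ("low", 0, 2),
    ("high", 1, 7), ("high", 0, 6), ("high", 0, 6), ("high", 0, 6),
]

def mean_focus(data, caffeine, sleep=None):
    """Mean focus for a caffeine condition, optionally restricted to a sleep subgroup."""
    vals = [f for s, c, f in data
            if c == caffeine and (sleep is None or s == sleep)]
    return sum(vals) / len(vals)

print(mean_focus(rows, caffeine=1))  # aggregate with caffeine:    4.0
print(mean_focus(rows, caffeine=0))  # aggregate without caffeine: 5.0 (caffeine looks worse)
print(mean_focus(rows, 1, "low"), mean_focus(rows, 0, "low"))    # 3.0 2.0 (caffeine better)
print(mean_focus(rows, 1, "high"), mean_focus(rows, 0, "high"))  # 7.0 6.0 (caffeine better)
```

The aggregate conclusion reverses because caffeine use is confounded with sleep level in this toy data, which is why disaggregating by context variables must precede any conclusion from the pooled numbers.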
When identifying meta-patterns, require each second-order claim to ground in at least three documented first-order pattern instances to distinguish genuine meta-patterns from intellectual speculation.
Before concluding a pattern is meaningful, verify it survives three independent filters: sample size check (occurrences vs. opportunities), base rate comparison (frequency vs. background rate), and alternative explanation generation (minimum two alternatives).
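The three filters above can be made explicit as a checklist function. A minimal sketch, assuming illustrative thresholds: the minimum of 20 opportunities and the requirement that the observed rate exceed twice the background rate are arbitrary choices for the example, not part of the original rule (which only fixes the minimum of two alternative explanations).

```python
def pattern_survives(occurrences, opportunities, background_rate, alternatives):
    """Run a candidate pattern through three independent filters.

    Returns (passed_all, per_filter_results). Thresholds are illustrative.
    """
    checks = {
        # Filter 1: enough opportunities for the count to mean anything.
        "sample_size": opportunities >= 20,
        # Filter 2: observed frequency clearly above the background rate.
        "base_rate": (occurrences / opportunities) > 2 * background_rate,
        # Filter 3: at least two alternative explanations were generated.
        "alternatives": len(alternatives) >= 2,
    }
    return all(checks.values()), checks
```

Returning the per-filter results alongside the verdict makes it obvious which filter killed a candidate, rather than collapsing the three checks into a single yes/no.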
Maintain two separate lists—'Pattern Candidates' and 'Confirmed Patterns'—promoting candidates only after they survive three independent observations, one alternative-explanation check, and one successful prediction.
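The two-list workflow can be sketched as a small tracker whose promotion criteria mirror the rule: three observations, one alternative-explanation check, and one successful prediction. The class and method names are hypothetical, introduced only for this sketch.

```python
class PatternTracker:
    def __init__(self):
        self.candidates = {}  # name -> evidence record
        self.confirmed = []

    def observe(self, name):
        rec = self.candidates.setdefault(
            name, {"observations": 0, "alt_checked": False, "predicted": False})
        rec["observations"] += 1
        self._maybe_promote(name)

    def check_alternatives(self, name):
        self.candidates[name]["alt_checked"] = True
        self._maybe_promote(name)

    def record_prediction(self, name, correct):
        if correct:
            self.candidates[name]["predicted"] = True
        self._maybe_promote(name)

    def _maybe_promote(self, name):
        # Promote only when all three criteria are met, then remove the candidate.
        rec = self.candidates.get(name)
        if rec and rec["observations"] >= 3 and rec["alt_checked"] and rec["predicted"]:
            self.confirmed.append(name)
            del self.candidates[name]
```

Keeping promotion inside a single `_maybe_promote` gate means no code path can move a candidate to the confirmed list without satisfying all three criteria.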
When information triggers strong emotion, restate it with all emotionally loaded language stripped before evaluating whether the neutral version warrants the reaction the framed version produced.
Before claiming to understand a complex topic, generate at least three concrete examples from different domains that instantiate the concept, and if you cannot produce varied examples, treat this as evidence that you have acquired vocabulary without understanding.