The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Humans exhibit egocentric anchoring when modeling others' perspectives, beginning from their own viewpoint and adjusting insufficiently to account for differing knowledge, beliefs, or perceptual access.
The human brain automatically generates and perceives patterns, relationships, and regularities in sensory input, prior to and often independent of conscious verification. This process is subject to systematic biases such as confirmation bias, which preferentially encodes pattern-consistent information while filtering out contradictory evidence.
When you feel you have 'thoroughly considered' a decision, treat that feeling as a warning signal requiring additional externalized inventory, because WYSIATI (What You See Is All There Is) creates confidence from narrative coherence rather than completeness.
For significant decisions, add a fifth field documenting pre-mortem risks (2-3 specific ways the decision could fail) to preserve concerns that hindsight bias will erase.
Before acting on snap judgments during debugging or incident response, read system logs and dashboards for five minutes without proposing theories to prevent hypothesis anchoring from corrupting observation.
When debugging with strong initial hypotheses about root cause, deliberately search logs for evidence that would falsify the hypothesis rather than confirm it, to counteract confirmation bias in data collection.
Before committing to a hypothesis about a bug's cause, write one sentence completing 'What would I expect to see if I were wrong?' then specifically search for that evidence before continuing the investigation.
When confidence in a technical conclusion exceeds 8/10, treat that high confidence as a trigger to increase scrutiny and deliberately search for disconfirming evidence rather than reducing verification effort.
When you encounter the same judgment arising across three or more different contexts, treat it as a structural cognitive habit requiring explicit examination rather than a series of independent assessments, because cross-context repetition indicates the judgment is executing from pattern-matching rather than situation-specific analysis.
For habitual judgments that fire too quickly to intercept before they execute, implement a marking protocol where you log each occurrence with 'HJ' notation immediately after noticing it, because marking converts the judgment from invisible background process to observable object without requiring the neurologically impossible task of preventing automatic activation.
When noticing a habitual judgment about a person or group, apply the substitution test by asking whether you would make the same evaluation if a different person or group were in the identical situation—if the answer is uncertain or negative, you have detected a judgment running on identity rather than behavior.
Before using AI for pattern analysis on observational data, ensure your input consists of descriptive observations rather than evaluative conclusions by applying the camera test to each input statement, because AI analyzing your conclusions produces confirmation of your biases rather than structural insights.
Record each occurrence of a suspected pattern with date and context in a dedicated log before drawing conclusions, because memory-based frequency assessment systematically overestimates recurrence through the frequency illusion.
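A minimal sketch of such an occurrence log in Python. The file name, column names, and helper functions are illustrative, not part of the original rule; the point is that frequency questions get answered by counting rows, not by recalling.

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("pattern_log.csv")  # illustrative file name

def record(pattern: str, context: str) -> None:
    """Append one dated occurrence; draw conclusions later, from counts."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "pattern", "context"])
        writer.writerow([date.today().isoformat(), pattern, context])

def count(pattern: str) -> int:
    """Recorded frequency, replacing memory-based (frequency-illusion-prone) estimates."""
    if not LOG.exists():
        return 0
    with LOG.open() as f:
        return sum(1 for row in csv.DictReader(f) if row["pattern"] == pattern)
```

Logging before concluding also captures the context column, which later lets you check whether the "pattern" is actually confined to one situation.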
When expert pattern recognition locks onto a familiar solution, force consideration of alternatives by asking 'I see X—but what if it is not X? What would I expect if it were Y?' before committing.
When angry, deliberately seek disconfirming evidence and independent risk assessments, as anger systematically inflates certainty, deflates risk perception, and increases risk-seeking behavior.
When using AI during high stress, prompt with 'I am stressed and may be experiencing tunnel vision—what am I likely not seeing?' rather than 'prove my interpretation is right.'
When making frequency or probability estimates, pause and ask: 'Am I estimating actual frequency, or how easily I can picture this?' then look up the base rate before deciding.
Before any consequential decision, populate two mandatory fields: the recent event driving current feeling, and the base rate/historical trend across the full relevant time window.
When a vivid individual case makes you feel certain about probability, explicitly ask 'what is the actual frequency of this event in the relevant population?' before forming any judgment.
Frame pre-mortem prompts as 'It is [future date]. This has failed completely. Write why.' rather than 'What could go wrong?' to shift cognition from speculation to explanation.
After accumulating 15-20 judgments in the same domain, analyze whether errors cluster directionally (bias requiring correction factor) or scatter randomly (noise requiring aggregation).
For each identified bias in your profile, write a specific pre-correction question or procedure to execute before acting on judgments in that domain.
After identifying that you are systematically overconfident on timelines by X%, multiply your initial timeline estimates by (1 + X/100) before stating them publicly.
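The correction is simple arithmetic; a one-function sketch (the 40% overrun and 10-day estimate below are illustrative numbers):

```python
def corrected_timeline(raw_estimate_days: float, overrun_pct: float) -> float:
    """Scale an initial gut estimate by the measured historical overrun."""
    return raw_estimate_days * (1 + overrun_pct / 100)

# If past projects ran 40% over, a 10-day gut estimate becomes 14 days.
```

The key discipline is applying the multiplier before stating the estimate publicly, since a published number anchors both you and your audience.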
Record decision context at the moment of commitment using five elements: (1) decision statement, (2) forces/constraints/emotions active at choice point, (3) expected consequences with timeline, (4) confidence level 1-10, (5) review trigger date—before hindsight bias can rewrite your reasoning.
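The five elements map naturally onto a record type. A minimal sketch in Python, with illustrative field names; validating confidence at construction enforces the 1-10 scale:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Captured at the moment of commitment, before hindsight bias can rewrite it."""
    decision: str             # (1) decision statement
    forces: list[str]         # (2) forces/constraints/emotions at the choice point
    expected_consequences: str  # (3) expected consequences, with timeline
    confidence: int           # (4) confidence level, 1-10
    review_date: date         # (5) trigger date for the follow-up review
    recorded_on: date = field(default_factory=date.today)

    def __post_init__(self) -> None:
        if not 1 <= self.confidence <= 10:
            raise ValueError("confidence must be between 1 and 10")
```

On the review date, compare expected_consequences against what actually happened; the stored forces field preserves why the decision looked reasonable at the time.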