The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When using AI during high stress, prompt with 'I am stressed and may be experiencing tunnel vision—what am I likely not seeing?' rather than 'prove my interpretation is right.'
When making frequency or probability estimates, pause and ask: 'Am I estimating actual frequency, or how easily I can picture this?' then look up the base rate before deciding.
Before any consequential decision, populate two mandatory fields: the recent event driving current feeling, and the base rate/historical trend across the full relevant time window.
When a vivid individual case makes you feel certain about probability, explicitly ask 'what is the actual frequency of this event in the relevant population?' before forming any judgment.
Frame pre-mortem prompts as 'It is [future date]. This has failed completely. Write why.' rather than 'What could go wrong?' to shift cognition from speculation to explanation.
When searching for disconfirming evidence, if your search could not have actually changed your mind, you performed a ritual, not genuine disconfirmation—redesign the search until failure is possible.
When new evidence arrives, classify it by diagnostic value before updating—ask whether you'd see this evidence regardless of belief truth versus only if belief were true/false.
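The diagnosticity test above is, in effect, a likelihood-ratio update. A minimal sketch (the probabilities are hypothetical, chosen only to illustrate the contrast):

```python
def update_belief(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """Update P(belief) after seeing evidence E, via the likelihood ratio.

    Evidence you'd see regardless of the belief's truth has
    p_e_if_true ~= p_e_if_false (ratio ~= 1) and should barely move you;
    evidence far likelier under one hypothesis is diagnostic and should.
    """
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_e_if_true / p_e_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Non-diagnostic evidence (seen either way): belief barely moves.
print(round(update_belief(0.50, 0.80, 0.75), 3))  # 0.516
# Diagnostic evidence (far likelier if the belief is true): big move.
print(round(update_belief(0.50, 0.80, 0.10), 3))  # 0.889
```

Classifying evidence first amounts to estimating that likelihood ratio before touching the prior.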
After accumulating 15-20 judgments in the same domain, analyze whether errors cluster directionally (bias, requiring a correction factor) or scatter randomly (noise, requiring aggregation).
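The bias-versus-noise distinction can be operationalized by comparing the mean of the signed errors to their spread. A sketch under an assumed (illustrative, not established) cutoff:

```python
from statistics import mean, stdev

def diagnose_errors(errors: list[float]) -> str:
    """Classify a batch of signed judgment errors (estimate - actual).

    A mean far from zero relative to the spread suggests directional
    bias (apply a correction factor); a near-zero mean with large spread
    suggests noise (aggregate multiple independent judgments instead).
    The 0.5 * stdev threshold is an illustrative cutoff, not a standard.
    """
    m, s = mean(errors), stdev(errors)
    return "bias" if abs(m) > 0.5 * s else "noise"

# Consistently overestimating by ~20%: errors cluster directionally.
print(diagnose_errors([0.22, 0.18, 0.25, 0.15, 0.21]))    # bias
# Errors scattered around zero: no direction to correct for.
print(diagnose_errors([0.30, -0.25, 0.10, -0.15, 0.05]))  # noise
```

With 15-20 judgments the estimate of the mean is rough, which is why the atom asks you to accumulate a batch before diagnosing.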
Conduct a two-week bias journal recording significant judgments with confidence levels, then categorize errors by direction and type to build your personal bias profile.
For each identified bias in your profile, write a specific pre-correction question or procedure to execute before acting on judgments in that domain.
After four weeks of belief tracking, examine whether beliefs barely moved despite evidence (conservatism) or swung dramatically on single data points (base rate neglect) to identify domain-specific updating patterns.
Before interpreting any piece of information—a message, a metric, a statement, a data point—run a five-question context scan: What environment am I in? What role am I occupying? What just happened that might color my perception? What are the goals (mine and others')? What assumptions am I importing from a different context?
Record decision context at the moment of commitment using five elements: (1) decision statement, (2) forces/constraints/emotions active at choice point, (3) expected consequences with timeline, (4) confidence level 1-10, (5) review trigger date—before hindsight bias can rewrite your reasoning.
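The five elements above can be captured as a small structured record; the field names here are illustrative, not prescribed by the source:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """Decision context captured at the moment of commitment,
    before hindsight bias can rewrite the reasoning."""
    decision: str                 # (1) the decision statement
    forces: list[str]             # (2) constraints/emotions active at the choice point
    expected_consequences: str    # (3) expected outcome, with timeline
    confidence: int               # (4) confidence level, 1-10
    review_date: date             # (5) trigger date for re-examination

    def __post_init__(self):
        if not 1 <= self.confidence <= 10:
            raise ValueError("confidence must be between 1 and 10")

# Hypothetical example of a record written at commitment time.
record = DecisionRecord(
    decision="Ship the beta to 10% of users",
    forces=["deadline pressure", "incomplete load testing"],
    expected_consequences="Error rate stays under 1% through week two",
    confidence=6,
    review_date=date(2025, 3, 1),
)
```

Writing the record at commitment time, not at review time, is what makes the later comparison honest.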
When reviewing a past decision, read the original context record before evaluating the outcome, because evaluating outcome first allows hindsight bias to contaminate your assessment of whether the reasoning was sound.
Set the threshold for decision context documentation at any choice where you deliberated between options for more than sixty seconds, because if you considered alternatives consciously, the reasoning is worth preserving against memory reconstruction.
When importing best practices or frameworks from another era or scale, explicitly verify that the contextual conditions (organizational size, technological infrastructure, market maturity) that made the practice optimal still hold before adopting it.
For each high-stakes word in decisions or commitments (quality, ownership, alignment, done, strategy), require independent operational definitions from each stakeholder before proceeding, then compare and reconcile the definitions explicitly.
When an AI system makes consequential decisions about people (hiring, performance evaluation, resource allocation), audit the organizational context and metrics the system was trained on before evaluating algorithm quality, because AI inherits and amplifies the biases of the measurement system.
For any group decision, commit your reasoning and conclusion to a private written position before the discussion begins, then compare it to your post-discussion position to detect social influence effects.
Before removing any inherited system, process, or organizational structure, document why it was originally created and what problem it solved—if this context cannot be reconstructed, you lack sufficient information to safely remove it.
Before evaluating any past decision, reconstruct the information environment that existed at decision time using contemporaneous records rather than memory, then evaluate the decision against that environment only.
Document decisions using five fields: what you decided, alternatives considered, information available and missing, optimization criteria, and conditions for revisiting—rather than recording only conclusions.
Before finalizing significant decision records, have an AI argue against your reasoning and append the strongest objection to your record, preserving the full deliberation rather than only your preferred conclusion.
During weekly reviews, cross-reference externalized domains to detect contradictions—compare stated priorities against time allocation, goals against commitments, assumptions against failure analyses—because isolated review of each domain misses the conflicts that degrade decision quality.
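One of those cross-references, stated priorities against time allocation, can be sketched mechanically; the function and domain names are hypothetical:

```python
def contradiction_scan(stated_priorities: list[str],
                       hours_logged: dict[str, float]) -> list[str]:
    """Flag stated priorities that received less time than lower-ranked ones.

    stated_priorities is ordered most- to least-important; hours_logged
    maps each domain to the hours actually spent that week. Reviewing
    either list alone would miss these conflicts.
    """
    flags = []
    for i, higher in enumerate(stated_priorities):
        for lower in stated_priorities[i + 1:]:
            if hours_logged.get(higher, 0) < hours_logged.get(lower, 0):
                flags.append(f"'{higher}' ranked above '{lower}' but got less time")
    return flags

# Hypothetical week: every stated priority lost out to a lower-ranked one.
for flag in contradiction_scan(
    ["deep work", "hiring", "email"],
    {"deep work": 4.0, "hiring": 6.5, "email": 9.0},
):
    print(flag)
```

The same pairwise-comparison pattern extends to goals versus commitments or assumptions versus failure analyses, given a comparable measure for each domain.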