The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
After making irreversible commitments, schedule reviews with someone uninvolved in the original decision and define success/failure criteria in advance, because escalation of commitment will corrupt your post-decision assessment.
When facing a one-way door decision, actively search for ways to restructure it into a two-way door through trial periods, exit clauses, smaller pilots, or phased rollouts before committing to the heavyweight deliberation process.
For two-way door decisions where reversal costs under one week of effort, set a decision deadline of 24 hours or less regardless of how the decision feels emotionally.
When implementing feature flags, canary deployments, or A/B tests, treat the deployment decision as a two-way door by defining rollback metrics and automated reversal triggers before deployment.
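The pre-defined rollback trigger described above can be sketched as a simple threshold check run against live metrics after deployment. This is a minimal illustration; the metric names and limits are hypothetical, not from the source:

```python
def should_roll_back(metrics, thresholds):
    """Return True if any pre-agreed rollback metric breaches its limit.

    metrics:    current observed values, e.g. {"error_rate": 0.05}
    thresholds: limits defined BEFORE deployment, e.g. {"error_rate": 0.01}
    """
    return any(metrics.get(name, 0) > limit
               for name, limit in thresholds.items())

# Defined before deploying, not after problems appear (illustrative values):
rollback_limits = {"error_rate": 0.01, "p99_latency_ms": 800}
```

The point is that the reversal condition is committed to writing before the deploy, so the two-way door stays open mechanically rather than depending on an in-the-moment judgment call.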
When disagreement persists on a two-way door decision after expressing positions, invoke 'disagree and commit'—explicitly state disagreement, commit to supporting the chosen path, and move forward immediately without seeking consensus.
When hiring or making other search-based decisions from a known pool size, spend the first 37% of candidates or options in pure exploration (reject all, calibrate threshold), then commit to the next candidate that exceeds the best seen in the exploration phase.
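This is the classic secretary-problem stopping rule: observe the first n/e (about 37%) of options without committing, then take the first one that beats everything seen so far. A minimal sketch, assuming candidates can be scored on a single comparable axis:

```python
import math

def exploration_size(pool_size):
    """Number of candidates to observe and reject while calibrating: n/e."""
    return math.floor(pool_size / math.e)

def choose(scores):
    """Return the index of the first post-exploration candidate who beats
    the best seen during exploration; fall back to the last candidate."""
    k = exploration_size(len(scores))
    best_explored = max(scores[:k], default=float("-inf"))
    for i in range(k, len(scores)):
        if scores[i] > best_explored:
            return i
    return len(scores) - 1
```

For a pool of 100, the rule says reject the first 36 while noting the best, then commit to the next candidate who exceeds that benchmark.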
For decisions where options number more than 7, either reduce the option set to 5-7 before evaluation or use elimination criteria to filter before detailed comparison, because choice overload degrades both decision quality and satisfaction above this threshold.
Design pre-commitment rules during cold cognitive states (well-rested, calm, not under deadline pressure) to constrain behavior during hot cognitive states (stressed, depleted, emotionally activated), never vice versa.
When recording a decision in a journal, capture six mandatory elements before outcome is known: date/time, one-sentence decision, reasoning chain, expected outcome (falsifiable), confidence percentage, and current mental/physical state.
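The six mandatory elements can be enforced as a simple record type so an entry cannot be saved with a field missing. A sketch only; the field names are one reasonable encoding of the list above:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class JournalEntry:
    decision: str          # one-sentence statement of the decision
    reasoning: str         # the reasoning chain behind it
    expected_outcome: str  # a falsifiable prediction
    confidence: float      # stated confidence, e.g. 0.7 for 70%
    state: str             # current mental/physical state
    timestamp: datetime = field(default_factory=datetime.now)  # date/time
```

Because all substantive fields are required positional arguments, the entry is captured before the outcome is known or it is not captured at all.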
When reviewing decision journal entries, follow the three-step sequence: (1) re-read original reasoning with outcome hidden, (2) predict outcome based only on original reasoning, (3) uncover actual outcome and compare all three—this sequence defeats hindsight bias.
Across 30+ decision journal entries, calculate your calibration by grouping decisions by stated confidence level (e.g., all 70% predictions) and checking whether that percentage actually occurred—use this ratio to adjust future confidence statements.
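The calibration check above reduces to grouping entries by stated confidence and comparing each group's observed hit rate to the stated number. A minimal sketch, assuming each journal entry yields a (confidence, came_true) pair:

```python
from collections import defaultdict

def calibration_report(entries):
    """entries: iterable of (stated_confidence, came_true) pairs,
    e.g. (0.7, True). Returns {confidence: observed hit rate}."""
    buckets = defaultdict(list)
    for confidence, came_true in entries:
        buckets[confidence].append(came_true)
    return {c: sum(hits) / len(hits) for c, hits in buckets.items()}
```

If the 0.7 bucket shows a hit rate near 0.5, future "70% confident" statements should be revised downward; well-calibrated judges see each bucket's hit rate track its label.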
When reviewing decision outcomes, evaluate process quality independently from result quality by asking 'given what was knowable at decision time, was the reasoning sound?' rather than 'did it work out?'
Apply the 70% information threshold: if you have 70% of the information you wish you had, decide immediately—waiting for 90%+ almost always costs more than the improved decision quality returns.
Before committing to any purchase decision, explicitly name what else that money could buy in different categories (not just competitor products), because mental retrieval naturally limits alternatives to within-category competitors and misses the highest-value cross-category tradeoffs.
Apply the irreversibility test to every delegation candidate: if the decision can be reversed at low cost within one week, delegate it regardless of its perceived importance; if reversal is expensive or impossible, retain it for your direct judgment.
For every delegated decision, specify three mandatory components in writing: the single accountable owner (one person not a committee), the constraints within which they have full authority, and the explicit conditions that trigger escalation back to you.
Before any analysis begins for a decision, explicitly classify it as speed-dominant (reversible, low cost of wrong, high cost of delay) or accuracy-dominant (irreversible, high cost of wrong, low cost of delay), then let that classification dictate process—fast decisions get 15 minutes and bias toward action, slow decisions get structured analysis.
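The pre-analysis classification can be made mechanical with a small function so the process choice is not left to how the decision feels. The labels mirror the atom above; the inputs are coarse low/high judgments:

```python
def classify_decision(reversible, cost_of_wrong, cost_of_delay):
    """Classify BEFORE any analysis begins; let the label dictate process."""
    if reversible and cost_of_wrong == "low" and cost_of_delay == "high":
        return "speed-dominant"      # 15 minutes, bias toward action
    if not reversible and cost_of_wrong == "high":
        return "accuracy-dominant"   # structured analysis
    return "mixed"                   # neither archetype; judge case by case
```

The "mixed" fallback is an assumption of this sketch: decisions that match neither archetype cleanly get explicit case-by-case handling rather than a silent default.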
Evaluate decision quality separately from outcome quality by scoring process and results independently, placing decisions in a 2x2 matrix to distinguish deserved success, bad luck, dumb luck, and deserved failure.
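The 2x2 matrix above maps directly to a lookup over the two independent scores:

```python
def decision_quadrant(process_sound, outcome_good):
    """Place a decision in the process-vs-outcome 2x2 matrix."""
    return {
        (True, True): "deserved success",
        (True, False): "bad luck",
        (False, True): "dumb luck",
        (False, False): "deserved failure",
    }[(process_sound, outcome_good)]
```

The value of the matrix is in the off-diagonal cells: "dumb luck" outcomes should not reinforce the process that produced them, and "bad luck" outcomes should not indict a sound one.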
Before selecting a decision framework, run four diagnostic questions in sequence: (1) How reversible? (2) How many competing criteria? (3) What time horizon of consequences? (4) What is the cost of analysis itself?—using the answers to converge on the appropriate framework class within 60 seconds.
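The four diagnostics can be run as a short routing function. The framework names returned here are illustrative placeholders, not classes named by the source; the sequencing of the questions is the point:

```python
def route_framework(reversible, criteria_count, horizon_years, analysis_cost):
    """Answer the four diagnostics in order, converge on a framework class.
    Framework labels below are hypothetical examples."""
    if analysis_cost == "high" and reversible:
        return "satisfice: take the first acceptable option"   # Q4, Q1
    if not reversible or horizon_years >= 5:
        return "structured analysis"                           # Q1, Q3
    if criteria_count > 3:
        return "weighted scoring on top criteria"              # Q2
    return "fast heuristic: decide within 15 minutes"
```

The 60-second budget in the atom applies to running these questions, not to the decision itself: the router is cheap precisely so that expensive analysis is reserved for decisions that earn it.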
In post-decision review, explicitly add the meta-question 'Did I use the right framework for this decision?' and note framework-decision mismatches (comprehensive analysis on trivial reversible choices, satisficing on irreversible high-stakes decisions) to build your personal routing table.
Audit your work week by categorizing each decision as 'routine' (similar decision made before, could use framework) or 'novel' (requires fresh thinking), then for the five highest-frequency routine decisions, draft simple frameworks (default answer, two-option heuristic, or pre-commitment rule) and implement all five within one week.
Retire metrics entirely when they no longer distinguish between gaming behavior and genuine progress, as continued use of a decoupled metric produces wrong information you trust rather than mere absence of information.
When multiple goals compete for the same scarce resource, match the allocation mechanism to dependency structure—use priority queue when importance differs, rotation when all are equal, and time-slicing when multiple need access within the same period.
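The three allocation mechanisms map onto standard constructs: a heap for priority, round-robin for rotation, and equal shares for time-slicing. A minimal sketch with illustrative goal names:

```python
import heapq
from itertools import cycle, islice

# Priority queue: importance differs (lower number = higher priority).
queue = [(1, "security fix"), (3, "refactor"), (2, "feature")]
heapq.heapify(queue)
first = heapq.heappop(queue)[1]  # highest-priority goal served first

# Rotation: all goals equal — round-robin across turns.
rotation = list(islice(cycle(["A", "B", "C"]), 6))

# Time-slicing: several goals need access within the same period.
def time_slice(goals, period_hours):
    """Split one period evenly across all goals."""
    share = period_hours / len(goals)
    return {g: share for g in goals}
```

The mechanism is chosen by the dependency structure, not by preference: a heap when importance differs, rotation when it does not, and slicing when exclusivity within the period is impossible.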
Before adopting anyone else's recommendation, apply the accountability check: 'Am I willing to own this decision as though it were entirely my own?'—if you would deflect blame to the source upon failure, you have not processed the input as influence.