The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Build error correction mechanisms into systems from the start rather than treating errors as anomalies to be prevented through discipline.
Use errors as information signals about gaps between your model and reality rather than as evidence of personal inadequacy.
Build fast automatic detection systems for execution errors and slower deliberate detection systems for judgment errors.
Monitor your error detection system's own performance to catch systematic blind spots in what you notice.
Apply execution fixes (checklists, automation, environmental redesign) to execution errors, epistemic fixes (new information, expert consultation) to knowledge errors, and calibrational fixes (prediction tracking, external review) to judgment errors.
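The routing rule above can be sketched as a small lookup. This is a minimal illustration, not a prescribed implementation; the type names and fix lists simply mirror the taxonomy stated in the principle.

```python
from enum import Enum

class ErrorType(Enum):
    EXECUTION = "execution"  # slips while carrying out a known procedure
    KNOWLEDGE = "knowledge"  # gaps in information about the world
    JUDGMENT = "judgment"    # miscalibrated predictions or weighting

# Mapping from error type to the matching class of fix,
# taken directly from the principle above.
FIXES = {
    ErrorType.EXECUTION: ["checklist", "automation", "environmental redesign"],
    ErrorType.KNOWLEDGE: ["new information", "expert consultation"],
    ErrorType.JUDGMENT: ["prediction tracking", "external review"],
}

def suggest_fixes(error_type: ErrorType) -> list[str]:
    """Route an error to the fix class that matches its type."""
    return FIXES[error_type]
```

The point of the table is that the fix must match the error's type: a checklist cannot repair a judgment error, and more information cannot repair a slip.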
Test your riskiest assumptions first with the cheapest possible tests before committing significant resources to execution.
Define specific conditions for stopping or pivoting in advance to counteract escalation of commitment once resources are invested.
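Predefined stopping conditions can be made concrete as written-down predicates checked mechanically against project state, so escalation of commitment cannot argue with them later. The condition names and state keys below are hypothetical examples.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StopCondition:
    """A stop/pivot trigger written down before resources are invested."""
    description: str
    predicate: Callable[[dict], bool]  # True when the condition has tripped

def check_stop_conditions(conditions: list[StopCondition], state: dict) -> list[str]:
    """Return the description of every tripped condition; empty means continue."""
    return [c.description for c in conditions if c.predicate(state)]

# Hypothetical conditions, recorded at kickoff rather than mid-crisis:
conditions = [
    StopCondition("spend exceeds budget", lambda s: s["spent"] > s["budget"]),
    StopCondition("no signups after launch", lambda s: s["launched"] and s["signups"] == 0),
]

tripped = check_stop_conditions(
    conditions, {"spent": 120, "budget": 100, "launched": False, "signups": 0}
)
```

Because the predicates were fixed in advance, a tripped condition forces an explicit decision rather than a quiet rationalization.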
Build validation mechanisms that test your work against external reality independent of your internal experience of progress.
Share work-in-progress early to test foundational assumptions before building on them, accepting the discomfort of exposing incomplete work.
Define an explicit acceptable error rate for each system you operate rather than implicitly requiring perfection.
Track error accumulation against your defined budget and trigger investigation when the budget is exhausted, not when individual errors occur.
Allocate correction resources to errors that exhaust budgets rather than distributing effort uniformly across all deviations.
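The error-budget discipline in the three principles above can be sketched as a counter that stays silent on individual errors and triggers investigation only on exhaustion. This is an illustrative sketch; the window/threshold interface is an assumption, not part of the source.

```python
class ErrorBudget:
    """Track errors against an explicit acceptable rate per window of operations.

    Individual errors do not trigger anything; exhausting the budget does.
    """

    def __init__(self, allowed_errors: int, window_ops: int):
        self.allowed = allowed_errors  # explicit acceptable error count per window
        self.window = window_ops       # operations per window
        self.errors = 0
        self.ops = 0

    def record(self, is_error: bool) -> bool:
        """Record one operation; return True when the budget is exhausted
        and investigation should be triggered."""
        self.ops += 1
        self.errors += int(is_error)
        if self.ops >= self.window:
            exhausted = self.errors > self.allowed
            self.errors = 0  # start a fresh window
            self.ops = 0
            return exhausted
        return self.errors > self.allowed  # early exhaustion also triggers
```

Usage: `ErrorBudget(2, 100).record(...)` returns False for the first two errors in a window and True on the third, which is the moment to investigate the system rather than the individual error.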
When the same error recurs, the error is not the problem—the system that generates the error is the problem.
Trace recurring errors to their structural invariant—the constant factor present across all instances—rather than the circumstantial variables that differ between instances.
Ask 'why' iteratively until you reach a cause that is both structurally changeable and preventive of the entire causal chain above it.
When a 'why' produces multiple answers, follow each causal branch separately to reveal the full structure of multi-causal problems.
If your root cause analysis terminates at a person rather than a process, you have not reached the root cause.
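The branching 'why' analysis described above can be represented as a tree whose leaves are candidate root causes, with a flag enforcing the rule that an analysis terminating at a person has not reached the root. The data structure and field names are an illustrative sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Why:
    """One node in a branching five-whys analysis."""
    cause: str
    is_process: bool = True              # root causes must be processes, not people
    branches: list["Why"] = field(default_factory=list)

def leaf_causes(node: Why) -> list[str]:
    """Collect terminal causes across every branch, flagging any that stop at a person."""
    if not node.branches:
        prefix = "" if node.is_process else "NOT ROOT (person, keep asking why): "
        return [prefix + node.cause]
    results: list[str] = []
    for child in node.branches:
        results.extend(leaf_causes(child))
    return results

# Hypothetical multi-causal analysis: one 'why' produced two answers.
tree = Why("deploy failed", branches=[
    Why("config was stale", branches=[Why("no config validation step")]),
    Why("operator skipped a check", is_process=False),
])
```

Following each branch separately exposes that the second leaf terminates at a person and therefore needs at least one more 'why'.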
Design checklists to catch the steps that competent practitioners are most likely to skip under real operational conditions, not every step in the process.
Position checklists at boundaries between preparation and execution, where the cost of catching errors is lowest and the cost of missing them is highest.
Frame checklist items as conditions to verify rather than actions to recall, forcing observation instead of memory retrieval.
Build deliberate friction into verification procedures to prevent them from degrading into automatic, unthinking rituals.
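Framing checklist items as conditions rather than actions can be sketched as a list of observable predicates: the runner reports which conditions are not currently satisfied, forcing observation of state instead of recall of steps. The checklist items and state keys below are hypothetical.

```python
from typing import Callable

# Each item is a condition to verify (an observable predicate),
# not an action to recall.
Check = tuple[str, Callable[[], bool]]

def run_checklist(checks: list[Check]) -> list[str]:
    """Return the conditions that are NOT currently satisfied."""
    return [name for name, verify in checks if not verify()]

# Hypothetical pre-deploy checklist, framed as verifiable states:
state = {"tests_green": True, "backup_exists": False}
checks = [
    ("All tests are green", lambda: state["tests_green"]),
    ("A restorable backup exists", lambda: state["backup_exists"]),
]
failures = run_checklist(checks)
```

Note the phrasing: "a restorable backup exists" makes you look, where "take a backup" only asks whether you remember doing it.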
Conduct post-action reviews within 48 hours while memory of decision points and expectations remains accurate, before reconstructive bias distorts the data.
Separate what you expected from what actually happened by externalizing predictions before outcomes occur, creating a fixed reference that hindsight bias cannot rewrite.
In post-action analysis, terminate causal reasoning at the level of process and structure, not at the level of personal adequacy or character.
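Externalizing predictions before outcomes, as the review principles above require, amounts to an append-only log: expectations are recorded first, outcomes attached later, and the original claim is never rewritten. The class and scoring choice (a Brier score) are an illustrative sketch, not a method named in the source.

```python
import time

class PredictionLog:
    """Append-only log: predictions are written down before outcomes occur,
    creating a fixed reference that hindsight bias cannot rewrite."""

    def __init__(self):
        self._entries: list[dict] = []

    def predict(self, claim: str, confidence: float) -> int:
        """Record an expectation before acting; returns an entry id."""
        self._entries.append({"claim": claim, "confidence": confidence,
                              "recorded_at": time.time(), "outcome": None})
        return len(self._entries) - 1

    def resolve(self, entry_id: int, came_true: bool) -> dict:
        """Attach the outcome; the original prediction stays as written."""
        entry = self._entries[entry_id]
        entry["outcome"] = came_true
        return entry

    def brier_score(self) -> float:
        """Mean squared gap between stated confidence and actual outcome
        (lower is better) across resolved predictions."""
        resolved = [e for e in self._entries if e["outcome"] is not None]
        return sum((e["confidence"] - float(e["outcome"])) ** 2
                   for e in resolved) / len(resolved)
```

Comparing the fixed prediction to the outcome during the post-action review separates "what I expected" from "what happened", and the score trends calibration over time without assigning blame.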