The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Delegate tasks that others can complete at 70% of your quality level when adequate work done reliably by them creates more system value than excellent work that depends on your availability.
Explicitly name the level of autonomy you are granting in each delegation to prevent mismatches between intended and assumed authority.
Match delegation level to the combination of task reversibility, delegate's task-specific competence, and the trajectory of trust rather than using a fixed level per person.
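A minimal sketch of that matching, assuming an illustrative 0-1 competence estimate; the level names, weights, and thresholds are invented for this example, not a standard:

```python
# Hypothetical delegation-level chooser. The scoring weights and cutoffs
# below are illustrative assumptions, not a validated model.
LEVELS = [
    "execute exactly as specified",   # lowest autonomy
    "recommend, delegator decides",
    "decide, then report back",
    "decide, no report needed",       # highest autonomy
]

def delegation_level(reversible: bool, competence: float, trust_rising: bool) -> str:
    """competence: rough 0-1 estimate of task-specific skill."""
    score = competence
    score += 0.2 if reversible else -0.2    # reversible mistakes are cheap
    score += 0.1 if trust_rising else -0.1  # trajectory, not a fixed label
    if score < 0.4:
        return LEVELS[0]
    if score < 0.7:
        return LEVELS[1]
    if score < 0.9:
        return LEVELS[2]
    return LEVELS[3]

# Reversible task, competent delegate, rising trust: grant full autonomy.
print(delegation_level(True, 0.8, True))  # decide, no report needed
```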
Practice delegation sub-skills deliberately—specification clarity, calibration accuracy, feedback delivery, and discomfort tolerance—through structured repetition with clear goals rather than merely accumulating delegation experience.
Design distributed regulatory systems with requisite variety matching the complexity of what they regulate rather than attempting personal control of all decisions.
Control outcomes by designing the system within which decisions are made—boundaries, feedback loops, standards, escalation criteria—rather than controlling individual decisions directly.
Invest the initial cost of clear specification, capable delegate selection, and verification loops to create delegation that produces multiplicative rather than additive returns.
Delegate the building of systems that handle recurring task classes, rather than individual task instances, to create compounding capacity.
Convert captured surprises into precisely formed open questions that act as persistent attentional filters, directing future information gathering toward resolving the revealed model gap.
Before delegating a task, determine whether the task should exist at all to avoid optimizing execution of low-value work.
Route work to the appropriate agent—human, system, tool, or environment—based on who can handle it at equivalent or better quality rather than defaulting to personal execution.
Invest the time reclaimed by delegation into higher-leverage work—system design, strategic thinking, relationship building—rather than filling it with more execution tasks, so the gains compound.
Design systems that route incoming work to appropriate agents without your direct involvement in each decision, so delegation scales.
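A minimal routing sketch under assumed rules; the task fields, rule table, and agent names are hypothetical, only showing the shape of first-match dispatch:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str
    urgent: bool

# Ordered (predicate, agent) rules; the first matching rule wins.
ROUTES: list[tuple[Callable[[Task], bool], str]] = [
    (lambda t: t.kind == "report" and not t.urgent, "automation"),
    (lambda t: t.kind == "support", "support_team"),
    (lambda t: t.urgent, "owner"),  # escalation fallback for urgent work
]

def route(task: Task) -> str:
    for predicate, agent in ROUTES:
        if predicate(task):
            return agent
    return "triage_queue"  # default when no rule matches

print(route(Task(kind="report", urgent=False)))  # automation
```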
Instrument systems with the minimum set of metrics whose movement would actually change your behavior, because observation itself consumes the cognitive resources needed to act on what you observe.
Define success operationally as specific measurement procedures rather than abstract qualities, because unmeasurable definitions prevent detection of the gap between felt understanding and actual performance.
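For instance, "responses are fast enough" becomes a procedure; the percentile, threshold, and window below are assumptions chosen for illustration:

```python
import math

def success(latencies_ms: list[float], p: float = 0.95, limit_ms: float = 300.0) -> bool:
    """Operational definition: the p-th percentile latency over the sampled
    window must not exceed limit_ms. Requires a non-empty sample."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(p * len(ordered))  # nearest-rank percentile
    return ordered[rank - 1] <= limit_ms

print(success([120.0, 180.0, 250.0, 900.0]))  # False: the slow tail breaches the limit
```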
Pair quantitative metrics with qualitative judgment to prevent surrogation where the measure psychologically replaces the construct it was designed to represent.
Match monitoring frequency to the rate at which meaningful change occurs in each system, sampling at least twice as fast as the fastest change you need to detect to avoid aliasing real patterns.
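A one-line sketch of that rule (the Nyquist criterion applied to monitoring); the 24-hour figure is only an example:

```python
def monitoring_interval(fastest_change_period_hours: float) -> float:
    """Check at least twice per period of the fastest meaningful change,
    so real patterns are not aliased into phantom ones."""
    return fastest_change_period_hours / 2.0

# A system whose state can shift meaningfully within a day
# should be checked at least every 12 hours.
assert monitoring_interval(24.0) == 12.0
```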
Use event-driven monitoring for acute failures that require immediate response and scheduled reviews for trend analysis, because neither alone provides sufficient signal across different failure timescales.
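A sketch of the two channels side by side; the threshold, window, and storage are assumptions, not a prescribed design:

```python
alerts: list[str] = []     # event-driven channel: acute failures
history: list[float] = []  # scheduled channel: raw samples for trend review

def on_measurement(error_rate: float, acute_threshold: float = 0.05) -> None:
    history.append(error_rate)
    if error_rate > acute_threshold:  # fire immediately, no waiting
        alerts.append(f"acute failure: error rate {error_rate:.2%}")

def scheduled_review(window: int = 7) -> float:
    """Trend channel: mean error rate over the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent) if recent else 0.0

for r in (0.01, 0.02, 0.09, 0.02):
    on_measurement(r)
print(alerts)              # ['acute failure: error rate 9.00%']
print(scheduled_review())  # ~0.035
```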
Consolidate all agent status information onto a single surface visible without navigation, because distributed monitoring increases the cognitive cost of checking below the threshold where checking actually occurs.
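An illustrative rollup of every agent onto one line (names invented):

```python
statuses = {"scraper": "ok", "indexer": "degraded", "mailer": "ok"}

# One glance instead of N navigations: the whole fleet on a single line.
print(" | ".join(f"{name}: {state}" for name, state in sorted(statuses.items())))
# indexer: degraded | mailer: ok | scraper: ok
```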
Evaluate decisions by process quality rather than outcome, using documented reasoning and predictions to separate signal (decision skill) from noise (outcome luck).
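A sketch of a record that makes this separation possible: reasoning and probability estimates are captured before the outcome exists, so later review scores calibration rather than luck. Field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionRecord:
    decision: str
    reasoning: str
    predictions: dict[str, float]  # outcome -> estimated probability
    outcome: Optional[str] = None  # filled in only after the fact

rec = DecisionRecord(
    decision="delegate the quarterly report",
    reasoning="task is reversible; delegate has done two similar reports",
    predictions={"usable on first pass": 0.7, "needs one revision": 0.25},
)
# Later: rec.outcome = "usable on first pass"; compare against predictions.
```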
Include temporal trend information alongside point-in-time status, because current state on a declining trend is more alarming than degraded state on a recovery trend.
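A minimal sketch: level and direction reported together, with a crude first-difference trend over the sample window:

```python
def status_with_trend(samples: list[float]) -> str:
    """Latest level plus direction over the window (needs >= 2 samples)."""
    level = samples[-1]
    delta = samples[-1] - samples[0]  # crude slope: last minus first
    direction = "improving" if delta > 0 else "declining" if delta < 0 else "flat"
    return f"{level:.2f} ({direction})"

print(status_with_trend([0.97, 0.94, 0.90]))  # 0.90 (declining) -- alarming
print(status_with_trend([0.80, 0.84, 0.88]))  # 0.88 (improving) -- recovering
```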
Establish an error budget that defines acceptable unreliability as a resource to spend on experimentation rather than a failure to eliminate, because pursuing perfect reliability prevents the risk-taking required for growth.
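The arithmetic behind an error budget, in the form SRE practice uses it; the 99.9% target is an example, not a recommendation:

```python
def remaining_error_budget(target: float, observed_uptime: float) -> float:
    """Both arguments are availability fractions, e.g. 0.999 and 0.9995."""
    budget = 1.0 - target            # unreliability you are allowed
    spent = 1.0 - observed_uptime    # unreliability already incurred
    return budget - spent            # positive: room left to take risks

# A 99.9% target with 99.95% observed uptime leaves half the budget
# unspent -- capacity for experiments, not a failure to eliminate.
print(remaining_error_budget(0.999, 0.9995))  # ~0.0005
```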
Calculate mean time between failures (MTBF) and mean time to recovery (MTTR) separately, because agents with the same failure rate can have dramatically different impact depending on recovery speed.
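A sketch of the two measures computed separately from an incident log; timestamps are in hours, and the log format is an assumption:

```python
# (failure_time, recovery_time) pairs, in hours since an arbitrary epoch.
incidents = [(10.0, 10.5), (50.0, 51.0), (90.0, 90.2)]

# MTTR: average downtime per incident.
mttr = sum(end - start for start, end in incidents) / len(incidents)

# MTBF: average uptime between one recovery and the next failure.
gaps = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
mtbf = sum(gaps) / len(gaps)

# Same failure count, very different impact profile depending on recovery.
print(f"MTBF={mtbf:.1f}h MTTR={mttr:.2f}h")  # MTBF=39.2h MTTR=0.57h
```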
Monitor multiple dimensions of effectiveness simultaneously to prevent metric gaming, because any single measure is corrupted once it becomes the target.