The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Start automation at the lowest-tech level that solves the problem (checklists, templates, calendar events) before considering platforms or scripts, because simplicity reduces maintenance burden.
Rank operational habits into three tiers (minimum viable, performance-improving, optimizations) so you know which to preserve under moderate disruption (tiers 1-2) and severe disruption (tier 1 only).
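The tier-preservation rule above can be sketched in code. This is a minimal illustration, not part of the curriculum; the habit names and the `habits` dict structure are assumptions for the example.

```python
def habits_to_preserve(habits, disruption):
    """Return the habits to keep under a given disruption level.

    habits: dict mapping habit name -> tier (1, 2, or 3)
    disruption: "none", "moderate", or "severe"
    """
    # Severe disruption keeps only tier 1; moderate keeps tiers 1-2.
    max_tier = {"none": 3, "moderate": 2, "severe": 1}[disruption]
    return {name for name, tier in habits.items() if tier <= max_tier}

# Illustrative habit inventory (hypothetical names).
habits = {"daily shutdown ritual": 1, "weekly review": 2, "inbox zero": 3}
habits_to_preserve(habits, "severe")  # only the tier 1 habit survives
```

The point of encoding the tiers explicitly is that the decision of what to drop is made in advance, not mid-disruption.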
Restart operations sequentially after a disruption: restore tier 1 first, confirm stability, then add tier 2, then tier 3, rather than attempting to restart all habits simultaneously.
Add a single recurring check at the end of each weekly review that compares your operational handbook to your actual operations and updates discrepancies immediately.
Run operational systems at 70-85% of measured capacity to maintain adaptive buffer, because systems at full utilization cannot absorb environmental change without breaking.
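The 70-85% band above is simple arithmetic; a sketch of the check, with the band thresholds taken directly from the atom:

```python
def utilization_status(load, capacity):
    """Classify utilization against the 70-85% adaptive-buffer band."""
    u = load / capacity
    if u < 0.70:
        return "underloaded"      # room to take on more
    if u <= 0.85:
        return "healthy buffer"   # target operating band
    return "overloaded"           # no slack left to absorb change

utilization_status(7, 10)   # 70% -> "healthy buffer"
utilization_status(9, 10)   # 90% -> "overloaded"
```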
Calculate the maintenance budget for the entire operational system (30 minutes to 2 hours weekly) and treat it as a binding constraint; reject additional components when aggregate maintenance would exceed sustainable capacity.
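The binding-constraint test above is a one-line sum; a sketch, assuming the 2-hour upper bound as the default budget and illustrative per-component minutes:

```python
def can_add_component(existing_minutes, new_minutes, budget_minutes=120):
    """Reject a new component if aggregate weekly maintenance would exceed
    the budget. 120 minutes is the 2-hour upper bound named above."""
    return sum(existing_minutes) + new_minutes <= budget_minutes

can_add_component([30, 25, 20], 40)  # 115 <= 120 -> True
can_add_component([30, 25, 20], 50)  # 125 >  120 -> False
```

Note that the check is on the aggregate, not on the new component alone: a 10-minute addition is still rejected when the system is already at budget.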
After any operational failure, write a blameless post-mortem using five questions: what happened (factual description), what was the timeline of contributing events, what were the systemic factors, what are the action items (specific system changes), and what would have prevented this.
Run one complete PDCA improvement iteration per week on a single operational system: state a hypothesis predicting how a specific change will improve a specific measurable outcome by an estimated amount, implement for one cycle, measure the result, then adopt, adjust, or abandon based on whether the prediction held.
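One PDCA iteration as described above can be captured as a small record with an explicit decision rule. The field names, the tolerance parameter, and the three-way verdict are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PdcaIteration:
    """One weekly PDCA cycle on a single operational system (sketch)."""
    system: str
    change: str
    predicted_delta: float                  # hypothesized improvement
    measured_delta: Optional[float] = None  # filled in after one cycle

    def decide(self, tolerance=0.2):
        """Adopt if the prediction held within tolerance; adjust if there was
        some improvement but less than predicted; abandon otherwise."""
        if self.measured_delta is None:
            raise ValueError("measure the result before deciding")
        if self.measured_delta >= self.predicted_delta * (1 - tolerance):
            return "adopt"
        return "adjust" if self.measured_delta > 0 else "abandon"
```

Writing the predicted delta down before implementing is what makes the final step a test of the hypothesis rather than a post-hoc rationalization.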
When operational maintenance is scheduled at a time with no buffer, no reminder, and no fallback slot, add redundancy through a primary slot, backup slot, and automated reminder rather than increasing willpower.
For every operational task you currently frame as 'busywork,' write one sentence describing what would break if you stopped doing it for a month—if nothing breaks, eliminate it; if something breaks, relabel it as infrastructure.
When a behavior, reaction, or outcome recurs three or more times across a 30-day period, classify it as a pattern candidate requiring structural analysis rather than treating it as coincidence.
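The three-in-thirty-days threshold above is a sliding-window count; a sketch, with the window and threshold taken from the atom and the dates illustrative:

```python
from datetime import date, timedelta

def is_pattern_candidate(occurrences, window_days=30, threshold=3):
    """True if any 30-day window contains `threshold` or more occurrences.

    occurrences: list of `date` objects when the behavior was observed.
    """
    occurrences = sorted(occurrences)
    for i, start in enumerate(occurrences):
        window_end = start + timedelta(days=window_days)
        in_window = [d for d in occurrences[i:] if d < window_end]
        if len(in_window) >= threshold:
            return True
    return False

# Three occurrences within January -> pattern candidate.
is_pattern_candidate([date(2024, 1, 1), date(2024, 1, 10), date(2024, 1, 25)])
```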
When a keystone habit produces cascading benefits, protect its trigger conditions and structural enablers rather than optimizing the habit's internal execution.
Allocate pattern-change effort to second-order interventions (changing how patterns form) over first-order fixes (changing individual patterns) when three or more first-order patterns share formation or dissolution characteristics.
For each important outcome you care about, identify one lagging indicator (the outcome) and pair it with 1-2 leading indicators (upstream behaviors that predict it), tracking both to validate the predictive relationship.
When a leading indicator improves but its paired lagging outcome does not follow within the expected timeframe, treat the leading indicator as broken (gamed, confounded, or non-predictive) and replace it.
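The replacement rule above reduces to a small decision table. A sketch, where the verdict strings, the week-based lag, and the parameter names are assumptions for illustration:

```python
def leading_indicator_verdict(leading_improved, lagging_improved,
                              weeks_elapsed, expected_lag_weeks):
    """Flag a leading indicator as broken when its improvement fails to
    propagate to the paired lagging outcome within the expected timeframe."""
    if not leading_improved:
        return "no signal yet"
    if lagging_improved:
        return "predictive link holds"
    if weeks_elapsed >= expected_lag_weeks:
        return "broken: replace indicator"  # gamed, confounded, or non-predictive
    return "within expected lag: keep waiting"
```

The key input is `expected_lag_weeks`: without committing to a timeframe in advance, a non-predictive indicator can be excused indefinitely.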
When encountering a frustrating recurring behavior in your organization, map the actual incentive structure (what gets rewarded, punished, and measured) before attributing the behavior to individual character, because most problematic behaviors are rational responses to system design.
Before removing any inherited system, process, or organizational structure, document why it was originally created and what problem it solved—if this context cannot be reconstructed, you lack sufficient information to safely remove it.
After drawing a mental model, audit it for missing feedback loops by tracing whether any effects circle back to influence their own causes, because circular causation governs most complex systems but is invisible to linear thinking.
Categorize each failure as preventable (process deviation), complex (novel factor interaction), or intelligent (frontier experiment) before analysis, because different failure types require different questions.
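Routing each failure type to its distinct question can be made explicit. The lead questions below are hypothetical phrasings consistent with the three categories named above, not a fixed protocol:

```python
# Hypothetical mapping from failure category to its lead analysis question.
FAILURE_QUESTIONS = {
    "preventable": "Which step of the defined process was deviated from, and why?",
    "complex": "Which factors interacted in a novel way, and where was the early signal missed?",
    "intelligent": "What hypothesis was this experiment testing, and what did we learn?",
}

def analysis_question(category):
    """Return the lead question for a categorized failure; reject unknown categories."""
    if category not in FAILURE_QUESTIONS:
        raise ValueError(f"unknown failure category: {category}")
    return FAILURE_QUESTIONS[category]
```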
Document system operations in five components—capture rules, processing workflow, retrieval method, review protocol, and evolution history—because each component addresses a distinct failure mode in knowledge system sustainability.
During weekly reviews, cross-reference externalized domains to detect contradictions—compare stated priorities against time allocation, goals against commitments, assumptions against failure analyses—because isolated review of each domain misses the conflicts that degrade decision quality.
Intervene in causal chains at the deepest link where you have both ability and authority to act, as fixes at deep links prevent the entire chain from firing while surface fixes only address symptoms.
When validating redundancy in human systems, test backup paths during normal operations rather than waiting for primary failure, as untested redundancy often fails precisely when needed due to skill atrophy or outdated procedures.
When two systems appear redundant but share a common power source, network segment, authentication service, or human operator, treat them as a single point of failure with cosmetic duplication rather than genuine redundancy.
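The shared-dependency test above is a set intersection over each system's dependency list. A sketch, with hypothetical dependency names:

```python
def genuinely_redundant(deps_a, deps_b):
    """Two 'redundant' systems are a single point of failure if they share
    any underlying dependency (power, network segment, auth service, operator).

    Returns (is_genuine, shared_dependencies).
    """
    shared = set(deps_a) & set(deps_b)
    return (len(shared) == 0, shared)

# Separate power and network, but a shared SSO service: cosmetic duplication.
genuinely_redundant({"psu-1", "vlan-10", "sso"}, {"psu-2", "vlan-20", "sso"})
```

The value of listing dependencies explicitly is that the shared element is usually not the obvious one: the duplicated hardware hides a single auth service or a single on-call human behind both paths.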