The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When multiple children override the same inherited property, restructure the hierarchy rather than accumulating individual overrides, as clustered overrides indicate the parent's assumption is systematically wrong.
When updating a schema, map all downstream dependencies (habits, commitments, tools, relationships, routines) before implementation and migrate high-friction dependencies first.
Assign review cadences to schemas based on their pace layer—weekly to monthly for fashion/commerce layers in complex domains, quarterly for infrastructure layer, annually for governance layer, and only on anomaly for culture/nature layers with high dependency depth.
When a schema has many downstream dependencies, apply slower and more deliberate revision cadences than its pace layer alone suggests, because updating foundational schemas requires cascading updates to all dependent schemas.
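The two atoms above compose naturally: the pace layer sets a base review cadence, and a high downstream-dependency count demotes it one step toward slower revision. A minimal sketch, assuming a 1–5 cadence ladder and an illustrative dependency threshold of 10 (neither is specified by the atoms themselves):

```python
# Cadence ladder from fastest to slowest review; positions are illustrative.
CADENCES = ["weekly", "monthly", "quarterly", "annually", "on-anomaly"]

# Base cadence per pace layer, following the atom above; the exact
# layer-to-cadence assignments within the stated ranges are assumptions.
LAYER_BASE = {
    "fashion": "weekly",
    "commerce": "monthly",
    "infrastructure": "quarterly",
    "governance": "annually",
    "culture": "on-anomaly",
    "nature": "on-anomaly",
}

def review_cadence(layer: str, dependent_count: int, threshold: int = 10) -> str:
    """Base cadence from the pace layer, slowed one step when the schema
    has many downstream dependents (threshold is an assumed parameter)."""
    idx = CADENCES.index(LAYER_BASE[layer])
    if dependent_count >= threshold:
        idx = min(idx + 1, len(CADENCES) - 1)
    return CADENCES[idx]
```

So a commerce-layer schema with 15 dependents reviews quarterly instead of monthly, while a nature-layer schema stays at on-anomaly regardless.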
Before attempting to change a shared team schema, map what the current schema supports—which decisions it enables, what coordination it simplifies, and what would break if it disappeared—to understand its load-bearing function.
For each important schema, map both its prerequisites (what it depends on) and its dependents (what depends on it), then flag schemas appearing most frequently as dependencies for regular review.
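The mapping above is a small graph exercise: store each schema's prerequisites, invert the map to get dependents, and count how often each schema appears as a dependency. A sketch, where the schema names in the usage are hypothetical:

```python
from collections import Counter

def flag_for_regular_review(prereqs: dict[str, list[str]], top_n: int = 2) -> list[str]:
    """Count how often each schema appears as a prerequisite of others and
    flag the most frequent ones for regular review."""
    counts = Counter(dep for deps in prereqs.values() for dep in deps)
    return [name for name, _ in counts.most_common(top_n)]

def dependents_of(prereqs: dict[str, list[str]], target: str) -> list[str]:
    """Invert the prerequisite map: which schemas depend on `target`?"""
    return [schema for schema, deps in prereqs.items() if target in deps]
```

For example, with `prereqs = {"budgeting": ["income-model"], "saving": ["income-model", "budgeting"]}`, the schema `income-model` is flagged first because everything else rests on it.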
When measurement data shows stable satisfactory performance with no identifiable bottleneck, redirect optimization effort to a different system rather than continuing to optimize the current one.
Apply the cascade test to contradictions by asking 'If I resolved this, what else would have to change?' to distinguish surface contradictions (low dependency count) from deep contradictions (high dependency count).
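The cascade test amounts to a transitive traversal of the dependency graph: everything reachable from the contested belief would have to change. A sketch, with the deep/surface threshold as an assumed parameter:

```python
def cascade_depth(dependents: dict[str, list[str]], belief: str) -> int:
    """Count everything that would have to change if `belief` were revised,
    following dependency edges transitively."""
    seen: set[str] = set()
    stack = [belief]
    while stack:
        for nxt in dependents.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

def classify_contradiction(dependents: dict[str, list[str]], belief: str,
                           deep_threshold: int = 3) -> str:
    """Surface contradiction: few cascading changes. Deep: many."""
    return "deep" if cascade_depth(dependents, belief) >= deep_threshold else "surface"
```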
When discovering that behavior contradicts stated values, investigate the actual reward structure driving behavior rather than increasing willpower or restating values more emphatically.
Design early warning indicators for polarity drift by identifying the characteristic downsides of each pole, then monitor for those downsides to trigger course-correction before crisis.
Before attempting to resolve any persistent organizational tension, apply the problem-vs-polarity test: can new information or analysis make one side permanently win? If no, design oscillation management rather than searching for resolution.
When a designed agent fails to fire consistently after two weeks, diagnose whether the trigger is not salient enough, the condition is too restrictive, or the action requires too much effort, because each failure type requires different corrections.
Design agents only for decisions that score high on frequency (recurring often), stability (same answer each time), and low individual stakes, because these three properties determine whether automation saves resources without introducing unacceptable risk.
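The three-property gate above can be made explicit as a conjunction. A sketch, assuming a 1–5 scoring scale and cutoffs that are illustrative, not prescribed by the atom:

```python
def is_automation_candidate(frequency: int, stability: int, stakes: int) -> bool:
    """All scores on an assumed 1-5 scale. A decision qualifies for an
    automated agent only if it recurs often (high frequency), yields the
    same answer each time (high stability), AND carries low individual
    stakes -- any single failing property disqualifies it."""
    return frequency >= 4 and stability >= 4 and stakes <= 2
```

Making the gate a conjunction matters: a frequent, stable decision with high stakes (say, approving large payments) still fails the test.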
When agent sub-behaviors can execute independently without logical dependency, separate them into distinct agents with independent triggers rather than coupling them into sequences, because coupled agents produce cascading failures.
Document every agent in a structured five-component format: (1) Name, (2) Trigger, (3) Conditions, (4) Actions, (5) Success criteria, to enable systematic review and prevent silent degradation.
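The five-component format maps directly onto a small record type. A sketch; the example agent (`morning-triage`) and its contents are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """The five-component documentation format: name, trigger, conditions,
    actions, success criteria."""
    name: str
    trigger: str
    conditions: list[str]
    actions: list[str]
    success_criteria: str

# Hypothetical example entry.
inbox_zero = AgentSpec(
    name="morning-triage",
    trigger="first coffee poured",
    conditions=["it is a weekday"],
    actions=["process inbox to zero"],
    success_criteria="inbox empty by 9:00",
)
```

Keeping every agent in one structured shape is what makes the periodic review mechanical: iterate the records, check each success criterion.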
When discovering that your designed agents conflict with each other, resolve the conflict through documented priority hierarchies rather than case-by-case deliberation, making the resolution rule itself part of your agent system.
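A documented priority hierarchy reduces conflict resolution to a lookup: the winner is whichever agent sits highest in the ordered list. A sketch, with hypothetical agent names:

```python
def resolve_conflict(conflicting: list[str], priority: list[str]) -> str:
    """Return the conflicting agent that ranks highest in the documented
    priority hierarchy (earlier in the list = higher priority)."""
    return min(conflicting, key=priority.index)

# The hierarchy itself is part of the agent system, written down once.
PRIORITY = ["safety-check", "deep-work-block", "morning-triage"]
```

Calling `resolve_conflict(["morning-triage", "deep-work-block"], PRIORITY)` picks `deep-work-block` without case-by-case deliberation.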
Build feedback loops into agent systems through regular review asking whether agents fired, whether they produced intended outcomes, and whether conditions have changed, treating review as essential maintenance not optional improvement.
Schedule quarterly reviews of every default you have installed in your systems and processes, because contexts change and outdated defaults silently steer toward yesterday's goals without conscious detection.
When evaluating whether to optimize an existing system, calculate breakeven time by dividing optimization effort by weekly time savings—if payback exceeds the system's expected remaining lifespan, redirect effort to the actual constraint instead.
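The breakeven arithmetic above is a one-line division plus a comparison. A minimal sketch:

```python
def breakeven_weeks(optimization_effort_hours: float,
                    weekly_savings_hours: float) -> float:
    """Payback period: one-time effort divided by weekly time savings."""
    return optimization_effort_hours / weekly_savings_hours

def worth_optimizing(effort_hours: float, weekly_savings_hours: float,
                     remaining_lifespan_weeks: float) -> bool:
    """Optimize only if payback arrives within the system's remaining life;
    otherwise redirect effort to the actual constraint."""
    return breakeven_weeks(effort_hours, weekly_savings_hours) <= remaining_lifespan_weeks
```

Ten hours of optimization saving two hours per week pays back in five weeks; if the system will be retired in four, the effort is wasted.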
For any recurring activity, explicitly define three elements—the specific output being measured, the standard for comparison, and the adjustment rule triggered by deviation—to create a complete minimal feedback loop.
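The three elements of a minimal feedback loop fit a small record: a measurement, a standard, and an adjustment rule triggered by the deviation between them. A sketch under those assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackLoop:
    """Minimal complete loop: what is measured, what it is compared to,
    and what adjustment a deviation triggers."""
    output: Callable[[], float]       # the specific output being measured
    standard: float                   # the standard for comparison
    adjust: Callable[[float], None]   # adjustment rule, given the deviation

    def run_once(self) -> float:
        """Measure, compare, and trigger the adjustment rule on deviation."""
        deviation = self.output() - self.standard
        if deviation != 0:
            self.adjust(deviation)
        return deviation
```

For example, a loop measuring pages written per week against a standard of 5 would call `adjust(-2.0)` in a 3-page week; what the rule does with that deviation is the activity-specific part.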
Before attempting to improve any feedback loop, measure the current delay between action and signal in concrete time units (seconds, minutes, hours, days) rather than accepting vague assessments, because unmeasured delays appear shorter than they actually are through habituation.
Find a faster correlated signal that approximates delayed feedback rather than waiting for the original signal, accepting the proxy's increased noise as the price of speed.
Strengthen a reinforcing loop you want to amplify by reducing friction at any node, increasing gain at a single node, or shortening cycle time, implementing one intervention per loop rather than attempting simultaneous multi-variable changes.
Before attempting to improve a feedback loop component, verify it is actually the constraint by measuring whether improvements there would increase total system throughput, as optimizing non-constraints produces local gains without system-level improvement.
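For a pipeline of stages, the constraint check reduces to finding the stage with the lowest throughput: improving any other stage cannot raise total system output. A sketch, with hypothetical stage names:

```python
def constraint_stage(throughput: dict[str, float]) -> str:
    """Return the stage with the lowest throughput -- the system's
    constraint. Optimizing any other stage yields only local gains."""
    return min(throughput, key=throughput.get)
```

Given `{"draft": 10, "edit": 3, "publish": 8}` (units per week), the constraint is `edit`; doubling drafting speed changes nothing at the system level.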