The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Do not expect recognizing a pattern to eliminate it; track the ratio of pattern-following to pattern-breaking instances over weeks rather than demanding immediate control, because automaticity weakens only through repeated override practice.
Allocate pattern-change effort to second-order interventions (changing how patterns form) over first-order fixes (changing individual patterns) when three or more first-order patterns share formation or dissolution characteristics.
Audit your daily pattern portfolio by labeling each recurring behavior as appreciating (+), depreciating (-), or neutral (=), then make exactly one trade per month (reduce one depreciating pattern by 50%, install one appreciating pattern at minimal scale).
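The monthly portfolio trade above can be sketched in code. This is a minimal illustration, not a prescribed tool; the pattern names, the `minutes_per_day` cost field, and the 2-minute install scale are invented assumptions.

```python
def monthly_trade(portfolio):
    """Pick exactly one depreciating pattern to halve and one
    appreciating pattern to install at minimal scale."""
    depreciating = [p for p in portfolio if p["label"] == "-"]
    appreciating = [p for p in portfolio if p["label"] == "+" and not p["installed"]]
    trade = {}
    if depreciating:
        # Reduce the costliest depreciating pattern by 50%.
        worst = max(depreciating, key=lambda p: p["minutes_per_day"])
        trade["reduce"] = (worst["name"], worst["minutes_per_day"] / 2)
    if appreciating:
        # Install one appreciating pattern at minimal scale (assumed 2 min/day).
        best = appreciating[0]
        trade["install"] = (best["name"], 2)
    return trade

# Illustrative portfolio: each recurring behavior labeled +, -, or =.
portfolio = [
    {"name": "evening scrolling", "label": "-", "installed": True,  "minutes_per_day": 45},
    {"name": "morning review",    "label": "+", "installed": False, "minutes_per_day": 0},
    {"name": "commute podcast",   "label": "=", "installed": True,  "minutes_per_day": 30},
]
print(monthly_trade(portfolio))
```

Restricting the function to return at most one reduction and one installation enforces the "exactly one trade per month" constraint structurally rather than by discipline.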
When updating a schema, map all downstream dependencies (habits, commitments, tools, relationships, routines) before implementation and migrate high-friction dependencies first.
When attempting to shift a shared team schema, create low-cost experiments where the team uses the new schema on one real decision, rather than presenting the new framework in slides or documents.
When discovering that behavior contradicts stated values, investigate the actual reward structure driving behavior rather than increasing willpower or restating values more emphatically.
For each identified values-behavior gap, ask what competing value the behavior actually reveals and what would need to change in environment, habits, or defaults for alignment.
When a designed agent fails to fire consistently after two weeks, diagnose whether the trigger is not salient enough, the condition is too restrictive, or the action requires too much effort, because each failure type requires different corrections.
Track agent displacement by measuring the percentage of times your designed agent fires instead of the default, not by whether you execute perfectly every time, because replacement is gradual and competes against thousands of prior reinforcements.
When reverse-engineering a default agent, write down all three components (trigger, condition, action) even if the condition is 'always' or appears absent, because making the implicit condition explicit reveals where the default fires indiscriminately.
Define agent triggers as observable external events or measurable internal states rather than subjective feelings or abstract conditions, because vague triggers cannot be recognized reliably when they occur.
Set agent review cadences at 7 days for new habits, 30 days for established behaviors, and immediately after any major context change, because review timing must match the actual rate of drift in each agent type.
Define agent success as a measurable outcome with a minimum acceptable firing rate threshold (typically 80% over one week for new agents) rather than subjective satisfaction, because subjective assessment systematically inflates reliability perception.
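The trigger-condition-action structure and the 80%-over-one-week threshold can be made concrete in a short sketch. The `Agent` class, the example agent, and its firing log are all assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    trigger: str        # observable external event or measurable internal state
    condition: str      # explicit, even if "always"
    action: str         # concrete, executable step
    log: list = field(default_factory=list)  # one bool per trigger opportunity

    def record(self, fired: bool):
        self.log.append(fired)

    def firing_rate(self) -> float:
        return sum(self.log) / len(self.log) if self.log else 0.0

    def meets_threshold(self, minimum: float = 0.8) -> bool:
        # New agents: fire on >= 80% of opportunities over one week.
        return self.firing_rate() >= minimum

agent = Agent(
    name="post-lunch walk",
    trigger="finish eating lunch",               # observable event, not a feeling
    condition="workday, no meeting in next 15 min",
    action="walk one lap of the block",
)
for fired in [True, True, False, True, True, True, True]:  # one week of opportunities
    agent.record(fired)
print(agent.firing_rate(), agent.meets_threshold())
```

Logging one boolean per trigger opportunity, rather than a daily impression, is what keeps the success measure objective instead of subjectively inflated.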
Externalize critical reliability agents (medication, safety checks, high-stakes commitments) to tools or environments rather than trusting biological memory, because internal agents degrade precisely when stakes are highest—under stress and cognitive load.
Use hourly momentary sampling over 48+ hours rather than end-of-day recall when auditing behavioral agents, because retrospective memory systematically overweights salient successes and underweights invisible failures.
Classify each observed behavior as designed (you can identify the installation decision) or default (no identifiable decision point) during audits, treating the classification question itself as a detection mechanism for unexamined automation.
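A momentary-sampling schedule for such an audit can be generated mechanically. This sketch assumes waking hours of 08:00-22:00 and a 48-hour window; both are illustrative choices, not prescriptions from the text.

```python
from datetime import datetime, timedelta

def sampling_schedule(start: datetime, hours: int = 48,
                      wake: int = 8, sleep: int = 22):
    """Return one prompt time per waking hour across the window.
    At each prompt, record the current behavior and classify it
    as designed or default."""
    prompts = []
    t = start.replace(minute=0, second=0, microsecond=0)
    for _ in range(hours):
        if wake <= t.hour < sleep:
            prompts.append(t)
        t += timedelta(hours=1)
    return prompts

schedule = sampling_schedule(datetime(2024, 3, 4, 8, 0))
print(len(schedule))  # prompts across two waking days
```

Generating the prompts up front, rather than relying on end-of-day recall, is the point: every waking hour gets sampled, including the invisible failures that retrospection drops.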
When an agent fires below 80% of expected opportunities over 30 days, reduce it to the simplest executable version before adding any complexity, because unreliable agents cannot be improved through sophistication.
Design minimal viable agents to execute in under two minutes with zero preparation before attempting multi-step sequences, because automaticity requires low activation energy, which must be minimized before any sophistication is added.
When agent sub-behaviors can execute independently without logical dependency, separate them into distinct agents with independent triggers rather than coupling them into sequences, because coupled agents produce cascading failures.
Document every agent in a structured five-component format: (1) Name, (2) Trigger, (3) Conditions, (4) Actions, (5) Success criteria, to enable systematic review and prevent silent degradation.
Write agent action steps as specific ordered procedures rather than aspirations or principles, requiring sufficient granularity that someone unfamiliar could execute them without clarification.
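The five-component format lends itself to a structured record with a completeness check. A minimal sketch; the example agent and its field values are invented.

```python
AGENT_FIELDS = ("name", "trigger", "conditions", "actions", "success_criteria")

def validate_agent(doc: dict) -> list:
    """Return the list of missing or empty components."""
    return [f for f in AGENT_FIELDS if not doc.get(f)]

agent_doc = {
    "name": "inbox-zero sweep",
    "trigger": "laptop opened for the first time after 9:00",
    "conditions": "workday only",
    "actions": [  # ordered, granular enough for a stranger to execute
        "1. Open email client",
        "2. Archive or reply to each message under 2 minutes",
        "3. Move anything longer to the task list",
    ],
    "success_criteria": "inbox empty by 9:30 on >= 80% of workdays",
}
print(validate_agent(agent_doc))  # [] when all five components are present
```

Running the check at each review cadence surfaces silently degraded agents, since a component that has drifted to vague or empty fails validation rather than passing unnoticed.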
Maintain a failure log where every agent misfire is recorded with date, agent name, what happened, and hypothesis about why, then review weekly to extract patterns.
When an agent fails, diagnose which component broke—trigger (never activated), condition (activated but context wasn't right), or action (executed but too vague/complex)—before attempting any redesign.
Fix only one component (trigger, condition, or action) per agent iteration rather than redesigning multiple components simultaneously, to maintain causal attribution of what changes produced which effects.
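The failure log and single-component diagnosis can be combined in one sketch: record each misfire with its broken component, then let the weekly review name the one component to fix this iteration. The log entries below are invented for illustration.

```python
from collections import Counter

# Each misfire: date, agent name, what happened, and the diagnosed
# broken component (trigger, condition, or action).
failure_log = [
    {"date": "2024-03-04", "agent": "post-lunch walk",
     "what": "trigger never activated",  "component": "trigger"},
    {"date": "2024-03-06", "agent": "post-lunch walk",
     "what": "meeting ran over",         "component": "condition"},
    {"date": "2024-03-08", "agent": "post-lunch walk",
     "what": "trigger never activated",  "component": "trigger"},
]

def weekly_review(log):
    """Count misfires per broken component; return the single most
    common one, the only component to redesign this iteration."""
    counts = Counter(entry["component"] for entry in log)
    return counts.most_common(1)[0]

print(weekly_review(failure_log))
```

Returning exactly one component enforces the one-fix-per-iteration rule in the structure of the review itself, preserving causal attribution for whatever changes next week's log shows.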