The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Define agent success as a measurable outcome with a minimum acceptable firing rate threshold (typically 80% over one week for new agents) rather than subjective satisfaction, because subjective assessment systematically inflates reliability perception.
When an agent fires below 80% of expected opportunities over 30 days, reduce it to the simplest executable version before adding any complexity, because unreliable agents cannot be improved through sophistication.
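The firing-rate check above can be sketched in a few lines. This is a minimal illustration, not the source's tooling; the function names, counts, and default threshold are assumptions, with 80% taken from the text.

```python
# Hypothetical sketch: compute an agent's firing rate over a review window
# and flag it for simplification when it falls below the 80% threshold.

def firing_rate(fired: int, expected: int) -> float:
    """Fraction of expected opportunities on which the agent actually fired."""
    if expected == 0:
        return 0.0
    return fired / expected

def needs_simplification(fired: int, expected: int, threshold: float = 0.8) -> bool:
    """True when the agent fires below the minimum acceptable rate."""
    return firing_rate(fired, expected) < threshold

print(needs_simplification(fired=19, expected=30))  # 19/30 ≈ 63% < 80%, so True
```

A flagged agent is then reduced to its simplest executable version before any complexity is reintroduced.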
Design minimal viable agents to execute in under two minutes with zero preparation before attempting multi-step sequences, because automaticity requires low activation energy, and that energy must be minimized before sophistication is added.
When agent sub-behaviors can execute independently without logical dependency, separate them into distinct agents with independent triggers rather than coupling them into sequences, because coupled agents produce cascading failures.
Document every agent in a structured five-component format: (1) Name, (2) Trigger, (3) Conditions, (4) Actions, (5) Success criteria, to enable systematic review and prevent silent degradation.
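The five-component format can be captured as a simple record type. A minimal sketch, assuming Python as the documentation medium; the field names follow the format in the text, while the example agent and its values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    name: str                 # (1) Name
    trigger: str              # (2) Trigger: the observable activating event
    conditions: list[str]     # (3) Conditions under which the trigger is valid
    actions: list[str]        # (4) Ordered, specific action steps
    success_criteria: str     # (5) Measurable success criterion

# Hypothetical example entry
morning_review = AgentSpec(
    name="morning-review",
    trigger="Coffee machine finishes brewing",
    conditions=["Weekday", "Before 9:00"],
    actions=["Open failure log", "Scan yesterday's entries", "Pick one fix"],
    success_criteria="Fires on >= 80% of weekday mornings over 30 days",
)
```

Keeping every agent in the same structure makes systematic review possible and exposes silent degradation as soon as a field stops matching reality.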
Write agent action steps as specific ordered procedures rather than aspirations or principles, requiring sufficient granularity that someone unfamiliar could execute them without clarification.
Maintain a failure log where every agent misfire is recorded with date, agent name, what happened, and hypothesis about why, then review weekly to extract patterns.
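The failure log and its weekly review can be sketched as follows. The entry fields come from the text (date, agent name, what happened, hypothesis); the sample entries and the frequency-count review are illustrative assumptions.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Misfire:
    day: date
    agent: str
    what_happened: str
    hypothesis: str

# Hypothetical log entries
log = [
    Misfire(date(2024, 3, 4), "morning-review", "never started", "trigger missed"),
    Misfire(date(2024, 3, 6), "morning-review", "started late", "condition too strict"),
    Misfire(date(2024, 3, 7), "inbox-zero", "gave up mid-way", "actions too vague"),
]

def weekly_patterns(entries):
    """Count misfires per agent to surface the most fragile ones first."""
    return Counter(e.agent for e in entries).most_common()

print(weekly_patterns(log))  # [('morning-review', 2), ('inbox-zero', 1)]
```

Even a crude count like this turns isolated misfires into patterns that point at a specific agent to diagnose.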
When an agent fails, diagnose which component broke—trigger (never activated), condition (activated but context wasn't right), or action (executed but too vague/complex)—before attempting any redesign.
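The three-way diagnosis reads naturally as a decision procedure. A minimal sketch under the assumption that "did it activate?" and "was the context right?" can be answered from the failure log; the enum and function names are hypothetical.

```python
from enum import Enum

class BrokenComponent(Enum):
    TRIGGER = "trigger"      # never activated
    CONDITION = "condition"  # activated, but the context wasn't right
    ACTION = "action"        # executed, but too vague or complex

def diagnose(activated: bool, context_ok: bool) -> BrokenComponent:
    """Classify the broken component in the diagnosis order from the text."""
    if not activated:
        return BrokenComponent.TRIGGER
    if not context_ok:
        return BrokenComponent.CONDITION
    return BrokenComponent.ACTION
```

Only after the diagnosis names a single component does redesign begin, which is what keeps the next iteration attributable.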
Fix only one component (trigger, condition, or action) per agent iteration rather than redesigning multiple components simultaneously, to maintain causal attribution of what changes produced which effects.
When discovering that your designed agents conflict with each other, resolve the conflict through documented priority hierarchies rather than case-by-case deliberation, making the resolution rule itself part of your agent system.
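A documented priority hierarchy makes conflict resolution mechanical. This sketch assumes the hierarchy is a simple ordered list; the category names and the tie-breaking rule are illustrative, not from the source.

```python
# Documented hierarchy, highest priority first (illustrative categories).
PRIORITY = ["health", "deep-work", "admin"]

def resolve(conflicting_agents: list[str]) -> str:
    """Return the agent that wins under the documented hierarchy."""
    return min(conflicting_agents, key=PRIORITY.index)

print(resolve(["admin", "deep-work"]))  # deep-work outranks admin
```

Because the rule itself is part of the agent system, the same conflict always resolves the same way with no in-the-moment deliberation.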
Design triggers using the camera test: if a video camera could not detect the exact moment the trigger fires, the trigger is too vague to fire reliably and must be replaced with an observable event.
Place environmental trigger objects at eye level or in the direct path of existing routines rather than in convenient storage locations, prioritizing visibility over tidiness.
Protect the first five consecutive executions of any new time-based trigger as non-negotiable, treating early repetitions as infrastructure investment that determines whether the trigger becomes automatic or dies silently.
Stack temporal triggers with spatial and state anchors (specific time plus specific location plus specific preceding action) to create redundant activation pathways that survive if any single trigger component fails.
Remove or relocate objects that afford unwanted behaviors before adding objects that afford desired behaviors, because elimination removes temptation entirely while addition only competes for attention.
Start behavioral triggers with a more conservative threshold than intuition suggests, aiming for 3-5 activations per day rather than 30, to build trust through relevance before expanding sensitivity.
Design social accountability around process questions ('Did you write for 30 minutes?') rather than outcome questions ('Did you finish the chapter?') because process accountability triggers action while outcome accountability triggers anxiety.
Make social accountability reporting require near-zero effort (single emoji, shared spreadsheet checkbox, photo) rather than written explanations, because reporting friction kills trigger reliability.
Verify habit automaticity by checking whether the behavior fires from context cues with minimal conscious effort, not by checking execution frequency, because consistency maintained through willpower is not delegation: true automaticity means the cue triggers the routine without deliberation.
Protect the habit formation transfer period (first 2-4 weeks) with environmental supports that make cues more salient and routines easier to initiate, treating these supports as temporary scaffolding to be removed once the delegation completes rather than permanent infrastructure.
Limit active implementation intentions to 1-3 simultaneously for novel behaviors, adding further if-then plans only after existing ones have compiled into automatic execution, because cue-accessibility mechanisms have finite attentional capacity.