Document every agent with five components: Name, Trigger, Conditions, Actions, Success Criteria — undocumented agents degrade silently
Document every agent in a structured five-component format: (1) Name, (2) Trigger, (3) Conditions, (4) Actions, (5) Success Criteria. This enables systematic review and prevents silent degradation.
Why This Is a Rule
An undocumented behavioral agent exists only in the builder's memory — and memory is unreliable, especially about the design details of automatic behaviors. After a few weeks, you forget the exact trigger conditions, the specific success criteria, or why you chose those particular action steps. The agent becomes a vague intention rather than a testable system, and when it starts degrading, you can't diagnose which component failed because you don't have a record of what the components were supposed to be.
The five-component format — Name, Trigger, Conditions, Actions, Success Criteria — creates a complete, reviewable specification for each agent. The Name makes it referenceable in reviews. The Trigger specifies what initiates it (Agent triggers must be observable or measurable — vague triggers like "when I feel ready" never fire reliably). The Conditions specify when it should and shouldn't fire (Write all three components of default agents — even 'condition: always' reveals indiscriminate firing patterns). The Actions specify exactly what to do (Write agent actions as procedures a stranger could follow — aspirations and principles are not executable steps). The Success Criteria specify how you know it's working (Define agent success as 80%+ firing rate, not subjective satisfaction — felt reliability systematically inflates actual performance). Together, these five fields make the agent debuggable, reviewable, and shareable.
This is the behavioral equivalent of documenting code: the system works without documentation, but without it, maintenance is impossible and silent degradation goes undetected.
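To make the five-field specification concrete, it can be sketched as a small data structure. This is a minimal illustration, not a prescribed schema: the class name, field names, and the example agent are assumptions chosen to mirror the rule's vocabulary.

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """Hypothetical record type for the five-component agent format."""
    name: str                # referenceable label for reviews
    trigger: str             # observable event that initiates the agent
    conditions: list[str]    # when it should / shouldn't fire ("always" is valid)
    actions: list[str]       # ordered steps a stranger could follow
    success_criteria: str    # measurable outcome, e.g. a minimum firing rate

# Example record, using the "Morning Writing Agent" from the protocol below.
morning_writing = AgentSpec(
    name="Morning Writing Agent",
    trigger="Coffee machine finishes brewing (observable event)",
    conditions=["weekday", "not traveling"],
    actions=["Open the current draft", "Write 300 words", "Log the session"],
    success_criteria="Fires on >= 80% of eligible mornings",
)
```

A record like this is what makes the agent reviewable: each field can be checked independently against actual behavior.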
When This Fires
- When installing any new behavioral agent — document before or during installation, not after
- During agent portfolio reviews when you need to assess what's active and what's degraded
- When diagnosing a failing agent (Diagnose before redesigning — identify whether trigger, condition, or action broke before changing anything) — you need the documentation to know what was supposed to happen
- When sharing behavioral systems with others (coaching, teaching, team practices)
Common Failure Mode
Documenting agents mentally: "I know what the trigger is." After a month, you don't — you remember a vague version that's drifted from the original design. The documentation isn't for your current self; it's for your future self who has forgotten the design details, and for your debugging self who needs the specification to diagnose failures.
The Protocol
(1) For every agent you install, create a written record with five fields:
- Name: a descriptive label (e.g., "Morning Writing Agent").
- Trigger: the observable event that initiates it (Agent triggers must be observable or measurable — vague triggers like "when I feel ready" never fire reliably).
- Conditions: when it fires vs. doesn't, including "always" if there are no conditions (Write all three components of default agents — even 'condition: always' reveals indiscriminate firing patterns).
- Actions: specific, ordered steps (Write agent actions as procedures a stranger could follow — aspirations and principles are not executable steps).
- Success criteria: a measurable outcome and minimum firing rate (Define agent success as 80%+ firing rate, not subjective satisfaction — felt reliability systematically inflates actual performance).
(2) Store the record in an accessible location — notebook, app, or knowledge system.
(3) Reference it during reviews: does the documented specification still match actual behavior? If not, either the agent has drifted or the documentation needs updating.
(4) Update the documentation whenever you modify any component — stale documentation is worse than none because it generates false confidence.
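The review step (3) and the 80% success criterion can be checked mechanically against a simple firing log. A minimal sketch, assuming the log is a list of booleans (one per eligible occasion, True when the agent actually fired); the helper names and log format are illustrative assumptions:

```python
def firing_rate(log: list[bool]) -> float:
    """Fraction of eligible occasions on which the agent actually fired."""
    return sum(log) / len(log) if log else 0.0

def meets_criteria(log: list[bool], minimum: float = 0.80) -> bool:
    """Apply the rule's 80%+ firing-rate threshold to a firing log."""
    return firing_rate(log) >= minimum

# Four weeks of eligible weekday mornings: 17 fired, 3 missed.
april_log = [True] * 17 + [False] * 3
print(f"{firing_rate(april_log):.0%}")  # 85%
print(meets_criteria(april_log))        # True
```

Measuring against the log, rather than against felt reliability, is exactly what the documented success criterion makes possible.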