When an agent fires below 80% after 30 days, simplify before sophisticating — unreliable agents need reduction, not enhancement
When an agent fires below 80% of expected opportunities over 30 days, reduce it to the simplest executable version before adding any complexity, because unreliable agents cannot be improved through sophistication.
Why This Is a Rule
When a behavioral agent underperforms, the intuitive response is to add sophistication: better tracking, more detailed conditions, accountability partners, rewards, consequences. But an agent that fires below 80% over 30 days has a fundamental reliability problem that complexity can't fix — it can only mask. Adding a tracking app to a habit that doesn't fire is optimizing the wrong layer: the behavior needs to become simpler and easier, not more monitored.
BJ Fogg's Tiny Habits research demonstrates this: behavioral reliability comes from reducing activation energy to near zero, not from adding motivational scaffolding. If your "write for 30 minutes every morning" agent fires at 40%, the solution isn't a writing tracker with streak counts and accountability texts. The solution is "open the writing file and write one sentence." Once the one-sentence version fires at 90%+, you can graduate to more demanding versions.
The 30-day window matters: it's long enough to distinguish a genuinely underperforming agent from one having a bad week, and it's the point where "give it more time" stops being a valid response. If an agent hasn't achieved 80% reliability in a month, its current design is the problem.
When This Fires
- When a behavioral agent has been active for 30+ days and still fires below 80%
- When you're tempted to add tracking tools, accountability mechanisms, or rewards to a failing habit
- During monthly agent reviews (Review new agents weekly, established ones monthly, and all agents after major context changes) when assessing established agent performance
- When sophistication keeps increasing but reliability doesn't follow
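The firing conditions above reduce to a single check: firing rate over a 30-day window against the 80% threshold. A minimal sketch, assuming you log fire dates and know the expected opportunity count (the function name and date-based bookkeeping are illustrative, not part of the rule):

```python
from datetime import date, timedelta

def should_simplify(fire_dates, expected_opportunities, window_days=30,
                    threshold=0.8, today=None):
    """True when the agent's firing rate over the window falls below the
    reliability threshold, i.e. 'simplify, don't sophisticate' applies."""
    today = today or date.today()
    window_start = today - timedelta(days=window_days)
    fired = sum(1 for d in fire_dates if d >= window_start)
    if expected_opportunities == 0:
        return False  # no opportunities yet; nothing to judge
    return fired / expected_opportunities < threshold
```

For a daily agent (30 expected opportunities), 12 fires is a 40% rate and trips the rule; 27 fires is 90% and does not.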
Common Failure Mode
Adding complexity to fix reliability: "My meditation habit isn't sticking, so I'll buy a meditation cushion, join an accountability group, and track it in three apps." The activation energy just went up — now you need the cushion, the group check-in, AND the app logging before the behavior counts as "done." The real fix: "Sit. One breath." Reduce to the absolute minimum, establish reliability, then expand.
The Protocol
1. When an agent fires below 80% over 30 days, stop adding complexity.
2. Strip the agent to its minimum viable version: what is the absolute smallest action that still counts? "Write one sentence." "Do one pushup." "Open the meditation app."
3. Keep the trigger and condition unchanged: they were designed for the full agent and still apply.
4. Run the minimal version for two weeks. If it hits 80%+, the reliability foundation is established.
5. Gradually expand: one sentence, then one paragraph, then five minutes. Each expansion step must maintain 80%+ before the next.
6. If even the minimal version fails, the issue is trigger or condition design (Diagnose failing behavioral agents by component — trigger salience, condition scope, or action effort each require different fixes), not action complexity.
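The expansion steps of the protocol behave like a small state machine: hold at a rung until it is reliable, climb one rung at a time, and fall back to component diagnosis only if the bottom rung fails. A minimal sketch under those assumptions (the `next_action` function and the example ladder are illustrative):

```python
def next_action(ladder, step, fire_rate, threshold=0.8):
    """Decide what to do after a two-week trial at ladder[step],
    given that trial's firing rate."""
    if fire_rate < threshold:
        if step == 0:
            return "diagnose"          # minimal version failing: trigger/condition problem
        return ("hold", ladder[step])  # not reliable yet; stay at this rung
    if step + 1 < len(ladder):
        return ("expand", ladder[step + 1])
    return ("done", ladder[step])

ladder = ["write one sentence", "write one paragraph", "write five minutes"]
```

Note that failure above the bottom rung means "hold", not "fall back": only the minimal version failing points at trigger or condition design.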