The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Inspect agents during their stable operational phase specifically because they appear to be working, not after they fail — early detection of drift is cheaper than repair after collapse.
Separate performance maintenance (is the agent doing what it's designed to do?) from alignment maintenance (should the agent still exist?) and run them at different frequencies.
During maintenance reviews, strip accumulated cruft by removing steps, exceptions, and scope expansions that have made the agent more complex than its essential function requires.
When modifying an agent, evolution is preferable when the core architecture remains sound and the gap between current and needed state is small; replacement is preferable when core assumptions have shifted or the architecture cannot represent new requirements.
Batch similar shallow tasks together to minimize the cognitive overhead of task-type switching and reduce attention residue accumulation.
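The batching above amounts to sorting tasks by type so each type forms one contiguous run; a minimal sketch, using hypothetical task data:

```python
from itertools import groupby

# Hypothetical shallow-task list: (task_type, description) pairs.
tasks = [
    ("email", "reply to Sam"),
    ("review", "read the draft"),
    ("email", "confirm meeting"),
    ("review", "check the outline"),
]

# Sort by type so each type forms one contiguous run, then process each
# run as a single batch: one type-switch cost per batch, not per task.
batches = {
    task_type: [desc for _, desc in group]
    for task_type, group in groupby(sorted(tasks), key=lambda t: t[0])
}
print(batches)  # two batches instead of four interleaved switches
```

Four interleaved tasks collapse into two batches, so attention residue accrues at type boundaries only.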
Test for sunk cost contamination in agent retirement decisions by asking whether you would design anything resembling the current agent if starting fresh today with the same need.
Expect an extinction burst when retiring an agent—a temporary intensification of the urge to perform the retired behavior before it finally declines—and do not interpret this as evidence the retirement was wrong.
When building a successor agent, inherit the predecessor's accumulated knowledge by documenting edge cases, implicit dependencies, and temporal adjustments rather than starting from a blank slate.
When designing cognitive agents, examine the full pattern across multiple past attempts rather than treating each as an isolated failure, as recurring design assumptions reveal systematic blind spots that single-instance analysis cannot detect.
Design cognitive systems for your actual operating environment including its variability and constraints, not for idealized conditions you wish you had, as agents designed for optimal conditions fail when reality reasserts itself.
Build cognitive agents with multiple independent trigger mechanisms and resources rather than single points of dependency, as redundancy enables the system to survive environmental disruption that would otherwise cause cascading failure.
Evaluate cognitive agents by how they contribute to the portfolio's overall properties (diversification, correlation, coverage) rather than by individual performance alone, as portfolio-level risk emerges from relationships between components.
Maintain most cognitive agents as stable, low-maintenance foundations while reserving a small portion for high-variance experimental practices, as this barbell structure provides both reliability and the capacity for transformative breakthroughs.
Diversify cognitive agents across life domains rather than concentrating them in one area, as domain diversity ensures system-level functioning continues when any single domain experiences disruption.
Identify and eliminate shared dependencies across multiple agents during portfolio review, as correlated failure points create systemic fragility that individual agent optimization cannot address.
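Finding correlated failure points is a counting exercise: tally how many agents rely on each dependency and flag anything shared. A sketch with hypothetical agents and dependencies:

```python
from collections import Counter

# Hypothetical portfolio: agent -> the resources/triggers it depends on.
portfolio = {
    "morning review":  {"quiet hour", "notebook", "calendar app"},
    "weekly planning": {"quiet hour", "calendar app", "budget sheet"},
    "idea capture":    {"notebook"},
}

# Count how many agents rely on each dependency; anything used by two
# or more agents is a correlated failure point for the whole portfolio.
usage = Counter(dep for deps in portfolio.values() for dep in deps)
shared = {dep for dep, count in usage.items() if count >= 2}
print(shared)
```

If "quiet hour" disappears, two agents fail together; optimizing either agent individually would never surface that coupling.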
Make context switching costs visible through deliberate tracking before attempting to reduce them, because the tax is invisible in subjective experience and most people dramatically underestimate how frequently they switch.
Assess agents using frequency-times-behavioral-delta (how often they fire × how much they change your actions) rather than perceived importance, as most perceived-important agents prove to be high-frequency noise generators upon measurement.
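The frequency-times-behavioral-delta score can be computed directly; the agents and numbers below are hypothetical estimates, not measured data:

```python
# Hypothetical audit data: agent -> (firings per week, behavioral delta
# in [0, 1], estimating how much one firing actually changes your actions).
agents = {
    "inbox-check reflex": (50, 0.01),  # fires constantly, changes little
    "weekly review":      (1, 0.90),   # rare, but redirects the whole week
    "news-scan impulse":  (30, 0.01),
}

# Score = frequency x behavioral delta, not perceived importance.
scores = {name: freq * delta for name, (freq, delta) in agents.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

Under this scoring the rare-but-decisive weekly review outranks both high-frequency agents, whose many firings carry almost no behavioral delta.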
Retire cognitive agents based on current value production rather than sunk investment, as maintaining low-impact agents due to past effort compounds resource drain through the endowment effect and status quo bias.
Regular audits of recurring cognitive processes identify agents that persist through inertia rather than current utility, enabling resource reallocation before accumulation costs compound.
Articulating the specific outcome an agent produces and when you last used that outcome distinguishes active value-generators from legacy processes running on autopilot.
Estimate agent costs in three currencies—time, attention, and opportunity—because the attention cost typically exceeds the time cost and determines actual system load.
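One way to keep the three currencies from collapsing into a single "time spent" number is to record them as separate fields; a sketch with a hypothetical cost record:

```python
from dataclasses import dataclass

# Hypothetical cost record for one agent. Attention is tracked apart from
# clock time because a 10-minute task that fragments focus for an hour
# costs far more than its clock time suggests.
@dataclass
class AgentCost:
    time_min_per_week: float   # clock time spent running the agent
    attention_hours: float     # focus fragmented around those runs
    opportunity: str           # what the agent crowds out

    def dominant_cost(self) -> str:
        # Compare time and attention in the same unit (hours).
        return ("attention" if self.attention_hours > self.time_min_per_week / 60
                else "time")

cost = AgentCost(time_min_per_week=30, attention_hours=3.0,
                 opportunity="one deep-work block")
print(cost.dominant_cost())
```

Here 30 minutes of clock time carries 3 hours of attention cost, so attention, not time, determines the agent's actual system load.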
Documentation must be updated in the same commit or editing session as behavioral changes, not as a separate follow-up task, because manual synchronization steps that can be skipped will be skipped.
Documentation should disclose constraints and failure modes, not just capabilities, because users discover limitations through failure when documentation omits them.
Coordination costs between agents grow quadratically (n(n-1)/2) while value grows linearly, establishing a natural upper bound on portfolio size before overhead exceeds benefit.
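The upper bound falls out of the arithmetic: with a per-agent value v and a per-pair coordination cost c (both hypothetical unit economics), net value v·n − c·n(n−1)/2 peaks and then declines:

```python
# Hypothetical unit economics: each agent adds fixed value v; each pair
# of agents adds coordination cost c. Value grows linearly in n while
# coordination cost grows as n(n-1)/2, so net value peaks then declines.
v, c = 10.0, 1.0

def net_value(n: int) -> float:
    return v * n - c * n * (n - 1) / 2

best = max(range(1, 50), key=net_value)
print(best, net_value(best))  # portfolio size where overhead starts to win
```

With these numbers the portfolio peaks at 10 agents (net value 55.0); beyond that, each added agent brings more pairwise overhead than standalone value.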