The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Expect an extinction burst when retiring an agent—a temporary spike in the urge to perform the retired behavior before it finally declines—and do not interpret this as evidence the retirement was wrong.
When building a successor agent, inherit the predecessor's accumulated knowledge by documenting edge cases, implicit dependencies, and temporal adjustments rather than starting from a blank slate.
When designing cognitive agents, examine the full pattern across multiple past attempts rather than treating each as an isolated failure, as recurring design assumptions reveal systematic blind spots that single-instance analysis cannot detect.
Design cognitive systems for your actual operating environment including its variability and constraints, not for idealized conditions you wish you had, as agents designed for optimal conditions fail when reality reasserts itself.
Build cognitive agents with multiple independent trigger mechanisms and resources rather than single points of dependency, as redundancy enables the system to survive environmental disruption that would otherwise cause cascading failure.
Evaluate cognitive agents by how they contribute to the portfolio's overall properties (diversification, correlation, coverage) rather than by individual performance alone, as portfolio-level risk emerges from relationships between components.
Maintain most cognitive agents as stable, low-maintenance foundations while reserving a small portion for high-variance experimental practices, as this barbell structure provides both reliability and the capacity for transformative breakthroughs.
Diversify cognitive agents across life domains rather than concentrating them in one area, as domain diversity ensures system-level functioning continues when any single domain experiences disruption.
Identify and eliminate shared dependencies across multiple agents during portfolio review, as correlated failure points create systemic fragility that individual agent optimization cannot address.
Make context switching costs visible through deliberate tracking before attempting to reduce them, because the tax is invisible in subjective experience and most people dramatically underestimate how frequently they switch.
Assess agents using frequency-times-behavioral-delta (how often they fire × how much they change your actions) rather than perceived importance, as most perceived-important agents prove to be high-frequency noise generators upon measurement.
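The frequency-times-behavioral-delta assessment can be sketched as a simple scoring function. This is a minimal illustration with hypothetical agents and made-up numbers; the agent names and rating scales are assumptions, not part of the curriculum:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    fires_per_week: float    # how often the agent triggers
    behavioral_delta: float  # 0..1: how much one firing actually changes action

def impact_score(a: Agent) -> float:
    # Frequency times behavioral delta, per the assessment rule above.
    return a.fires_per_week * a.behavioral_delta

# Hypothetical portfolio: a salient high-frequency agent vs. a quiet weekly one.
agents = [
    Agent("news-check", fires_per_week=50, behavioral_delta=0.01),
    Agent("weekly-review", fires_per_week=1, behavioral_delta=0.8),
]
ranked = sorted(agents, key=impact_score, reverse=True)
```

With these assumed numbers, the frequently firing agent that feels important scores 0.5 and ranks below the once-a-week review at 0.8, which is exactly the reversal the measurement is meant to expose.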
Retire cognitive agents based on current value production rather than sunk investment, as maintaining low-impact agents due to past effort compounds resource drain through the endowment effect and status quo bias.
Regular audits of recurring cognitive processes identify agents that persist through inertia rather than current utility, enabling resource reallocation before accumulation costs compound.
Articulating the specific outcome an agent produces and when you last used that outcome distinguishes active value-generators from legacy processes running on autopilot.
Estimate agent costs in three currencies—time, attention, and opportunity—because the attention cost typically exceeds the time cost and determines actual system load.
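One way to make the three currencies comparable is to convert each to minutes per week. This is a sketch under stated assumptions: the 10-minute refocusing cost per context switch and all the example figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentCost:
    time_min: float         # clock minutes per week the agent directly consumes
    switches: int           # attention: context switches it triggers per week
    opportunity_min: float  # minutes of displaced higher-value work per week

    def weekly_load(self, switch_cost_min: float = 10.0) -> float:
        # Convert all three currencies to minutes; each switch is billed
        # an assumed refocusing cost (default 10 minutes).
        attention_min = self.switches * switch_cost_min
        return self.time_min + attention_min + self.opportunity_min

# Hypothetical inbox-triage agent: 15 min of work, 20 interruptions a week.
inbox = AgentCost(time_min=15, switches=20, opportunity_min=30)
```

Here the attention currency (200 minutes) dwarfs the clock time (15 minutes), illustrating why attention cost, not time cost, tends to determine actual system load.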
Documentation must be updated in the same commit or editing session as behavioral changes, not as a separate follow-up task, because manual synchronization steps that can be skipped will be skipped.
Documentation should disclose constraints and failure modes, not just capabilities, because users discover limitations through failure when documentation omits them.
Coordination costs between agents grow quadratically (n(n-1)/2) while value grows linearly, establishing a natural upper bound on portfolio size before overhead exceeds benefit.
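The quadratic-versus-linear tension above can be made concrete with a toy model. The per-agent value and per-pair cost below are arbitrary placeholders chosen only to show that net value peaks and then declines:

```python
def coordination_cost(n: int, cost_per_pair: float = 1.0) -> float:
    # Pairwise coordination overhead: n(n-1)/2 links between n agents.
    return cost_per_pair * n * (n - 1) / 2

def net_value(n: int, value_per_agent: float = 5.0,
              cost_per_pair: float = 1.0) -> float:
    # Linear value minus quadratic overhead.
    return value_per_agent * n - coordination_cost(n, cost_per_pair)

# Scan portfolio sizes to find where net value peaks.
best_size = max(range(1, 50), key=net_value)
```

With these assumed parameters, net value peaks at five or six agents and goes negative past eleven: the natural upper bound the principle describes, which shifts as the value-to-coordination ratio changes.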
Allocate attention to agents based on lifecycle stage rather than salience, because deployment-stage agents need calibration attention while mature agents need minimal monitoring.
Recognize that expert-stage agents feel boring precisely because automaticity signals mastery—boredom with an agent is often evidence of successful maturation, not need for retirement.
Identify your biological prime time through multi-day tracking of hourly energy, focus, and motivation ratings rather than relying on self-perception, because subjective beliefs about peak performance times are systematically unreliable.
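The multi-day tracking step can be sketched as a small aggregation over an hourly log. The log entries and 1-10 rating scales here are hypothetical sample data, not a prescribed format:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log: (day, hour, energy, focus, motivation), each rated 1-10.
log = [
    (1, 9, 7, 8, 7), (1, 14, 4, 5, 4), (1, 20, 6, 6, 5),
    (2, 9, 8, 7, 8), (2, 14, 5, 4, 4), (2, 20, 5, 6, 6),
    (3, 9, 8, 8, 7), (3, 14, 4, 4, 5), (3, 20, 6, 5, 6),
]

# Average the three ratings into one composite score per entry,
# grouped by hour of day.
by_hour = defaultdict(list)
for _day, hour, energy, focus, motivation in log:
    by_hour[hour].append(mean((energy, focus, motivation)))

# The hour with the highest average composite across days.
prime_hour = max(by_hour, key=lambda h: mean(by_hour[h]))
```

In this sample the 9:00 block wins across all three days; the point of logging over multiple days is that a single day's reading, like a single subjective impression, is noise.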
Retired agents compost into design knowledge for future agents—document what was learned, why it was retired, and what conditions made it obsolete, so nothing is lost.
Active unlearning (deliberate identification and retirement of obsolete knowledge) differs from passive forgetting and requires explicit lifecycle management to execute cleanly.
Insert an evaluation pause between receiving external input and adopting a conclusion, using that gap to explicitly assess evidence and reasoning against your own knowledge rather than defaulting to source credibility.