The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Design your attention's choice architecture by changing defaults, removing triggers, and restructuring physical and digital spaces rather than relying on willpower to resist distractions in unchanged environments.
Dedicate focused, time-boxed sessions to improving one specific agent rather than attempting continuous unfocused improvement across all systems simultaneously.
Design optimization sessions to create moderate activation pressure through deadlines: too little produces apathy, too much produces anxiety, and optimal performance occurs between these extremes.
Hold measurement protocol constant across before-and-after conditions while varying only the system being optimized—changing both simultaneously eliminates your ability to attribute effects to causes.
Benchmark efficiency, accuracy, and quality dimensions together rather than only measuring what is easy—optimization that improves measurable proxies while degrading unmeasured outcomes is regression disguised as progress.
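A minimal sketch of the two measurement principles above: a benchmark record that always captures efficiency, accuracy, and quality together, and a comparison function that refuses to attribute effects when the measurement protocol varied alongside the system. The field names, units, and thresholds are illustrative assumptions, not part of the source.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Benchmark:
    """One measurement under a fixed protocol; all three dimensions are required."""
    protocol_id: str   # identifies the measurement protocol, which must stay constant
    efficiency: float  # e.g. tasks per hour (illustrative unit)
    accuracy: float    # fraction of outputs that are correct, 0..1
    quality: float     # rated output quality, 0..1

def attribute_change(before: Benchmark, after: Benchmark) -> dict:
    """Compare two benchmarks, refusing if the protocol changed between conditions."""
    if before.protocol_id != after.protocol_id:
        raise ValueError("protocol changed between conditions; "
                         "effects cannot be attributed to the system")
    return {
        "efficiency": after.efficiency - before.efficiency,
        "accuracy": after.accuracy - before.accuracy,
        "quality": after.quality - before.quality,
    }

def is_regression_in_disguise(delta: dict) -> bool:
    """Gains on some dimensions while any other degrades get flagged, not celebrated."""
    return any(v > 0 for v in delta.values()) and any(v < 0 for v in delta.values())
```

For example, an optimization that raises throughput from 10 to 14 tasks per hour while accuracy slips from 0.90 to 0.84 is flagged rather than counted as an improvement.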
Record your rationale for each optimization before observing its outcome—this pre-commitment defeats hindsight bias and preserves your actual reasoning for later calibration.
Log failures with equal rigor to successes—changes that produced no effect or made things worse constrain your search space more than successes that might be luck.
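The pre-commitment discipline in the two lines above can be sketched as an append-only log: the rationale is locked in before any outcome is observed, outcomes cannot be revised after the fact, and null or negative results are stored with the same fields as successes. The class and method names are my own, not from the source.

```python
import datetime

class OptimizationLog:
    """Append-only log: rationale first, outcome later, failures kept with successes."""

    def __init__(self):
        self.entries = []

    def preregister(self, change: str, rationale: str) -> int:
        """Record what is being changed and why BEFORE observing any result."""
        self.entries.append({
            "change": change,
            "rationale": rationale,
            "registered_at": datetime.datetime.now(datetime.timezone.utc),
            "outcome": None,  # unknown until observed
        })
        return len(self.entries) - 1

    def record_outcome(self, entry_id: int, outcome: str) -> None:
        """Fill in the outcome; 'no effect' and 'worse' are as valid as 'better'."""
        if self.entries[entry_id]["outcome"] is not None:
            raise ValueError("outcome already recorded; "
                             "rationale may not be revised after the fact")
        self.entries[entry_id]["outcome"] = outcome

    def failures(self) -> list:
        """Failures constrain the search space; make them as easy to query as wins."""
        return [e for e in self.entries if e["outcome"] in ("no effect", "worse")]
```

Because the rationale carries a timestamp taken at registration, a later reading of the log recovers what you actually believed at decision time, not a hindsight-polished version.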
Resist optimizing systems that have not yet generated enough performance data to reveal their actual bottlenecks—premature optimization adds complexity to components that may not matter.
Adjust systems when environmental context shifts even if nothing has broken—what was optimal under yesterday's conditions becomes suboptimal as the world changes, independent of system defects.
Distribute optimization responsibility throughout the system rather than treating it as a separate specialized role—every operator should simultaneously use and refine their processes.
Design agents with explicit retirement criteria at creation time — specify in advance what conditions would indicate the agent should be revised or replaced.
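The principle above can be encoded directly in an agent's definition: the retirement test is written at creation time as a predicate over observable conditions. The agent structure and the example conditions below are hypothetical illustrations, not from the source.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    procedure: str
    # Specified at creation: a predicate over observed conditions that,
    # when true, signals the agent should be revised or replaced.
    retirement_criteria: Callable[[dict], bool]

    def due_for_retirement(self, conditions: dict) -> bool:
        return self.retirement_criteria(conditions)

# Hypothetical example: a weekly-review agent retired if the review is
# skipped three weeks running or its inputs no longer exist.
weekly_review = Agent(
    name="weekly-review",
    procedure="review open projects every Friday",
    retirement_criteria=lambda c: c.get("consecutive_skips", 0) >= 3
    or not c.get("inputs_exist", True),
)
```

Writing the predicate at creation time forces the designer to state, in advance, what failure would look like, so the retirement decision later is a lookup rather than a rationalization.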
Pre-commit each day's attention allocation by deciding your most important task before opening any device or application, overriding the mind's default tendency toward reactive stimulus response.
Scale new behaviors down to their minimum viable execution during initial deployment, then expand scope only after the activation circuit is reliable.
Design agents for the version of yourself who will encounter the trigger under actual conditions (tired, distracted, stressed) rather than the idealized version making the plan.
Encode new behaviors in environmental structure through specific physical cues, timing, and symbolic markers rather than relying on intention alone.
Expect and plan for an implementation dip where performance temporarily degrades when deploying a new agent, and continue execution through this phase rather than abandoning the agent.
Treat missing a single execution of a new habit as data rather than failure, and resume the next day without restarting the deployment timeline.
Schedule maintenance reviews at frequencies matched to the rate of environmental change in each agent's domain rather than using a uniform review cadence.
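One way to read the scheduling rule above: the review interval is inversely proportional to the domain's rate of change, clamped to sensible bounds. The base interval and clamps below are illustrative assumptions.

```python
def review_interval_days(changes_per_month: float,
                         base_days: float = 30.0,
                         min_days: float = 7.0,
                         max_days: float = 180.0) -> float:
    """Faster-changing domains get shorter review intervals (illustrative constants).

    A domain that shifts about once a month is reviewed monthly; one that
    shifts four times a month is reviewed roughly weekly; a nearly static
    domain is still reviewed at least every max_days.
    """
    if changes_per_month <= 0:
        return max_days  # even static domains get a periodic check
    return max(min_days, min(max_days, base_days / changes_per_month))
```

The point of the clamps is that no agent is reviewed so often that maintenance crowds out operation, and none goes unreviewed long enough for drift to compound silently.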
Inspect agents during their stable operational phase specifically because they appear to be working, not after they fail — early detection of drift is cheaper than repair after collapse.
Separate performance maintenance (is the agent doing what it's designed to do) from alignment maintenance (should the agent still exist) and run them at different frequencies.
During maintenance reviews, strip accumulated cruft by removing steps, exceptions, and scope expansions that have made the agent more complex than its essential function requires.
When modifying an agent, evolution is preferable when the core architecture remains sound and the gap between current and needed state is small; replacement is preferable when core assumptions have shifted or the architecture cannot represent new requirements.
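The evolve-versus-replace criterion above reduces to three inputs; a sketch, with the gap estimate and its threshold as assumed illustrative parameters:

```python
def evolve_or_replace(architecture_sound: bool,
                      assumptions_shifted: bool,
                      gap: float,
                      gap_threshold: float = 0.3) -> str:
    """Decide between evolving and replacing an agent.

    gap is a rough 0..1 estimate of the distance between current and
    needed behavior; the 0.3 threshold is an illustrative assumption.
    """
    if assumptions_shifted or not architecture_sound:
        return "replace"  # the foundation itself no longer holds
    return "evolve" if gap <= gap_threshold else "replace"
```

Note the asymmetry: a shifted core assumption mandates replacement regardless of how small the behavioral gap looks, because the gap is measured against requirements that are themselves stale.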
Batch similar shallow tasks together to minimize the cognitive overhead of task-type switching and reduce attention residue accumulation.
Test for sunk cost contamination in agent retirement decisions by asking whether you would design anything resembling the current agent if starting fresh today with the same need.