The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Track agent outcome accuracy separately from execution speed, classifying errors as systematic bias or random noise to determine appropriate correction mechanisms.
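A minimal sketch of this split, assuming each run logs a signed deviation from the expected outcome (the function name and threshold are hypothetical): the mean of the errors estimates systematic bias, their spread estimates random noise.

```python
from statistics import mean, stdev

def classify_errors(errors, bias_threshold=0.1):
    """Split observed errors into systematic bias vs. random noise.

    errors: signed deviations (observed - expected) from repeated runs.
    A mean far from zero suggests a systematic bias to correct at the
    source; a near-zero mean with high spread suggests random noise,
    better handled by averaging or tightening the process.
    """
    bias = mean(errors)
    noise = stdev(errors) if len(errors) > 1 else 0.0
    kind = "systematic bias" if abs(bias) > bias_threshold else "random noise"
    return {"bias": bias, "noise": noise, "dominant": kind}

# Example: consistently overshooting by ~0.3 -> systematic bias.
print(classify_errors([0.28, 0.31, 0.25, 0.35, 0.30]))
```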
When agents reach optimization ceilings within their current framework, replace the framework entirely rather than continuing incremental refinement, as further optimization produces negligible returns while framework replacement enables order-of-magnitude improvements.
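One way to make that ceiling detectable rather than merely felt, sketched under the assumption that you log a performance score per optimization cycle (all names and thresholds illustrative):

```python
def at_optimization_ceiling(scores, window=5, min_gain=0.01):
    """Flag when recent cycles yield negligible returns.

    scores: performance per optimization cycle, oldest first.
    Returns True when the average per-cycle gain over the last
    `window` cycles falls below `min_gain`, signaling that further
    incremental refinement is unlikely to pay off and framework
    replacement should be considered.
    """
    if len(scores) < window + 1:
        return False  # not enough data to judge
    recent = scores[-(window + 1):]
    avg_gain = (recent[-1] - recent[0]) / window
    return avg_gain < min_gain

# Gains have flattened out over the last five cycles -> True.
print(at_optimization_ceiling([0.60, 0.80, 0.84, 0.842, 0.843, 0.844, 0.844]))
```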
Allocate cognitive surplus to exploring alternative frameworks when capacity is high, and exploit proven frameworks when capacity is constrained, rather than maintaining fixed exploration rates regardless of context.
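A hedged sketch of capacity-gated exploration, borrowing the epsilon-greedy shape from bandit algorithms; the capacity scale, the 0.5 factor, and the framework names are illustrative assumptions:

```python
import random

def choose_framework(frameworks, proven, capacity):
    """Explore alternatives when capacity is high; exploit when constrained.

    capacity: 0.0 (depleted) to 1.0 (full). The exploration rate is tied
    to capacity instead of being fixed, so a tired operator defaults to
    the proven framework and a fresh one samples alternatives.
    """
    explore_rate = 0.5 * capacity  # scale factor is an arbitrary choice
    if random.random() < explore_rate:
        return random.choice([f for f in frameworks if f != proven])
    return proven

frameworks = ["GTD", "timeboxing", "kanban"]
print(choose_framework(frameworks, proven="timeboxing", capacity=0.9))
print(choose_framework(frameworks, proven="timeboxing", capacity=0.1))
```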
Deliberately accept temporary performance decreases by exploring different frameworks when stuck at local optima, as hill-climbing within current constraints cannot reach higher peaks elsewhere in the solution space.
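This is the acceptance rule from simulated annealing applied to framework choice; a minimal sketch, with the scores and temperature scale as illustrative assumptions:

```python
import math, random

def accept_switch(current_score, candidate_score, temperature):
    """Simulated-annealing acceptance rule for framework switches.

    Always accept improvements; accept temporary regressions with a
    probability that shrinks as `temperature` (willingness to explore)
    cools, so hill-climbing alone never traps you at a local optimum.
    """
    if candidate_score >= current_score:
        return True
    return random.random() < math.exp((candidate_score - current_score) / temperature)

# Early on (hot), a 0.1 regression is often worth accepting:
print(accept_switch(0.80, 0.70, temperature=0.5))
```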
Execute continuous incremental improvement (kaizen) within frameworks while maintaining meta-awareness to recognize when frameworks themselves need replacement (kaikaku), as the two modes serve complementary rather than competing functions.
Name the implicit framework governing each agent to make architectural assumptions visible for evaluation, as unexamined frameworks feel like reality rather than choices.
Design your attention's choice architecture by changing defaults, removing triggers, and restructuring physical and digital spaces rather than relying on willpower to resist distractions in unchanged environments.
Dedicate focused, time-boxed sessions to improving one specific agent rather than attempting continuous unfocused improvement across all systems simultaneously.
Design optimization sessions to create moderate activation pressure through deadlines: too little pressure produces apathy, too much produces anxiety, and optimal performance occurs between these extremes.
Hold the measurement protocol constant across before-and-after conditions while varying only the system being optimized; changing both simultaneously eliminates your ability to attribute effects to causes.
Benchmark efficiency, accuracy, and quality together rather than only what is easy to measure; optimization that improves measurable proxies while degrading unmeasured outcomes is regression disguised as progress.
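A sketch of a benchmark record serving both of the preceding points, holding the measurement protocol fixed while covering all three dimensions; the field names and protocol-versioning scheme are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Benchmark:
    """One before-or-after measurement under a fixed protocol.

    protocol_version must match across the 'before' and 'after'
    conditions; if it changes, the comparison cannot attribute
    effects to the optimization. Efficiency, accuracy, and quality
    are recorded together so a gain on one cannot quietly mask a
    regression on another.
    """
    protocol_version: str
    condition: str          # "before" or "after"
    efficiency: float       # e.g. tasks per hour
    accuracy: float         # e.g. fraction of correct outcomes
    quality: float          # e.g. rubric score, 0-1

def comparable(before: Benchmark, after: Benchmark) -> bool:
    return before.protocol_version == after.protocol_version

before = Benchmark("v1", "before", efficiency=4.0, accuracy=0.82, quality=0.70)
after = Benchmark("v1", "after", efficiency=5.5, accuracy=0.80, quality=0.72)
print(comparable(before, after))  # True: same protocol, so attribution holds
```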
Record your rationale for each optimization before observing its outcome—this pre-commitment defeats hindsight bias and preserves your actual reasoning for later calibration.
Log failures with equal rigor to successes—changes that produced no effect or made things worse constrain your search space more than successes that might be luck.
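A minimal logging sketch covering both the pre-committed rationale and the equal-rigor failure log, assuming an append-only JSON-lines file (the path and field names are hypothetical): the rationale is written before the outcome is known, and null or negative outcomes go through the same path as wins.

```python
import json, time

LOG_PATH = "optimization_log.jsonl"  # hypothetical location

def log_rationale(change_id, rationale):
    """Record why you expect a change to work BEFORE observing results."""
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"ts": time.time(), "change": change_id,
                            "phase": "pre", "rationale": rationale}) + "\n")

def log_outcome(change_id, effect):
    """Record the observed effect, including 'none' and 'worse'."""
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"ts": time.time(), "change": change_id,
                            "phase": "post", "effect": effect}) + "\n")

log_rationale("batch-reviews", "fewer context switches should raise throughput")
log_outcome("batch-reviews", "none")  # a null result still constrains the search
```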
Resist optimizing systems that have not yet generated enough performance data to reveal their actual bottlenecks—premature optimization adds complexity to components that may not matter.
Adjust systems when environmental context shifts even if nothing has broken—what was optimal under yesterday's conditions becomes suboptimal as the world changes, independent of system defects.
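A sketch of detecting that shift from context metrics alone, independent of any defect signal; the metric names and tolerance are illustrative assumptions:

```python
def context_drifted(baseline, current, tolerance=0.2):
    """Detect environmental shift independent of system defects.

    baseline/current: dicts of context metrics (e.g. request volume,
    interruption rate). Returns True when any metric moved more than
    `tolerance` (relative), signaling the system should be re-tuned
    even though nothing has visibly broken.
    """
    for key, base in baseline.items():
        if base and abs(current.get(key, base) - base) / abs(base) > tolerance:
            return True
    return False

# A 40% rise in volume warrants re-tuning even with zero failures:
print(context_drifted({"daily_requests": 100}, {"daily_requests": 140}))  # True
```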
Distribute optimization responsibility throughout the system rather than treating it as a separate specialized role—every operator should simultaneously use and refine their processes.
Design agents with explicit retirement criteria at creation time — specify in advance what conditions would indicate the agent should be revised or replaced.
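A sketch of building the retirement test into the agent at creation time, so the later question is a lookup rather than a fresh judgment call; all names and criteria are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """An agent bundled with the conditions that would retire it.

    retirement_criteria is fixed at creation, before attachment to
    the agent develops, so 'should this still exist?' is answered by
    evaluating a predicate rather than by in-the-moment judgment.
    """
    name: str
    retirement_criteria: Callable[[dict], bool]

morning_review = Agent(
    name="morning-review",
    # Retire if it stops getting executed or its domain has changed.
    retirement_criteria=lambda m: m["adherence_30d"] < 0.5 or m["domain_changed"],
)

metrics = {"adherence_30d": 0.4, "domain_changed": False}
print(morning_review.retirement_criteria(metrics))  # True -> revise or replace
```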
Pre-commit each day's attention allocation by deciding your most important task before opening any device or application, overriding the default mode network's tendency toward reactive stimulus response.
Scale new behaviors down to their minimum viable execution during initial deployment, then expand scope only after the activation circuit is reliable.
Design agents for the version of yourself who will encounter the trigger under actual conditions (tired, distracted, stressed) rather than the idealized version making the plan.
Encode new behaviors in environmental structure through specific physical cues, timing, and symbolic markers rather than relying on intention alone.
Expect and plan for an implementation dip where performance temporarily degrades when deploying a new agent, and continue execution through this phase rather than abandoning the agent.
Treat missing a single execution of a new habit as data rather than failure, and resume the next day without restarting the deployment timeline.
Schedule maintenance reviews at frequencies matched to the rate of environmental change in each agent's domain rather than using a uniform review cadence.
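A sketch of cadence matched to volatility, assuming you can estimate how often each agent's environment changes per year; the intervals and clamp values are illustrative:

```python
def review_interval_days(env_changes_per_year, min_days=7, max_days=365):
    """Review more often where the environment moves faster.

    An agent in a domain that shifts ~12 times a year gets roughly a
    monthly review; a stable domain drops toward an annual one. Clamped
    so no agent is reviewed constantly or never.
    """
    if env_changes_per_year <= 0:
        return max_days
    return max(min_days, min(max_days, round(365 / env_changes_per_year)))

print(review_interval_days(12))  # 30: volatile domain, monthly review
print(review_interval_days(1))   # 365: stable domain, annual review
```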