The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Direct each optimization iteration at the single variable identified as the current bottleneck, then measure results before changing additional variables, because simultaneous multi-variable changes prevent causal attribution.
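A minimal sketch of that discipline as an experiment loop, assuming a hypothetical `measure()` callable that runs the system under a given config and returns a scalar metric; exactly one key changes per iteration, so any metric shift is attributable to that key.

```python
import copy

def one_variable_iteration(config, variable, new_value, measure):
    """Change a single variable, measure, and attribute the effect.

    `config` is a dict of tunable settings; `measure` is any callable
    that runs the system under a config and returns a scalar metric.
    """
    baseline = measure(config)
    candidate = copy.deepcopy(config)
    candidate[variable] = new_value    # the ONLY change this iteration
    result = measure(candidate)
    effect = result - baseline         # attributable to `variable` alone
    return (candidate, effect) if effect > 0 else (config, effect)
```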
Define success criteria and measurement methods before implementing any optimization change, because post-hoc evaluation enables unconscious redefinition of success to match whatever occurred.
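One way to make that pre-commitment mechanical, as a sketch with illustrative field names: freeze the success criterion in an immutable record before the change is implemented, so evaluation can only compare against what was written down.

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)  # frozen: criteria cannot be edited after the fact
class PreRegistration:
    hypothesis: str           # what the change is expected to do
    metric: str               # how success will be measured
    success_threshold: float  # the bar, fixed before implementation
    registered_at: float      # timestamp proves the criteria came first

    def evaluate(self, observed: float) -> bool:
        # Success is defined by the pre-registered threshold, not by
        # whatever story seems plausible after seeing the result.
        return observed >= self.success_threshold

reg = PreRegistration(
    hypothesis="caching cuts median latency",
    metric="median_latency_ms",
    success_threshold=120.0,
    registered_at=time.time(),
)
```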
Schedule cognitively demanding work during your measured peak attention window and batch shallow work outside it to maximize output quality per unit of effort invested.
When monitoring data shows stable satisfactory performance with no identifiable bottleneck, redirect optimization effort to a different system rather than continuing to optimize the current one.
Implement each optimization change on a small, contained scale before system-wide deployment, because localized testing isolates effects and limits damage from failed experiments.
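A sketch of the staged-rollout logic, assuming hypothetical `apply_change` and `error_rate` interfaces rather than a real deployment API; the point is that the change touches a small cohort first, so damage from a failed experiment is bounded by the cohort size.

```python
def staged_rollout(apply_change, error_rate,
                   stages=(0.01, 0.10, 0.50, 1.0), max_error_rate=0.02):
    """Roll a change out in increasing traffic fractions.

    `apply_change(fraction)` deploys the change to that share of
    traffic and `error_rate()` reports the observed failure rate;
    both are assumed interfaces, not a real deployment API.
    """
    for fraction in stages:
        apply_change(fraction)
        if error_rate() > max_error_rate:
            apply_change(0.0)   # roll back: damage limited to `fraction`
            return False
    return True                 # survived every stage; fully deployed
```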
Periodically examine and revise the foundational assumptions underlying your optimization efforts, not just the tactical adjustments within those assumptions, because paradigmatic constraints often limit throughput more than process constraints.
Measure where the system actually breaks down before allocating optimization effort: optimizing a component that accounts for only a small fraction of the total cost yields negligible improvement no matter how well the optimization itself is executed.
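This is Amdahl's law in miniature: if a component accounts for fraction p of the total cost, even an infinite speedup of that component caps the overall gain at 1/(1-p). A small numeric check:

```python
def overall_speedup(p, s):
    """Amdahl's law: overall speedup when a fraction `p` of total
    cost is accelerated by factor `s`."""
    return 1.0 / ((1.0 - p) + p / s)

# Making a 5%-of-total component 100x faster barely moves the needle...
print(overall_speedup(p=0.05, s=100))  # ~1.05 (about 5% overall)
# ...while merely doubling the speed of the 70% bottleneck does far more.
print(overall_speedup(p=0.70, s=2))    # ~1.54 (about 54% overall)
```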
Establish explicit stopping criteria before beginning any optimization effort, specifying the threshold that defines 'good enough' and the action that occurs when that threshold is met.
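A sketch of that stopping rule as code, with illustrative names: both the threshold and the follow-up action are fixed before the loop starts, so 'good enough' cannot drift upward mid-effort.

```python
def optimize_until_good_enough(step, metric, threshold,
                               max_iterations, on_done):
    """Run optimization steps until a pre-declared threshold is met.

    `step()` performs one optimization iteration, `metric()` reports
    current performance, and `on_done(reason)` is the pre-committed
    action taken when the effort stops; all three are assumed callables.
    """
    for _ in range(max_iterations):
        if metric() >= threshold:    # 'good enough', defined up front
            on_done("threshold met")
            return True
        step()
    on_done("budget exhausted")      # stopping is explicit either way
    return False
```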
As time remaining until deployment or deadline decreases, shift optimization effort from exploring alternative approaches to exploiting the best known approach, because exploration costs cannot be recouped over short time horizons.
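One common way to encode that shift, as a sketch: make the exploration probability proportional to remaining time, so exploration decays to zero as the deadline arrives. The linear schedule and the 50% starting rate are assumptions; any decreasing function expresses the same principle.

```python
import random

def choose_approach(known_best, alternatives, time_remaining, horizon):
    """Explore less as the deadline approaches.

    Exploration probability decays linearly with remaining time: near
    the deadline there is no time left to recoup the cost of a failed
    experiment, so the best known approach is exploited instead.
    """
    explore_prob = max(0.0, time_remaining / horizon) * 0.5  # 50% at start
    if alternatives and random.random() < explore_prob:
        return random.choice(alternatives)  # explore: try something new
    return known_best                       # exploit: use what works
```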
Allocate optimization effort proportionally to the improvement each component can still yield rather than proportionally to each component's current performance level, because low-performing components may already be near their ceiling while high-performing components may have headroom.
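A sketch of headroom-weighted allocation, assuming each component can report a current score and an estimated ceiling; effort is split by remaining improvement rather than by current performance.

```python
def allocate_effort(components, total_effort):
    """Split effort in proportion to estimated headroom (ceiling - current).

    `components` maps name -> (current_score, estimated_ceiling).
    A 95%-scoring component with a 96% ceiling gets almost nothing;
    a 60%-scoring component with a 90% ceiling gets the bulk.
    """
    headroom = {name: max(0.0, ceiling - current)
                for name, (current, ceiling) in components.items()}
    total = sum(headroom.values()) or 1.0
    return {name: total_effort * h / total for name, h in headroom.items()}

print(allocate_effort(
    {"parser": (0.95, 0.96), "ranker": (0.60, 0.90)}, total_effort=10.0))
# -> parser gets ~0.3 units of effort, ranker gets ~9.7
```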
Pre-register evaluation criteria before running experiments to prevent post-hoc rationalization from contaminating result interpretation.
Change only one variable at a time when testing improvements so that observed effects can be attributed to specific causes rather than confounded with simultaneous changes.
When multiple changes are made simultaneously, systematically remove each component individually to determine its contribution, rather than assuming all components contribute equally or that the most salient component is most important.
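A sketch of that ablation loop, assuming a `measure(enabled)` callable that runs the system with only the given components active; each component's contribution is estimated as the performance drop when it alone is removed.

```python
def ablation_study(components, measure):
    """Estimate each component's contribution by removing it alone.

    `components` is a list of names; `measure(enabled)` is an assumed
    callable returning performance with only `enabled` active.
    """
    full_score = measure(set(components))
    contributions = {}
    for c in components:
        without_c = set(components) - {c}
        contributions[c] = full_score - measure(without_c)
    # Caveat: this measures marginal contributions; components with
    # strong interactions may require ablating pairs as well.
    return contributions
```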
Batch information consumption into scheduled windows rather than processing it continuously, because task-switching destroys flow states and imposes recovery costs far exceeding the interruption duration.
Record each isolated change and its measured effect to build a searchable optimization history that enables bisection debugging when performance degrades.
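With such a history in hand, bisection locates the offending change in O(log n) measurements, in the spirit of `git bisect`. A sketch over a hypothetical chronological list of recorded changes:

```python
def bisect_regression(changes, is_good):
    """Find the first bad change in a chronological history.

    `changes` is the recorded list of isolated changes (oldest first)
    and `is_good(index)` is an assumed callable that rebuilds the
    system as of change `index` and reports whether performance is
    acceptable. Assumes the history flips from good to bad exactly
    once: is_good(0) is True and is_good(len(changes) - 1) is False.
    """
    lo, hi = 0, len(changes) - 1   # invariant: lo is good, hi is bad
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_good(mid):
            lo = mid
        else:
            hi = mid
    return changes[hi]             # the change that introduced the regression
```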
Eliminate unnecessary process steps entirely before attempting to make necessary steps faster, as structural removal produces larger gains than incremental speed improvements.
Reduce agent execution time to lower the motivation threshold required for initiation, as faster behaviors fire more frequently and at lower motivation levels than slower behaviors that produce identical outputs.
Measure agent execution frequency after speed optimization, not just duration, as the primary value of faster agents is increased firing rate rather than time savings per execution.
Track agent outcome accuracy separately from execution speed, classifying errors as systematic bias or random noise to determine appropriate correction mechanisms.
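A sketch of that bias/noise split, assuming paired lists of predictions and actual outcomes: the mean error is systematic bias, which calls for recalibration, while the spread around it is random noise, which calls for averaging, repetition, or better inputs.

```python
import statistics

def error_profile(predictions, actuals):
    """Decompose agent errors into systematic bias and random noise.

    Bias (mean error) calls for recalibrating the agent; noise
    (standard deviation of error) calls for averaging or better
    inputs -- different failure modes need different corrections.
    """
    errors = [p - a for p, a in zip(predictions, actuals)]
    bias = statistics.mean(errors)    # systematic offset
    noise = statistics.stdev(errors)  # random scatter around the offset
    return {"bias": bias, "noise": noise}

print(error_profile([102, 98, 105, 101], [100, 100, 100, 100]))
# -> consistent overestimation shows up as bias, not noise
```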
When agents reach optimization ceilings within their current framework, replace the framework entirely rather than continuing incremental refinement, as further optimization produces negligible returns while framework replacement enables order-of-magnitude improvements.
Allocate cognitive surplus to exploring alternative frameworks when capacity is high, and exploit proven frameworks when capacity is constrained, rather than maintaining fixed exploration rates regardless of context.
Deliberately accept temporary performance decreases by exploring different frameworks when stuck at local optima, as hill-climbing within current constraints cannot reach higher peaks elsewhere in the solution space.
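Simulated annealing is the textbook instance of this principle: it sometimes accepts a worse solution, with a probability that shrinks over time, precisely so it can leave a local peak. A minimal sketch, assuming `neighbor` and `score` callables:

```python
import math
import random

def anneal(initial, neighbor, score, steps=1000, t0=1.0):
    """Hill-climbing that deliberately accepts temporary losses.

    `neighbor(x)` proposes a nearby candidate and `score(x)` returns
    a value to maximize; both are assumed callables. Worse moves are
    accepted with probability exp(delta / t), which decays as the
    temperature `t` cools, so exploration fades into exploitation.
    """
    current, best = initial, initial
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # cooling schedule
        candidate = neighbor(current)
        delta = score(candidate) - score(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                    # may be a downhill move
            if score(current) > score(best):
                best = current
    return best
```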
Execute continuous incremental improvement (kaizen) within frameworks while maintaining meta-awareness to recognize when frameworks themselves need replacement (kaikaku), as the two modes serve complementary rather than competing functions.
Name the implicit framework governing each agent to make architectural assumptions visible for evaluation, as unexamined frameworks feel like reality rather than choices.