The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Maintain qualitative observation alongside quantitative metrics to detect when improvement in the metric coincides with degradation of the underlying outcome.
Direct optimization effort exclusively at the system's current constraint, because improvements to non-constraints produce local gains that do not propagate to system-level output.
Before investing resources to expand constraint capacity, extract maximum throughput from the constraint as it currently exists by eliminating waste, idle time, and unnecessary overhead at the bottleneck.
Configure all non-constraint processes to optimize the constraint's throughput rather than their own local efficiency, because subordinating non-constraints to constraints increases system output while optimizing non-constraints does not.
After successfully improving a constraint, immediately re-identify the system's new bottleneck, because constraint improvement shifts the limiting factor to a different component.
Measure actual time consumed by each system component before optimizing, because subjective experience of difficulty does not correlate reliably with objective constraint status.
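The measurement step above can be sketched minimally in Python. The stage names and workloads here are hypothetical stand-ins; the point is that the bottleneck is found by timing, not by intuition:

```python
import time

def profile_components(components):
    """Time each named component and return the durations plus the slowest one."""
    durations = {}
    for name, fn in components.items():
        start = time.perf_counter()
        fn()
        durations[name] = time.perf_counter() - start
    bottleneck = max(durations, key=durations.get)
    return durations, bottleneck

# Hypothetical pipeline stages; "transform" simulates the true constraint
# even though "parse" might feel subjectively harder.
components = {
    "parse": lambda: sum(range(10_000)),
    "transform": lambda: time.sleep(0.05),
    "write": lambda: sum(range(1_000)),
}
durations, bottleneck = profile_components(components)
```

With this in hand, optimization effort goes to `bottleneck`, whatever subjective difficulty suggests.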
Direct each optimization iteration at the single variable identified as the current bottleneck, then measure results before changing additional variables, because simultaneous multi-variable changes prevent causal attribution.
Define success criteria and measurement methods before implementing any optimization change, because post-hoc evaluation enables unconscious redefinition of success to match whatever occurred.
Schedule cognitively demanding work during your measured peak attention window and batch shallow work outside it to maximize output quality per unit of effort invested.
When monitoring data shows stable satisfactory performance with no identifiable bottleneck, redirect optimization effort to a different system rather than continuing to optimize the current one.
Implement each optimization change on a small, contained scale before system-wide deployment, because localized testing isolates effects and limits damage from failed experiments.
Periodically examine and revise the foundational assumptions underlying your optimization efforts, not just the tactical adjustments within those assumptions, because paradigmatic constraints often limit throughput more than process constraints.
Measure where the system actually breaks down before allocating optimization effort: components that account for a small fraction of total time yield negligible system-level improvement regardless of how well they are optimized.
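This is Amdahl's law applied to optimization effort: overall speedup is bounded by the fraction of total time the optimized component consumes. A short worked sketch (the fractions and speedups are illustrative numbers, not measurements):

```python
def system_speedup(fraction, component_speedup):
    """Amdahl's law: overall speedup from accelerating a component
    that accounts for `fraction` of total time by `component_speedup`x."""
    return 1.0 / ((1.0 - fraction) + fraction / component_speedup)

# A 10x improvement to a component that is 5% of total time barely
# moves the system, while a mere 2x on a 60% component helps far more.
small_gain = system_speedup(0.05, 10.0)
large_gain = system_speedup(0.60, 2.0)
```

The asymmetry is why measurement must precede allocation: optimization quality cannot compensate for optimizing the wrong fraction.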
Establish explicit stopping criteria before beginning any optimization effort, specifying the threshold that defines 'good enough' and the action that occurs when that threshold is met.
As time remaining until deployment or deadline decreases, shift optimization effort from exploring alternative approaches to exploiting the best known approach, because exploration costs cannot be recouped over short time horizons.
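The shift from exploration to exploitation can be expressed as a simple schedule. This linear decay is one assumed form among many, sketched only to show the shape of the trade-off:

```python
def explore_probability(time_remaining, horizon, floor=0.0):
    """Shrink exploration linearly as the deadline approaches, because
    gains from newly discovered approaches cannot be recouped late."""
    if horizon <= 0:
        return floor
    return max(floor, min(1.0, time_remaining / horizon))
```

Early in the horizon the schedule favors trying alternatives; near the deadline it commits effort to the best known approach.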
Allocate optimization effort proportionally to the improvement each component can still yield rather than proportionally to each component's current performance level, because low-performing components may already be near their ceiling while high-performing components may have headroom.
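A headroom-proportional split can be computed directly. The component names, scores, and ceilings below are hypothetical; the sketch only demonstrates the allocation rule:

```python
def allocate_by_headroom(current, ceiling, budget):
    """Split an effort budget across components in proportion to
    remaining headroom (ceiling - current), not current performance."""
    headroom = {k: max(0.0, ceiling[k] - current[k]) for k in current}
    total = sum(headroom.values())
    if total == 0:
        return {k: 0.0 for k in current}
    return {k: budget * h / total for k, h in headroom.items()}

# "search" performs worse but is near its ceiling; "ranking" performs
# better yet still has room to grow, so it receives more effort.
current = {"search": 0.50, "ranking": 0.80}
ceiling = {"search": 0.55, "ranking": 0.95}
allocation = allocate_by_headroom(current, ceiling, budget=100.0)
```

Note the inversion of the naive rule: the lower-performing component gets the smaller share, because its remaining yield is smaller.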
Pre-register evaluation criteria before running experiments to prevent post-hoc rationalization from contaminating result interpretation.
Change only one variable at a time when testing improvements so that observed effects can be attributed to specific causes rather than confounded with simultaneous changes.
When multiple changes are made simultaneously, systematically remove each component individually to determine its contribution, rather than assuming all components contribute equally or that the most salient component is most important.
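The leave-one-out procedure can be sketched as a loop over components. The scorer and weights here are an assumed toy model standing in for a real evaluation:

```python
def ablate(components, evaluate):
    """Leave-one-out ablation: measure the score drop when each
    component is removed, attributing a contribution to each."""
    full_score = evaluate(set(components))
    contributions = {}
    for c in components:
        contributions[c] = full_score - evaluate(set(components) - {c})
    return contributions

# Hypothetical scorer: "cache" carries most of the value even though
# "retry" might be the most salient change.
weights = {"cache": 0.5, "retry": 0.1, "batching": 0.2}
def evaluate(active):
    return sum(weights[c] for c in active)

contributions = ablate(weights, evaluate)
```

The output ranks components by measured contribution, replacing the assumption that salience tracks importance.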
Batch information consumption into scheduled windows rather than processing it continuously, because task-switching destroys flow states and imposes recovery costs far exceeding the interruption duration.
Record each isolated change and its measured effect to build a searchable optimization history that enables bisection debugging when performance degrades.
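Bisection over such a history is a standard binary search for the first bad change (the same idea behind `git bisect`). A minimal sketch with a hypothetical change log:

```python
def find_regression(history, is_good):
    """Binary-search an ordered change log for the first change after
    which the system stops passing `is_good`.
    Assumes history[0] is good and history[-1] is bad."""
    lo, hi = 0, len(history) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_good(history[mid]):
            lo = mid + 1
        else:
            hi = mid
    return history[lo]

# Hypothetical log of isolated changes; the regression entered at "c4".
history = ["c1", "c2", "c3", "c4", "c5"]
bad_since = {"c4", "c5"}
culprit = find_regression(history, lambda c: c not in bad_since)
```

Because each recorded change was isolated (one variable at a time), the search needs only O(log n) evaluations to pinpoint the cause.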
Eliminate unnecessary process steps entirely before attempting to make necessary steps faster, as structural removal produces larger gains than incremental speed improvements.
Reduce agent execution time to lower the motivation threshold required for initiation: a faster behavior crosses the activation threshold at lower motivation than a slower behavior producing the same output, so it fires more often.
Measure agent execution frequency after speed optimization, not just duration, as the primary value of faster agents is increased firing rate rather than time savings per execution.
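The frequency effect can be illustrated with a simple threshold model. The linear cost function and the motivation samples below are assumptions made purely for the sketch, not a claim about the real mechanism:

```python
def fires(motivation, execution_time, k=1.0):
    """Assumed threshold model: a behavior fires when motivation
    exceeds a cost proportional to its execution time."""
    return motivation >= k * execution_time

def firing_rate(motivation_samples, execution_time, k=1.0):
    """Fraction of sampled motivation levels at which the behavior fires."""
    hits = sum(1 for m in motivation_samples if fires(m, execution_time, k))
    return hits / len(motivation_samples)

# Same output per execution; only the execution time differs.
samples = [0.2, 0.5, 0.8, 1.1, 1.4]
slow_rate = firing_rate(samples, execution_time=1.0)
fast_rate = firing_rate(samples, execution_time=0.4)
```

Under this model the speed-up doubles the firing rate, which is why frequency, not duration, is the metric to check after optimization.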