The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Implement tiered alert thresholds (warning/action) to provide lead time on degrading performance before catastrophic failure.
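A minimal sketch of such a two-tier check, assuming an error-rate metric; the function name and the warning/action values are illustrative, not prescribed by the atom:

```python
# Hypothetical tiered-threshold check: the "warning" tier fires early enough
# to leave lead time before the "action" tier signals imminent failure.
def alert_level(error_rate, warning=0.02, action=0.05):
    """Classify a metric reading against two tiers (values are assumptions)."""
    if error_rate >= action:
        return "action"   # degradation is now critical: intervene immediately
    if error_rate >= warning:
        return "warning"  # lead time: investigate before it worsens
    return "ok"
```

The point of the two tiers is that the gap between them, not either value alone, is what buys lead time.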
Calibrate threshold tightness to the cost ratio of false alarms versus misses rather than perfectionism or comfort.
Maintain alert precision above approximately 80% (at least four true positives per false positive) to prevent alert fatigue and habituation.
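The 80% figure reduces to a simple precision check over alert outcomes; a sketch, with hypothetical function names:

```python
def alert_precision(true_positives, false_positives):
    """Fraction of fired alerts that flagged real problems."""
    total = true_positives + false_positives
    return true_positives / total if total else 1.0

def needs_pruning(true_positives, false_positives, floor=0.8):
    """True when precision falls below the ~80% floor (4 TP : 1 FP)."""
    return alert_precision(true_positives, false_positives) < floor
```

Four true positives per false positive gives 4/5 = 0.8, exactly the stated floor.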
Monitor trends across time windows rather than point-in-time snapshots to detect gradual degradation before threshold failures occur.
Build system trust through repeated experience of the complete loop (capture → process → retrieve → act) rather than through system sophistication, as trust is psychological, not architectural.
Calculate moving averages over fixed observation windows to filter noise and reveal underlying signal direction in performance metrics.
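A simple moving average over a fixed window can be sketched as follows; window size is an illustrative choice:

```python
from collections import deque

def moving_average(values, window=7):
    """Running mean over a fixed-size window: filters point-to-point noise
    while preserving the underlying direction of the series."""
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)                    # deque drops the oldest value itself
        out.append(sum(buf) / len(buf))  # mean of at most `window` recent values
    return out
```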
Compare performance across adjacent time windows to detect directional change without requiring visualization infrastructure.
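One way to compare adjacent windows without any plotting stack, assuming a plain list of dated-in-order readings (names and the zero threshold are assumptions):

```python
def window_trend(values, window=7, threshold=0.0):
    """Compare the mean of the most recent window against the one before it.
    Returns a direction label instead of requiring a chart."""
    if len(values) < 2 * window:
        return "insufficient data"
    recent = sum(values[-window:]) / window
    prior = sum(values[-2 * window:-window]) / window
    delta = recent - prior
    if delta > threshold:
        return "up"
    if delta < -threshold:
        return "down"
    return "flat"
```

Raising `threshold` above zero suppresses direction calls on changes too small to act on.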
Track rate of change in metrics to determine urgency of intervention, not just direction of change.
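Rate of change is just the average per-period delta; a sketch with a hypothetical function name:

```python
def slope_per_period(values):
    """Average change per period. The sign gives direction;
    the magnitude is what determines urgency of intervention."""
    if len(values) < 2:
        return 0.0
    return (values[-1] - values[0]) / (len(values) - 1)
```

Two series can both be "declining" while one loses 0.1% per week and the other 2% per day; only the slope separates them.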
Distinguish seasonal patterns from genuine trends to avoid intervening in normal cyclical variation.
Store dated observations rather than relying on memory to enable retrospective trend analysis.
Cap the number of monitoring signals below the level that saturates attention, even when this means discarding potentially useful metrics.
Review monitoring system design quarterly to audit whether tracked metrics still correlate with outcomes that actually matter.
When a signal is missed, reduce total monitoring volume rather than adding new alerts to an already-saturated system.
Establish baseline periods before implementing changes to distinguish intervention effects from temporal variation.
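The baseline atom reduces to a before/after comparison of means; a minimal sketch, assuming equal-quality measurements in both periods:

```python
def intervention_effect(baseline, after):
    """Difference between post-change and baseline means.
    Without a recorded baseline, this delta cannot be computed at all."""
    return sum(after) / len(after) - sum(baseline) / len(baseline)
```

This deliberately ignores temporal confounds; its only claim is that with no baseline there is nothing to subtract from.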
Document comparative advantage profiles showing which agent excels on which dimensions rather than declaring absolute winners.
Remove high-salience stimuli from your environment before beginning focused work to reduce involuntary attentional capture and the cognitive cost of suppression.
Replace vanity metrics that cannot guide action with actionable metrics that have clear causal links to controllable variables.
Log configuration changes and outcomes to build experiment history that transforms optimization from guesswork to pattern recognition.
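A minimal append-only experiment log, sketched as dated JSON-lines records; the file layout and field names are assumptions:

```python
import datetime
import json

def log_experiment(path, config, outcome):
    """Append one dated (config, outcome) record so past runs become
    searchable history rather than half-remembered guesses."""
    record = {
        "date": datetime.date.today().isoformat(),
        "config": config,
        "outcome": outcome,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

One line per run keeps the log greppable; pattern recognition starts when the same config fragment shows up next to the same outcome twice.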
Maintain qualitative observation alongside quantitative metrics to detect when metric improvement correlates with outcome degradation.
Direct optimization effort exclusively at the system's current constraint, because improvements to non-constraints produce local gains that do not propagate to system-level output.
Before investing resources to expand constraint capacity, extract maximum throughput from the constraint as it currently exists by eliminating waste, idle time, and unnecessary overhead at the bottleneck.
Configure all non-constraint processes to optimize the constraint's throughput rather than their own local efficiency, because subordinating non-constraints to constraints increases system output while optimizing non-constraints does not.
After successfully improving a constraint, immediately re-identify the system's new bottleneck, because constraint improvement shifts the limiting factor to a different component.
Measure actual time consumed by each system component before optimizing, because subjective experience of difficulty does not correlate reliably with objective constraint status.
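The measurement atom can be sketched as a per-component wall-clock tally; component names and the `timed`/`current_constraint` helpers are illustrative:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(name):
    """Accumulate wall-clock time per component. Optimize the largest
    bucket, not the component that merely feels slow."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

def current_constraint():
    """The component consuming the most measured time is the candidate bottleneck."""
    return max(timings, key=timings.get)
```

Wrapping each stage in `timed(...)` replaces the subjective sense of difficulty with a number, which is the whole point of the atom.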