The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules
Monitoring overhead: the cognitive, time, and emotional cost of observing and tracking agent performance that must be justified by the value it provides in enabling decisions or actions
Self-monitoring: the practice of systematically observing, recording, and analyzing one's own cognitive agents, behaviors, and mental processes to identify patterns, evaluate performance, and enable targeted improvements
Monitoring journal: a structured written instrument used to record and analyze the performance of cognitive agents, including agent activation status, effectiveness ratings, and contextual factors to reveal patterns invisible through casual observation
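The journal structure described above can be sketched as a simple record type. The field names here are illustrative of the definition (activation status, effectiveness rating, contextual factors), not a prescribed schema from the curriculum:

```python
# Sketch of a monitoring-journal entry as a plain record.
# Field names and values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class JournalEntry:
    agent: str            # which cognitive agent was observed
    activated: bool       # agent activation status
    effectiveness: int    # effectiveness rating, e.g. on a 1-5 scale
    context: str          # contextual factors surrounding the observation

entry = JournalEntry(agent="morning-review", activated=True,
                     effectiveness=4, context="low sleep, high workload")
print(entry.effectiveness)  # 4
```

Accumulating such entries over time is what makes pattern analysis possible; a single entry reveals little, but many entries grouped by context can surface patterns invisible to casual observation.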
Reflection-on-action: the retrospective process of reviewing past behavior to reconstruct reasoning, evaluate outcomes against intentions, and identify opportunities for learning and improvement
Accountability loop: the feedback mechanism created when monitoring an agent produces data that makes the observer a stakeholder in the agent's performance, generating commitment and responsibility through reactivity, commitment-consistency, and observation effects
Reactivity: the phenomenon where self-monitoring alters the behavior being monitored through mechanisms including self-regulation, cuing, and external consequence activation
Commitment-consistency loop: the psychological mechanism where making an explicit commitment to monitor an agent creates internal pressure to behave consistently with that commitment, amplified by written, public, and active forms of commitment
Alert threshold: a predefined numerical boundary that distinguishes normal operational performance from conditions requiring attention or investigation, established in advance of observing deviations
Trend analysis: the practice of examining how metrics change over time to detect gradual degradation or improvement patterns that would be invisible in point-in-time assessments, enabling early intervention before thresholds are crossed
Common-cause variation: the inherent, natural fluctuation within a stable system that produces normal performance variations without requiring investigation or intervention, distinguishable from special-cause variation through trend analysis
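The contrast between a point-in-time threshold check and trend analysis can be sketched in a few lines. The metric series, threshold value, and function names below are hypothetical, chosen only to illustrate how a rising trend can be detected before any threshold is crossed:

```python
# Illustrative sketch: alert threshold vs. trend analysis on a metric series.
# All names and numbers are hypothetical.

def breaches_threshold(value, threshold):
    """Point-in-time check against a pre-defined alert threshold."""
    return value > threshold

def trend_slope(series):
    """Least-squares slope over time; a positive slope flags gradual drift."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# An error rate drifting upward but still under a 5% alert threshold:
error_rates = [0.010, 0.015, 0.021, 0.026, 0.032, 0.038]
print(breaches_threshold(error_rates[-1], 0.05))  # no breach yet
print(trend_slope(error_rates) > 0)               # but the trend is rising
```

The threshold check alone reports "all normal" at every point in this series; only the slope reveals that intervention will soon be needed.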
Monitoring fatigue: the cognitive phenomenon where excessive monitoring signals overwhelm attentional capacity through attentional saturation and signal dilution, leading to systematic ignoring of meaningful alerts; monitoring systems designed to prevent missed signals thereby cause critical signals to be missed
Signal-to-noise ratio: the proportion of monitoring alerts that are genuine and actionable versus those that are false, irrelevant, or redundant, used as the primary metric for evaluating the effectiveness of a monitoring system
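As a metric, the signal-to-noise ratio defined above reduces to a simple proportion. The counts below are hypothetical:

```python
# Sketch: signal-to-noise ratio of a monitoring system, computed as the
# fraction of alerts that were genuine and actionable. Counts are hypothetical.

def signal_to_noise(actionable_alerts, total_alerts):
    if total_alerts == 0:
        return 1.0  # no alerts means no noise
    return actionable_alerts / total_alerts

# 12 of 80 alerts last week led to a real action:
print(signal_to_noise(12, 80))  # 0.15: most alerts are noise
```

A ratio this low is the quantitative signature of monitoring fatigue: attending to every alert costs far more than the rare actionable one returns.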
A/B testing: a controlled experimental method for comparing two versions of a system by running them simultaneously under equivalent conditions and measuring their performance against pre-defined metrics to determine which performs better based on data rather than intuition
Agent versus agent comparison: the A/B testing approach applied to cognitive agents where two competing agents serving the same function are compared on shared metrics to reveal which excels on which dimension and to identify trade-offs between agents
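An agent-versus-agent comparison on shared metrics can be sketched as follows. The agents, metric names, and scores are invented for illustration:

```python
# Sketch: agent-versus-agent comparison (A/B style) on shared metrics.
# Agents, metrics, and scores are hypothetical.

agent_a = {"accuracy": 0.92, "latency_s": 1.8, "cost": 0.40}
agent_b = {"accuracy": 0.88, "latency_s": 0.9, "cost": 0.25}

def compare(a, b, higher_is_better):
    """Return, per shared metric, which agent wins."""
    winners = {}
    for metric in a:
        if higher_is_better[metric]:
            winners[metric] = "A" if a[metric] >= b[metric] else "B"
        else:
            winners[metric] = "A" if a[metric] <= b[metric] else "B"
    return winners

result = compare(agent_a, agent_b,
                 {"accuracy": True, "latency_s": False, "cost": False})
print(result)  # accuracy favors A; latency and cost favor B
```

The per-metric breakdown is the point: rather than crowning one winner, the comparison exposes the trade-off, which is exactly what the definition says this technique should reveal.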
Monitoring: the systematic collection and analysis of data about agent performance and environmental conditions to inform decision-making and optimization
Observation theater: the performance of paying attention to monitoring data without the substance of acting on what attention reveals
OODA loop: a decision-making framework consisting of Observe, Orient, Decide, and Act phases that enables adaptive systems to continuously improve through rapid feedback cycles
Actionable metric: a monitoring metric that demonstrates clear cause and effect, is accessible to those who need to act on it, and is auditable for accuracy
Data-informed decision making: a decision-making approach that treats monitoring data as essential input but not sole input, integrating quantitative metrics with contextual knowledge and qualitative judgment
Plan-Do-Check-Act (PDCA) cycle: a four-step scientific process for iterative improvement consisting of Plan (identify gap and hypothesize change), Do (implement change on small scale), Check (measure results), and Act (adopt, modify, or abandon change based on results)
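One pass of the four-step cycle above can be sketched as a function. The change, the metric, and the acceptance rule here are hypothetical placeholders, not part of the PDCA definition itself:

```python
# Sketch: a single PDCA pass. Plan/Do/Check/Act map onto the steps below.
# The change, metric, and acceptance rule are hypothetical.

def pdca_step(baseline_metric, apply_change, measure):
    # Plan: hypothesize that apply_change will improve the metric.
    # Do: implement the change on a small scale.
    apply_change()
    # Check: measure the result.
    new_metric = measure()
    # Act: adopt the change if it improved the metric, otherwise abandon it.
    if new_metric > baseline_metric:
        return ("adopt", new_metric)
    return ("abandon", new_metric)

# Toy example: the "change" bumps a stored score.
state = {"score": 0.70}
result = pdca_step(0.70,
                   apply_change=lambda: state.update(score=0.75),
                   measure=lambda: state["score"])
print(result)  # ('adopt', 0.75)
```

In practice the "abandon" branch would also revert the change; the sketch omits that to keep one cycle visible at a glance.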
Bottleneck-first optimization: the practice of identifying and improving only the constraint in a system, since improvements at non-constraint steps do not change system output
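The claim that only constraint improvements change system output follows from throughput being the minimum over stages, which a few lines make concrete. Stage names and rates are hypothetical:

```python
# Sketch: bottleneck-first optimization. System throughput is the minimum
# stage throughput, so only improving the constraint changes output.
# Stage names and rates (units per hour) are hypothetical.

stages = {"draft": 10, "review": 4, "publish": 12}

def system_throughput(stages):
    return min(stages.values())

print(system_throughput(stages))  # 4, limited by "review"

stages["publish"] = 20            # improving a non-constraint: no effect
print(system_throughput(stages))  # still 4

stages["review"] = 8              # improving the bottleneck: output doubles
print(system_throughput(stages))  # 8
```

Note that after the bottleneck is relieved, the constraint moves elsewhere ("draft" at 10 is next), so bottleneck-first optimization is inherently iterative.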
Compounding: the multiplicative accumulation of small, consistent improvements that build upon each other to produce exponential growth over time, where each iteration enhances the effectiveness of subsequent iterations through system interactions and retained gains
Marginal gain: a small, measurable improvement (typically 1%) made to a specific component or dimension of a system that, when accumulated across many such improvements, produces exponential growth through compounding effects
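The arithmetic behind the two definitions above is worth making explicit: retained 1% gains compound multiplicatively, not additively.

```python
# Sketch: compounding of marginal gains. A 1% improvement, retained and
# repeated, grows multiplicatively rather than additively.

def compound(gain_per_step, steps):
    return (1 + gain_per_step) ** steps

print(round(compound(0.01, 365), 1))  # ~37.8x after a year of daily 1% gains
print(round(0.01 * 365, 2))           # the additive view predicts only 3.65
```

The gap between the two printed numbers is the whole argument for marginal gains: the additive intuition underestimates a year of 1% improvements by an order of magnitude.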
Optimization: the process of improving a system, process, or outcome by systematically adjusting inputs or parameters to maximize performance within constraints, where each successive improvement requires exponentially more effort to achieve proportionally smaller gains due to diminishing returns