The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules
Effectiveness: the degree to which an agent produces the intended outcome, as measured by whether the intended change in the world actually occurs when the agent fires
Time-to-fire: the latency interval between when a cognitive agent's trigger condition is detected and when the agent's response is initiated, measured as the temporal distance between trigger awareness and activation awareness
False positive rate: the proportion of cognitive agent activations that occur when the trigger condition is not actually present, calculated as false positives divided by total activations
False negative rate: the proportion of actual positive cases that a detection system fails to identify, measured as the complement of recall (FNR = 1 - Recall); a false negative occurs when an agent fails to fire when it should, leaving the system exposed to undetected problems
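The two error rates above reduce to simple ratios over activation counts. A minimal sketch in Python; the function names and the count values are illustrative, not part of the curriculum:

```python
def false_positive_rate(false_positives: int, total_activations: int) -> float:
    """Fraction of firings where the trigger condition was not actually present."""
    return false_positives / total_activations

def false_negative_rate(true_positives: int, false_negatives: int) -> float:
    """Complement of recall: fraction of real trigger conditions the agent missed."""
    recall = true_positives / (true_positives + false_negatives)
    return 1 - recall

# Illustrative counts: 40 activations, 8 of them spurious; 32 hits, 6 misses
print(false_positive_rate(8, 40))   # 0.2
print(false_negative_rate(32, 6))   # ~0.158
```

Note that both rates need denominators drawn from different populations: the false positive rate is normalized over what the agent did (activations), while the false negative rate is normalized over what the world presented (actual positive cases).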
Agent retirement: the deliberate identification and removal of cognitive agents that have become obsolete or no longer serve their intended purpose due to environmental changes, representing a form of active unlearning
Monitoring overhead: the cognitive, time, and emotional cost of observing and tracking agent performance that must be justified by the value it provides in enabling decisions or actions
Self-monitoring: the practice of systematically observing, recording, and analyzing one's own cognitive agents, behaviors, and mental processes to identify patterns, evaluate performance, and enable targeted improvements
Monitoring journal: a structured written instrument used to record and analyze the performance of cognitive agents, including agent activation status, effectiveness ratings, and contextual factors to reveal patterns invisible through casual observation
Reflection-on-action: the retrospective process of reviewing past behavior to reconstruct reasoning, evaluate outcomes against intentions, and identify opportunities for learning and improvement
Accountability loop: the feedback mechanism created when monitoring an agent produces data that makes the observer a stakeholder in the agent's performance, generating commitment and responsibility through reactivity, commitment-consistency, and observation effects
Reactivity: the phenomenon where self-monitoring alters the behavior being monitored through mechanisms including self-regulation, cuing, and external consequence activation
Commitment-consistency loop: the psychological mechanism where making an explicit commitment to monitor an agent creates internal pressure to behave consistently with that commitment, amplified by written, public, and active forms of commitment
Alert threshold: a predefined numerical boundary that distinguishes normal operational performance from conditions requiring attention or investigation, established in advance of observing deviations
Trend analysis: the practice of examining how metrics change over time to detect gradual degradation or improvement patterns that would be invisible in point-in-time assessments, enabling early intervention before thresholds are crossed
Common-cause variation: the inherent, natural fluctuation within a stable system that produces normal performance variations without requiring investigation or intervention, distinguishable from special-cause variation through trend analysis
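The three definitions above combine naturally: comparing a rolling average against a pre-set threshold detects sustained trends while letting common-cause fluctuation pass unremarked. A minimal sketch, assuming effectiveness scores in [0, 1]; the function name, window size, and score series are invented for illustration:

```python
from statistics import mean

def crossed_threshold(scores, threshold, window=5):
    """Alert only when the rolling mean of the last `window` scores falls
    below a pre-set threshold, so a one-off common-cause dip does not fire
    an alert but a sustained downward trend does."""
    if len(scores) < window:
        return False
    return mean(scores[-window:]) < threshold

# Sustained degradation: rolling mean of the last 5 scores is 0.716
print(crossed_threshold([0.82, 0.80, 0.78, 0.74, 0.71, 0.69, 0.66], threshold=0.75))  # True
# One-off dip inside normal variation: rolling mean stays at 0.772
print(crossed_threshold([0.82, 0.80, 0.60, 0.81, 0.83], threshold=0.75))  # False
```

Setting `threshold` before observing any deviation is what makes this an alert threshold in the sense defined above, rather than a post-hoc rationalization of whatever the data happens to show.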
Monitoring fatigue: the cognitive phenomenon where excessive monitoring signals overwhelm attentional capacity, leading to systematic ignoring of meaningful alerts through attentional saturation and signal dilution; as a result, monitoring systems designed to prevent missed signals instead cause critical signals to be missed
Signal-to-noise ratio: the proportion of monitoring alerts that are genuine and actionable versus those that are false, irrelevant, or redundant, used as the primary metric for evaluating the effectiveness of a monitoring system
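As defined above, the signal-to-noise ratio is a plain proportion over alert counts. A minimal sketch; the function name and the weekly alert counts are illustrative assumptions:

```python
def signal_to_noise(actionable_alerts: int, total_alerts: int) -> float:
    """Fraction of alerts that are genuine and actionable; the remainder
    are false, irrelevant, or redundant noise."""
    return actionable_alerts / total_alerts

# Illustrative week of monitoring: 120 alerts raised, 18 worth acting on
print(signal_to_noise(18, 120))  # 0.15 -- a ratio this low invites monitoring fatigue
```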
A/B testing: a controlled experimental method for comparing two versions of a system by running them simultaneously under equivalent conditions and measuring their performance against pre-defined metrics to determine which performs better based on data rather than intuition
Agent versus agent comparison: the A/B testing approach applied to cognitive agents where two competing agents serving the same function are compared on shared metrics to reveal which excels on which dimension and to identify trade-offs between agents
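An agent-versus-agent comparison as described above scores two agents on shared metrics and reports a per-metric winner, surfacing trade-offs rather than a single global verdict. A minimal sketch, assuming both agents were run under equivalent conditions; the agent logs, metric names, and higher-is-better flags are invented for illustration:

```python
from statistics import mean

def compare_agents(a, b, metrics):
    """Compare two agents on shared metrics.

    `metrics` maps metric name -> True if higher values are better
    (e.g. effectiveness) or False if lower values are better
    (e.g. time-to-fire). Returns the per-metric winner."""
    winners = {}
    for name, higher_is_better in metrics.items():
        a_val, b_val = mean(a[name]), mean(b[name])
        if higher_is_better:
            winners[name] = "A" if a_val >= b_val else "B"
        else:
            winners[name] = "A" if a_val <= b_val else "B"
    return winners

# Hypothetical logs: agent A is more effective, agent B fires faster
agent_a = {"effectiveness": [0.90, 0.80, 0.85], "time_to_fire": [4.0, 5.0, 6.0]}
agent_b = {"effectiveness": [0.70, 0.75, 0.72], "time_to_fire": [2.0, 2.5, 2.0]}
print(compare_agents(agent_a, agent_b,
                     {"effectiveness": True, "time_to_fire": False}))
# {'effectiveness': 'A', 'time_to_fire': 'B'} -- a trade-off, not a clean winner
```

Defining the metrics and their directions before the comparison runs is what keeps this a controlled experiment rather than an after-the-fact justification.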
Monitoring: the systematic collection and analysis of data about agent performance and environmental conditions to inform decision-making and optimization
Observation theater: the performance of paying attention to monitoring data without the substance of acting on what attention reveals
OODA loop: a decision-making framework consisting of Observe, Orient, Decide, and Act phases that enables adaptive systems to continuously improve through rapid feedback cycles
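One pass of the OODA loop can be sketched as four pluggable stages feeding into each other; repeating the cycle rapidly is what produces the continuous improvement the definition describes. Everything below (the stage functions, the state dictionary, the retire-or-keep decision) is a hypothetical illustration, not a prescribed implementation:

```python
def ooda_cycle(observe, orient, decide, act, state):
    """One pass through Observe -> Orient -> Decide -> Act."""
    data = observe(state)            # Observe: collect raw performance data
    context = orient(data, state)    # Orient: interpret data against context
    choice = decide(context)         # Decide: pick a course of action
    return act(choice, state)        # Act: change the system, closing the loop

# Toy example: retire an agent whose effectiveness sits below its alert threshold
state = {"effectiveness": 0.55, "threshold": 0.70, "agent_active": True}
observe = lambda s: s["effectiveness"]
orient = lambda data, s: data < s["threshold"]   # is performance below threshold?
decide = lambda below: "retire" if below else "keep"

def act(choice, s):
    s["agent_active"] = (choice == "keep")
    return s

print(ooda_cycle(observe, orient, decide, act, state)["agent_active"])  # False
```

Because `act` returns the updated state, the output of one cycle is the input to the next, which is exactly the rapid feedback structure the definition attributes to adaptive systems.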
Actionable metric: a monitoring metric that demonstrates clear cause and effect, is accessible to those who need to act on it, and is auditable for accuracy
Data-informed decision making: a decision-making approach that treats monitoring data as essential input but not sole input, integrating quantitative metrics with contextual knowledge and qualitative judgment