The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Post-action review: a structured observation performed after task completion that compares intended outcomes against actual results to identify structural causes of gaps and implement specific process changes, conducted within 48 hours of completion using four questions (what was supposed to happen, what actually happened, why was there a difference, what will I do differently) to extract reliable learning from experience
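The four-question structure above can be captured as a simple record. A minimal sketch; the class and field names are illustrative assumptions, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class PostActionReview:
    """One review record; each field answers one of the four questions."""
    intended: str  # what was supposed to happen
    actual: str    # what actually happened
    cause: str     # why there was a difference
    change: str    # what I will do differently

    def has_gap(self) -> bool:
        """True when intended and actual outcomes diverge."""
        return self.intended != self.actual

review = PostActionReview(
    intended="deploy finishes in 10 minutes",
    actual="deploy took 45 minutes",
    cause="migration step ran serially",
    change="batch migrations before deploying",
)
```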
Error cascade: a chain of individually small errors that compound through a coupled system into a catastrophic outcome, where the severity of the cascade is determined by the coupling of the system, not the magnitude of the initial error
Graceful degradation: the design of systems to fail partially rather than completely, where systems continue to operate at reduced capacity when components fail, preserving core functions while maintaining structural integrity across multiple operating modes
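A minimal sketch of the fail-partially idea: try each data source in turn, and when all of them fail, serve a cached last-known-good value rather than failing completely. All names (`fetch_price`, the feeds, the cache contents) are illustrative assumptions:

```python
CACHE = {"ACME": 99.0}  # last known good values: the preserved core function

def live_feed(symbol):
    raise ConnectionError("feed down")  # simulate a failed component

def backup_feed(symbol):
    raise TimeoutError("backup slow")   # simulate a second failed component

def fetch_price(symbol, sources):
    """Operate at reduced capacity when components fail: try each source,
    then fall back to the cache instead of raising."""
    for source in sources:
        try:
            return source(symbol), "live"
        except Exception:
            continue
    return CACHE[symbol], "degraded"

price, mode = fetch_price("ACME", [live_feed, backup_feed])
```

The second element of the return value makes the operating mode explicit, so callers know they are in a degraded mode rather than silently trusting stale data.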
Tight coupling: a system property where processes happen fast, sequences are invariant, and there is little slack or buffer between steps, causing failures to propagate automatically through dependent links without time for intervention or correction
Recovery procedure: a documented, pre-decided, pre-rehearsed sequence of specific steps that restore a process to working state after a known category of failure, designed to eliminate improvisation and replace panic with protocol
Blame instinct: the deeply wired human cognitive tendency to focus on identifying a specific person responsible for an error rather than understanding the systemic conditions that produced the error, which prevents learning and leads to ineffective responses
Systemic conditions: the structural, environmental, and process-level factors within an organization or personal cognitive infrastructure that make errors likely or probable, rather than individual character traits or temporary circumstances
Error pattern: a recurring sequence of similar mistakes that emerges when multiple instances of the same type of error are analyzed together, revealing structural weaknesses in systems rather than individual character flaws
System error: an error that originates from structural features of a system rather than individual worker performance, representing common cause variation that produces predictable patterns and requires systemic fixes rather than personal effort
Automated detection: the use of tools, systems, or processes that continuously monitor for specific error patterns without human intervention, leveraging consistent pattern-matching capabilities to identify errors that manual vigilance would miss due to biological constraints
Vigilance decrement: the predictable and universal degradation of sustained attention and error detection performance over time, resulting from the biological depletion of cognitive resources rather than character flaws or lack of motivation
Error correction: the cognitive and resource-intensive process of detecting, diagnosing, and fixing mistakes that consumes time, attention, and metabolic resources while producing measurable opportunity costs and context-switching overhead
Self-correcting system: a cognitive or behavioral architecture that automatically detects errors, diagnoses root causes, and executes corrective actions without requiring conscious human intervention or deliberate attention to trigger each correction
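The detect-diagnose-correct cycle can be sketched as a loop over known error conditions, each mapped to a pre-decided fix. The state keys and correction actions here are hypothetical placeholders:

```python
state = {"disk_full": True, "service_up": False}

# Each known error condition maps to a pre-decided corrective action.
corrections = {
    "disk_full":  lambda s: s.update(disk_full=False),   # e.g. rotate logs
    "service_up": lambda s: s.update(service_up=True),   # e.g. restart service
}

def self_correct(state):
    """Detect error conditions, look up the matching fix, and execute it,
    with no human attention required to trigger each correction."""
    fixed = []
    if state["disk_full"]:
        corrections["disk_full"](state)
        fixed.append("disk_full")
    if not state["service_up"]:
        corrections["service_up"](state)
        fixed.append("service_up")
    return fixed

fixed = self_correct(state)
```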
Scope collision: the structural inevitability that occurs when two cognitive agents operate simultaneously, their scopes intersect, and they issue incompatible instructions for the same situation
Shared state: the information that multiple agents can read from and write to during their operation, serving as the common ground that enables coordination between agents and functioning as the information substrate that allows agents to inform each other
Priority ordering: a context-specific ranked list of cognitive agents that determines which agent wins when two or more agents conflict over the same situation or resource, with higher-ranked agents taking precedence over lower-ranked agents during moments of collision
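A context-specific ranked list reduces conflict resolution to an index lookup. A sketch with hypothetical agent names; the context key and ranking are assumptions:

```python
# One ranked list per context; earlier in the list = higher priority.
PRIORITY = {"writing": ["deadline_agent", "style_agent", "research_agent"]}

def resolve(context, contenders):
    """Return the highest-ranked contender for this context."""
    ranking = PRIORITY[context]
    return min(contenders, key=ranking.index)

winner = resolve("writing", {"style_agent", "deadline_agent"})
```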
Agent sequencing: the explicit definition of the temporal order in which cognitive agents must execute to ensure that each agent receives the necessary outputs from preceding agents as inputs, where dependencies between agents determine the required execution order
Parallel versus sequential agent execution: the structural analysis of dependencies that determines whether agents can run simultaneously or must wait for prior results, where independent agents may run in parallel for speedup while dependent agents must run sequentially to preserve correctness
Dependency graph: a Directed Acyclic Graph (DAG) representing tasks as nodes and directed edges showing which tasks must complete before others can begin, with tasks having no incoming edges starting immediately and the longest chain of dependent tasks determining the minimum project completion time
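The "longest chain determines minimum completion time" claim can be checked with a short recursion over a DAG. The task names, durations, and edges below are illustrative assumptions:

```python
from functools import lru_cache

# Tasks as nodes; deps[t] lists the tasks that must finish before t begins.
durations = {"design": 2, "build": 4, "test": 1, "docs": 2}
deps = {"build": ["design"], "test": ["build"], "docs": ["design"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Length of the longest dependent chain ending at `task`."""
    return durations[task] + max(
        (earliest_finish(d) for d in deps.get(task, [])), default=0)

# The longest chain over all tasks is the minimum project completion time.
minimum_completion = max(earliest_finish(t) for t in durations)
```

Here "design" has no incoming edges and starts immediately; the chain design (2) then build (4) then test (1) is the longest path, so the project cannot finish in fewer than 7 units.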
Agent communication protocol: a structured, explicit agreement about what information gets transmitted, in what format, with what metadata, and under what constraints, that defines the communicative act type, structured payload, context window, completion criteria, and acknowledgment mechanism between cognitive agents
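The five elements of the definition map naturally onto a message schema. A sketch, not a standard format; the field names simply mirror the definition above:

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    act: str                 # communicative act type, e.g. "request", "inform"
    payload: dict            # structured payload
    context: dict            # context window the receiver needs
    done_when: str           # completion criteria
    needs_ack: bool = True   # acknowledgment mechanism

msg = AgentMessage(
    act="request",
    payload={"task": "summarize", "source": "notes.md"},
    context={"deadline": "friday"},
    done_when="summary under 200 words",
)
```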
Orchestrator agent: a meta-agent that coordinates other cognitive agents by deciding which should run when, operating at a different level of abstraction from the agents it coordinates, with visibility into system state, a policy for translating state into sequencing decisions, and authority to activate or suppress agents
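A minimal sketch of an orchestrator policy: each agent declares a trigger over system state plus a suppression flag, and the orchestrator translates state into the next activation. Agent names and triggers are hypothetical:

```python
def orchestrate(state, agents):
    """Return the first agent whose trigger matches the current state
    and is not suppressed; None when no agent should run."""
    for name, (trigger, suppressed) in agents.items():
        if not suppressed and trigger(state):
            return name
    return None

# Registration order encodes the orchestrator's sequencing policy.
agents = {
    "error_handler": (lambda s: s["errors"] > 0, False),
    "planner":       (lambda s: s["queue_empty"], False),
}
next_agent = orchestrate({"errors": 2, "queue_empty": True}, agents)
```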
Context hand-off: the structured transfer of cognitive state and information from one cognitive agent to another at the boundary where one agent finishes and another begins, requiring explicit packaging of output in a format that the receiving agent can immediately use
Deadlock: a structural coordination failure in multi-agent systems where two or more agents each wait for the other to release a resource, creating a circular dependency that prevents any agent from proceeding, a failure with precise structural causes and solutions rather than a motivational or personal one
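The circular dependency can be made concrete as a wait-for graph: each edge means "this agent waits on a resource held by that agent", and a cycle in the graph is exactly the deadlock. A sketch with hypothetical agent names:

```python
# Each agent maps to the agent it is waiting on; two agents waiting on
# each other form the cycle that defines deadlock.
waits_for = {"agent_a": "agent_b", "agent_b": "agent_a"}

def has_deadlock(waits_for):
    """Detect a cycle in the wait-for graph by walking each chain."""
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:
            if node in seen:
                return True
            seen.add(node)
            node = waits_for[node]
    return False

deadlocked = has_deadlock(waits_for)
```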
Coffman conditions: the four conditions that must all hold simultaneously for deadlock to occur in multi-agent systems—mutual exclusion, hold and wait, no preemption, and circular wait—which together form a diagnostic framework, since removing any one of the four prevents deadlock
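A standard way to break the circular-wait condition is to impose one global acquisition order on all locks: if every agent acquires in that order, no cycle of mutual waiting can form. A sketch using Python's `threading.Lock`; the specific locks and ordering are illustrative:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
# One global order over all locks; every agent must acquire in this order.
LOCK_ORDER = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    """Sort requested locks by the global order before acquiring any,
    breaking the circular-wait condition; returns locks as acquired."""
    ordered = sorted(locks, key=lambda l: LOCK_ORDER[id(l)])
    for lock in ordered:
        lock.acquire()
    return ordered

# Even when requested "backwards", acquisition still follows the global order.
held = acquire_in_order(lock_b, lock_a)
for lock in held:
    lock.release()
```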