The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When stuck on a problem, write about the stuckness itself: describe the problem, what you've tried, and what you expected versus what happened, because the narrative structure often produces resolution by the third paragraph.
In blameless postmortems, frame questions as 'what was the system state at [time]' and 'what information was available' rather than 'who caused this' to shift from evaluation to observation and enable information sharing.
Before acting on snap judgments during debugging or incident response, read system logs and dashboards for five minutes without proposing theories to prevent hypothesis anchoring from corrupting observation.
After any operational failure, write a blameless post-mortem using five questions: what happened (factual description), what was the timeline of contributing events, what were the systemic factors, what are the action items (specific system changes), and what would have prevented this.
In technical postmortems, use 'how' questions ('how did the deployment occur') rather than 'why' questions ('why did you skip review') because 'how' elicits description while 'why' elicits defensive justification.
When debugging with strong initial hypotheses about root cause, deliberately search logs for evidence that would falsify the hypothesis rather than confirm it, to counteract confirmation bias in data collection.
During incident response, enforce a mandatory 5-minute observation period where team members only report dashboard data and log patterns before anyone proposes a causal theory.
Before committing to a hypothesis about a bug's cause, write one sentence completing 'What would I expect to see if I were wrong?' then specifically search for that evidence before continuing the investigation.
When confidence in a technical conclusion exceeds 8/10, treat that high confidence as a trigger to increase scrutiny and deliberately search for disconfirming evidence rather than reducing verification effort.
When encountering a familiar system or codebase, force yourself to describe what you observe using only concrete sensory details for 10 minutes before applying any evaluative categorization or pattern labels.
When a system reports all-green status but something feels wrong, immediately check for missing log streams, absent metrics, or silent services rather than trusting the presence of positive signals alone.
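The absence check above can be sketched in a few lines. This is a minimal illustration, not a monitoring tool: the stream names and the `expected_streams` set are hypothetical assumptions, standing in for whatever inventory of log streams and metrics your system should be emitting.

```python
# Hypothetical sketch: detect *absent* signals rather than trusting green ones.
# The stream names and expected set below are illustrative assumptions.

def find_silent_sources(expected_streams, streams_seen_recently):
    """Return expected log/metric streams that have gone quiet.

    A dashboard can show all-green simply because a dead service has
    stopped emitting the very metrics that would have turned it red.
    """
    return sorted(set(expected_streams) - set(streams_seen_recently))

expected = {"api.requests", "worker.heartbeat", "db.connections"}
seen = {"api.requests", "db.connections"}

print(find_silent_sources(expected, seen))  # ['worker.heartbeat']
```

The key design choice is that the check is driven by an explicit inventory of expected signals, so silence becomes a reportable fact rather than an invisible gap.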
When reviewing code or data, trace actual execution paths or data trends variable-by-variable rather than pattern-matching from function names or headline numbers, because the gap between assumed behavior and actual behavior is where critical issues hide.
When drafting incident postmortems or failure analyses, complete the timeline of observable events (with timestamps and measurements) before writing any causal analysis, because mixing observation and explanation during collection produces defensive filtering.
When the same symptom triad precedes system failures across three independent incidents, document it as a named detection pattern and build an automated alert triggered by that specific combination.
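A named triad pattern can be encoded as a set-membership test, so the alert fires only on the full combination and never on a single symptom. The pattern name and symptom strings below are hypothetical examples, not references to any real alerting system.

```python
# Hypothetical sketch of a named detection pattern: alert only when the
# full symptom combination is active, never on any one symptom alone.
# The pattern name and symptom labels are illustrative assumptions.

SLOW_GC_CASCADE = {"gc_pause_p99_high", "queue_depth_rising", "heartbeat_jitter"}

def triad_alert(active_symptoms, pattern=SLOW_GC_CASCADE):
    """Return True when every symptom in the named pattern is active."""
    return pattern.issubset(active_symptoms)

print(triad_alert({"gc_pause_p99_high", "queue_depth_rising"}))  # False
print(triad_alert(SLOW_GC_CASCADE | {"cpu_spike"}))              # True
```

Extra symptoms beyond the triad do not suppress the alert; only a missing member of the named combination does.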
Write blockers in the form 'I cannot [specific action] because [specific obstacle]' immediately upon noticing friction to convert ill-structured problems into solvable ones.
When stuck on a problem for more than 30-60 minutes at your current level of abstraction, force yourself to spend at least 5-10 minutes at an adjacent level (one step more abstract or one step more concrete) before continuing.
Validate each atomic component of a compound schema independently before trusting the complete structure, because compound failures provide no diagnostic information about which component broke.
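Component-wise validation can be sketched as a checker that collects every per-field failure instead of returning one pass/fail verdict for the whole record. The schema below (`id`, `email`, `age`) is a hypothetical example chosen only to show the shape of the technique.

```python
# Hypothetical sketch: validate each field of a compound record on its own,
# collecting every failure, instead of one pass/fail over the whole structure.
# The schema (id, email, age) is an illustrative assumption.

def validate_record(record):
    """Return a list of (field, reason) failures; an empty list means valid."""
    failures = []
    if not isinstance(record.get("id"), int):
        failures.append(("id", "must be an integer"))
    if str(record.get("email", "")).count("@") != 1:
        failures.append(("email", "must contain exactly one '@'"))
    if not (record.get("age") is None or 0 <= record["age"] <= 150):
        failures.append(("age", "must be in 0..150 or absent"))
    return failures

bad = {"id": "7", "email": "alice@example.com", "age": 200}
print(validate_record(bad))  # pinpoints 'id' and 'age', exonerates 'email'
```

A single boolean over the compound record would say only "invalid"; the per-component list says exactly which atoms broke, which is the diagnostic information the atom above is about.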
When a designed agent fails to fire consistently after two weeks, diagnose whether the trigger is not salient enough, the condition is too restrictive, or the action requires too much effort, because each failure type requires different corrections.
When reverse-engineering a default agent, write down all three components (trigger, condition, action) even if the condition is 'always' or appears absent, because making the implicit condition explicit reveals where the default fires indiscriminately.
When an agent fires below 80% of expected opportunities over 30 days, reduce it to the simplest executable version before adding any complexity, because unreliable agents cannot be improved through sophistication.
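The 80%-over-30-days rule reduces to a fire-rate ratio against a threshold. A minimal sketch, assuming you already count fires and opportunities per agent over the review window:

```python
# Hypothetical sketch: flag an agent for simplification when its fire rate
# over the review window drops below the 80% threshold from the rule above.

def needs_simplification(fires, opportunities, threshold=0.8):
    """True when the agent fired in fewer than `threshold` of its chances."""
    if opportunities == 0:
        return False  # no opportunities yet; nothing to conclude
    return fires / opportunities < threshold

print(needs_simplification(fires=19, opportunities=30))  # True: 63% < 80%
print(needs_simplification(fires=27, opportunities=30))  # False: 90%
```

The zero-opportunity guard matters: an agent that never had a chance to fire is not the same as one that failed to fire, and conflating them would trigger spurious simplification.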
Maintain a failure log where every agent misfire is recorded with date, agent name, what happened, and hypothesis about why, then review weekly to extract patterns.
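The failure log above has a natural structure: append one entry per misfire with the four required fields, then aggregate by agent at review time. A minimal sketch, with hypothetical agent names and dates:

```python
# Hypothetical sketch of the failure log: append structured entries, then
# group by agent for the weekly review to surface repeat offenders.
# Agent names, dates, and hypotheses below are illustrative assumptions.

from collections import Counter
from datetime import date

failure_log = []

def record_misfire(agent, what_happened, hypothesis, on=None):
    """Append one misfire entry with the four required fields."""
    failure_log.append({
        "date": on or date.today().isoformat(),
        "agent": agent,
        "what_happened": what_happened,
        "hypothesis": hypothesis,
    })

def weekly_review():
    """Count misfires per agent; the most frequent is reviewed first."""
    return Counter(entry["agent"] for entry in failure_log).most_common()

record_misfire("morning-review", "never triggered", "trigger not salient", on="2024-05-06")
record_misfire("morning-review", "fired at wrong time", "condition too loose", on="2024-05-08")
record_misfire("blocker-note", "wrote vague blocker", "action underspecified", on="2024-05-09")
print(weekly_review())  # [('morning-review', 2), ('blocker-note', 1)]
```

Keeping the hypothesis as a stored field, rather than as a mental note, is what makes the weekly pattern-extraction pass possible.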
When an agent fails, diagnose which component broke—trigger (never activated), condition (activated but context wasn't right), or action (executed but too vague/complex)—before attempting any redesign.
Fix only one component (trigger, condition, or action) per agent iteration rather than redesigning multiple components simultaneously, to maintain causal attribution of what changes produced which effects.
When a trigger has not fired successfully in one week despite being needed, redesign it immediately by changing at least one of: salience, timing, modality, or context specificity.