The irreducible epistemic atoms underlying the curriculum: 2,888 atoms across 3 types and 2 molecules.
After explaining a system to someone with zero context, revisit any point where you said 'obviously' or 'of course' and document the unstated assumption those words concealed.
Set three random timers throughout your workday; when each fires, pause for 30 seconds to scan jaw, shoulders, chest, stomach, and hands for tension, rating each 1-5 and noting your current activity.
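The random-timer protocol above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: the `schedule_scan_times` helper, the default 09:00-17:00 workday, and the `SCAN_AREAS` list are assumptions for the example.

```python
import random
from datetime import datetime, timedelta

def schedule_scan_times(start="09:00", end="17:00", count=3):
    """Pick `count` random moments in the workday for a 30-second tension scan.

    Randomness matters: fixed times let you brace for the check-in,
    which defeats the purpose of catching tension you weren't tracking.
    """
    fmt = "%H:%M"
    t0 = datetime.strptime(start, fmt)
    t1 = datetime.strptime(end, fmt)
    span = int((t1 - t0).total_seconds())
    # Sample distinct second-offsets, then render them back as clock times.
    offsets = sorted(random.sample(range(span), count))
    return [(t0 + timedelta(seconds=s)).strftime("%H:%M") for s in offsets]

# The five areas to rate 1-5 at each timer, per the practice above.
SCAN_AREAS = ["jaw", "shoulders", "chest", "stomach", "hands"]
```

Feed the returned times into whatever reminder app you already use; the point is only that the moments are chosen for you, not by you.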
When a system reports all-green status but something feels wrong, immediately check for missing log streams, absent metrics, or silent services rather than trusting the presence of positive signals alone.
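Checking for the absence of signals can be automated. A minimal sketch, assuming each source reports a last-seen timestamp; the function name, the 300-second staleness threshold, and the source names in the usage are all hypothetical.

```python
import time

def missing_signals(expected_sources, last_seen, max_age_s=300, now=None):
    """Flag expected sources that are absent or stale.

    A dashboard can only show green for signals that arrive; a service
    that stopped reporting entirely produces no red light. So iterate
    over what *should* be present, not over what is.
    """
    now = time.time() if now is None else now
    problems = {}
    for src in expected_sources:
        ts = last_seen.get(src)
        if ts is None:
            problems[src] = "never reported"
        elif now - ts > max_age_s:
            problems[src] = f"silent for {int(now - ts)}s"
    return problems
```

The design choice worth copying is the direction of iteration: the loop runs over the expected inventory, so a missing stream is a finding rather than an invisible gap.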
When you feel chest tightening or jaw tension during a design review or technical discussion, write down 'I notice I want to protect [X]' before formulating your response to create separation between observation and defense.
Before sending difficult emails or presenting challenging conclusions, run your draft through fact-story filtering by asking which statements would survive if you had to prove them with timestamps, screenshots, or measurements, because this prevents narrative from masquerading as evidence.
When reviewing code or data, trace actual execution paths or data trends variable-by-variable rather than pattern-matching from function names or headline numbers, because the gap between assumed behavior and actual behavior is where critical issues hide.
After any event producing strong reactions, spend 90 seconds recording observations in a two-column format (left: camera-recordable facts, right: interpretations) before analysis, because this separation prevents retroactive rewriting of evidence to fit conclusions.
When drafting incident postmortems or failure analyses, complete the timeline of observable events (with timestamps and measurements) before writing any causal analysis, because mixing observation and explanation during collection produces defensive filtering.
Before prompting AI to analyze meeting transcripts or documents, explicitly request separated outputs: a first section listing only observable facts without interpretation, and a second section offering interpretations of those observations.

In team contexts where observation must be separated from evaluation, make the phase transition explicit through verbal announcement ('We are now switching from observation mode to evaluation mode'), because implicit transitions allow the two modes to collapse into each other despite individual intentions.
When you encounter the same judgment arising across three or more different contexts, treat it as a structural cognitive habit requiring explicit examination rather than a series of independent assessments, because cross-context repetition indicates the judgment is executing from pattern-matching rather than situation-specific analysis.
For habitual judgments that fire too quickly to intercept before they execute, implement a marking protocol where you log each occurrence with 'HJ' notation immediately after noticing it, because marking converts the judgment from invisible background process to observable object without requiring the neurologically impossible task of preventing automatic activation.
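The marking protocol is deliberately lightweight; an append-only log is enough. A minimal sketch, where the `mark_hj` helper and its field names are illustrative assumptions, not a specified format.

```python
from datetime import datetime

def mark_hj(log, context, judgment):
    """Append an 'HJ' entry the moment a habitual judgment is noticed.

    Logging happens *after* the judgment fires -- preventing the
    automatic activation isn't the goal (or neurologically possible);
    making it an observable, countable object is.
    """
    log.append({
        "tag": "HJ",
        "time": datetime.now().isoformat(timespec="seconds"),
        "context": context,
        "judgment": judgment,
    })
    return log
```

Reviewing the accumulated entries weekly is what surfaces the cross-context repetition the previous atom describes.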
When noticing a habitual judgment about a person or group, apply the substitution test by asking whether you would make the same evaluation if a different person or group were in the identical situation—if the answer is uncertain or negative, you have detected a judgment running on identity rather than behavior.
When a judgment forms during an interaction, immediately convert it into a genuine question by replacing the evaluative conclusion with an inquiry about constraints, context, or reasoning ('Why would anyone write it this way?' becomes 'What constraints or context led to this approach?'), because questions activate exploratory cognition while conclusions activate confirmatory cognition.
In code reviews or technical evaluations, frame feedback requests as requests for reasoning rather than requests for justification—ask 'What was your reasoning?' instead of 'Why did you do it this way?'—because the first activates analytical explanation while the second activates defensive explanation.
When building new observation skills, construct a five-level difficulty hierarchy ranging from trivial annoyances to identity-level triggers, then spend a minimum of one week at each level before progressing, because attempting high-stakes observation without low-stakes mastery produces reversion to automatic judgment under pressure.
When practicing observation at any difficulty level, stay at that level until the trigger becomes boring rather than advancing on a fixed schedule, because boredom signals that automatic judgment has been replaced by automatic observation at that complexity level.
When using AI to practice observation skills, provide it with your written accounts of charged situations and explicitly request separation of observational statements from evaluative statements, using the AI's output as immediate feedback on which judgments you embedded without noticing.
When conducting week-long observation audits of high-stakes domains, structure daily entries with physically separated sections for 'what I observed' and 'what my mind wanted to conclude,' keeping both sections visible simultaneously to train recognition of the observation-evaluation gap.
Before using AI for pattern analysis on observational data, ensure your input consists of descriptive observations rather than evaluative conclusions by applying the camera test to each input statement, because AI analyzing your conclusions produces confirmation of your biases rather than structural insights.
When a behavior, reaction, or outcome recurs three or more times across a 30-day period, classify it as a pattern candidate requiring structural analysis rather than treating it as coincidence.
Record each occurrence of a suspected pattern with date and context in a dedicated log before drawing conclusions, because memory-based frequency assessment systematically overestimates recurrence through the frequency illusion.
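The three-occurrences-in-30-days threshold from the two atoms above reduces to a count over a dated log. A minimal sketch, assuming each entry carries a `date` and a short `label`; the function name and defaults are illustrative.

```python
from datetime import date, timedelta

def pattern_candidates(log, window_days=30, threshold=3, today=None):
    """Return labels recurring `threshold`+ times within the window.

    Counting from a written log rather than from memory avoids the
    frequency illusion, which inflates how often we think things recur.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    counts = {}
    for entry in log:
        if entry["date"] >= cutoff:
            counts[entry["label"]] = counts.get(entry["label"], 0) + 1
    return [label for label, n in counts.items() if n >= threshold]
```

Anything this function returns is only a candidate: it earns structural analysis, not yet a conclusion.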
When the same symptom triad precedes system failures across three independent incidents, document it as a named detection pattern and build an automated alert triggered by that specific combination.
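A triad alert is just a set-membership check: fire only when all three named symptoms co-occur. A minimal sketch; the `QUEUE_STALL` triad and its symptom names are hypothetical examples, not symptoms from any real incident.

```python
def triad_alert(active_symptoms, triad):
    """Fire only when the full documented triad is present at once.

    Any single symptom alone is routine noise; it is the specific
    combination, seen across three independent incidents, that was
    documented as the detection pattern.
    """
    return triad <= set(active_symptoms)  # subset test: all three present

# Hypothetical named triad for illustration:
QUEUE_STALL = frozenset({"queue_depth_rising", "p99_latency_spike", "gc_pause_growth"})
```

In a real monitoring stack this predicate would drive an alert rule; the sketch only shows the combination logic.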
Name behavioral patterns using 2-4 word descriptors that compress the full trigger-response sequence into a recognizable label, enabling real-time pattern recognition under cognitive load.