The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Humans automatically and unconsciously fuse observation with interpretation in sub-second timeframes, making the phenomenological separation of raw perception from inference cognitively effortful and often impossible without deliberate training.
Defer emotional interpretation to review sessions, where multiple entries enable pattern recognition, rather than explaining emotions during initial capture.
Capture small, mundane surprises rather than filtering for 'important' ones, because small surprises reveal systematic blind spots that large surprises obscure.
When giving feedback in code reviews or technical discussions, state observable facts (nesting levels, exit paths, line numbers) before applying evaluative labels to enable problem-solving rather than defensiveness.
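A minimal sketch of this ordering as a data structure, assuming a hypothetical ReviewComment helper; the facts field holds only camera-observable measurements, and the evaluation is rendered after them:

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    """Hypothetical helper: observable facts first, evaluation second."""
    facts: list[str]   # camera-observable: line numbers, nesting depth, exit paths
    evaluation: str    # the judgment, stated only after the facts

    def render(self) -> str:
        fact_lines = "\n".join(f"- {f}" for f in self.facts)
        return f"Observations:\n{fact_lines}\n\nAssessment: {self.evaluation}"

comment = ReviewComment(
    facts=[
        "process_order() spans lines 120-215 (95 lines)",
        "nesting reaches 5 levels at line 164",
        "the function has 4 distinct return paths",
    ],
    evaluation="This may be hard to test; consider extracting the validation branch.",
)
print(comment.render())
```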
Before acting on snap judgments during debugging or incident response, read system logs and dashboards for five minutes without proposing theories to prevent hypothesis anchoring from corrupting observation.
Replace evaluative words that smuggle judgment ('interrupted,' 'ignored,' 'slammed') with camera-observable behavior descriptions ('began speaking while I was mid-sentence,' 'has not replied since Tuesday') in feedback conversations.
Before high-stakes observations (meetings, decisions, analyses), write down your current mood and strongest expectation about the outcome to make perceptual filters visible for later comparison against actual observations.
When debugging with strong initial hypotheses about root cause, deliberately search logs for evidence that would falsify the hypothesis rather than confirm it, to counteract confirmation bias in data collection.
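A toy sketch of falsification-first log search, assuming plain-text logs with one event per line and an illustrative hypothesis ("errors are caused by cache misses"); the search looks for error lines the hypothesis cannot explain:

```python
def falsifying_lines(log_lines, error_marker="ERROR", predicted_cause="cache miss"):
    """Yield error lines NOT preceded by the predicted cause (illustrative log format)."""
    prev = ""
    for line in log_lines:
        if error_marker in line and predicted_cause not in prev:
            yield line  # evidence against the hypothesis, not for it
        prev = line

sample = [
    "10:01 cache miss key=user:42",
    "10:01 ERROR timeout on /checkout",
    "10:07 ERROR timeout on /checkout",  # no preceding cache miss: counterevidence
]
for line in falsifying_lines(sample):
    print("falsifies hypothesis:", line)
```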
During incident response, enforce a mandatory 5-minute observation period where team members only report dashboard data and log patterns before anyone proposes a causal theory.
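One way to make the window mechanical rather than voluntary, sketched as a toy gate for an incident channel; the five-minute constant and the theory markers are assumptions for illustration:

```python
import time

INCIDENT_START = time.monotonic()
OBSERVATION_WINDOW_S = 5 * 60

def accept_message(text: str) -> bool:
    """Reject causal-theory messages during the observation window (toy heuristic)."""
    theory_markers = ("because", "caused by", "i think it's", "probably")
    in_window = time.monotonic() - INCIDENT_START < OBSERVATION_WINDOW_S
    if in_window and any(m in text.lower() for m in theory_markers):
        print("Observation window active: report dashboard data and log patterns only.")
        return False
    return True
```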
Before any recurring meeting or code review, spend 5 minutes writing down which topics never get discussed, which people never speak, and which failure modes are never mentioned, listing at least five absences.
When encountering a familiar system or codebase, force yourself to describe what you observe using only concrete sensory details for 10 minutes before applying any evaluative categorization or pattern labels.
When a system reports all-green status but something feels wrong, immediately check for missing log streams, absent metrics, or silent services rather than trusting the presence of positive signals alone.
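An absence check is easy to automate. A minimal sketch, assuming a hypothetical roster of services that should emit heartbeats and a window of recent log lines:

```python
EXPECTED_SERVICES = {"api", "worker", "billing", "notifications"}  # assumed roster

def silent_services(recent_log_lines):
    """Return expected services with no recent heartbeat; green dashboards miss these."""
    seen = {line.split()[1] for line in recent_log_lines if "heartbeat" in line}
    return EXPECTED_SERVICES - seen

logs = ["10:00 api heartbeat", "10:00 worker heartbeat", "10:00 billing heartbeat"]
print(silent_services(logs))  # {'notifications'} is absent, not failing
```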
Before sending difficult emails or presenting challenging conclusions, run your draft through fact-story filtering by asking which statements would survive if you had to prove them with timestamps, screenshots, or measurements, because this prevents narrative from masquerading as evidence.
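The classification itself is a judgment call, but the question can be enforced mechanically. A sketch with made-up draft sentences:

```python
draft = [
    "The deploy finished at 14:32 UTC.",
    "The team ignored the warning.",
    "Error rate rose from 0.2% to 4.1% between 14:35 and 14:40.",
]

kept, flagged = [], []
for sentence in draft:
    answer = input(f"Provable with a timestamp, screenshot, or measurement? [y/n] {sentence!r} ")
    (kept if answer.strip().lower() == "y" else flagged).append(sentence)

print("Evidence:", kept)
print("Story, not fact (rewrite or cut):", flagged)
```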
When reviewing code or data, trace actual execution paths or data trends variable-by-variable rather than pattern-matching from function names or headline numbers, because the gap between assumed behavior and actual behavior is where critical issues hide.
After any event producing strong reactions, spend 90 seconds recording observations in a two-column format (left: camera-recordable facts, right: interpretations) before analysis, because this separation prevents retroactive rewriting of evidence to fit conclusions.
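A sketch of the two-column entry as a data structure; the field names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TwoColumnEntry:
    """90-second capture: camera-recordable facts left, interpretations right."""
    timestamp: datetime = field(default_factory=datetime.now)
    observations: list[str] = field(default_factory=list)     # left column
    interpretations: list[str] = field(default_factory=list)  # right column

entry = TwoColumnEntry(
    observations=["Review contained 14 comments", "Reply arrived 3 days after request"],
    interpretations=["Reviewer is annoyed with me", "My code is seen as sloppy"],
)
```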
When drafting incident postmortems or failure analyses, complete the timeline of observable events (with timestamps and measurements) before writing any causal analysis, because mixing observation and explanation during collection produces defensive filtering.
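The ordering can be enforced structurally. A minimal sketch, with an invented Postmortem class that refuses analysis until the observable timeline is marked complete:

```python
from dataclasses import dataclass, field

@dataclass
class Postmortem:
    timeline: list[tuple[str, str]] = field(default_factory=list)  # (timestamp, observable event)
    timeline_complete: bool = False
    analysis: str = ""

    def add_event(self, ts: str, event: str) -> None:
        self.timeline.append((ts, event))
        self.timeline.sort()  # ISO-style timestamps sort chronologically

    def write_analysis(self, text: str) -> None:
        if not self.timeline_complete:
            raise RuntimeError("Finish the observable timeline before causal analysis.")
        self.analysis = text

pm = Postmortem()
pm.add_event("14:35:10", "p95 latency rose from 180 ms to 2.4 s")  # illustrative events
pm.add_event("14:32:05", "deploy of build 8121 completed")
pm.timeline_complete = True
pm.write_analysis("Latency rise followed the deploy by three minutes; ...")
```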
Before prompting AI to analyze meeting transcripts or documents, explicitly request separated outputs: first section lists only observable facts without interpretation, second section offers interpretations of those observations.
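A hypothetical prompt template implementing the two-section request; the headers and wording are illustrative, not a required format:

```python
SEPARATED_ANALYSIS_PROMPT = """\
Analyze the transcript below in two strictly separated sections.

SECTION 1 - OBSERVATIONS ONLY:
List only statements a camera or recorder could verify (who spoke, when,
what words were used, what numbers appeared). No adjectives of intent.

SECTION 2 - INTERPRETATIONS:
Offer interpretations, each explicitly tied to an observation from Section 1.

Transcript:
{transcript}
"""

prompt = SEPARATED_ANALYSIS_PROMPT.format(transcript="...")
```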
In team contexts where observation must be separated from evaluation, make the phase transition explicit through verbal announcement ('We are now switching from observation mode to evaluation mode'), because implicit transitions allow the two modes to collapse into each other despite individual intentions.
When building new observation skills, construct a five-level difficulty hierarchy ranging from trivial annoyances to identity-level triggers, then spend a minimum of one week at each level before progressing, because attempting high-stakes observation without low-stakes mastery produces reversion to automatic judgment under pressure.
When practicing observation at any difficulty level, stay at that level until the trigger becomes boring rather than advancing on a fixed schedule, because boredom signals that automatic judgment has been replaced by automatic observation at that complexity level.
When using AI to practice observation skills, provide it with your written accounts of charged situations and explicitly request separation of observational statements from evaluative statements, using the AI's output as immediate feedback on which judgments you embedded without noticing.
When conducting week-long observation audits of high-stakes domains, structure daily entries with physically separated sections for 'what I observed' and 'what my mind wanted to conclude,' keeping both sections visible simultaneously to train recognition of the observation-evaluation gap.
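Keeping both sections visible can be as simple as a side-by-side render; a sketch with invented entries and an arbitrary column width:

```python
observed = ["Standup ran 22 minutes", "3 tickets moved to Done"]
concluded = ["The team is unfocused", "We are behind schedule"]

width = 30
print(f"{'WHAT I OBSERVED':<{width}}| WHAT MY MIND WANTED TO CONCLUDE")
for fact, story in zip(observed, concluded):
    print(f"{fact:<{width}}| {story}")
```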
Before using AI for pattern analysis on observational data, ensure your input consists of descriptive observations rather than evaluative conclusions by applying the camera test to each input statement, because AI analyzing your conclusions produces confirmation of your biases rather than structural insights.
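The camera test is ultimately manual, but a rough heuristic can flag obvious failures before the data reaches the AI; the marker list is illustrative and deliberately incomplete:

```python
EVALUATIVE_MARKERS = {"ignored", "lazy", "sloppy", "rude", "always", "never", "obviously"}

def flag_evaluative(statements):
    """Yield statements containing common evaluative words (rough heuristic)."""
    for s in statements:
        hits = EVALUATIVE_MARKERS & set(s.lower().replace(",", "").split())
        if hits:
            yield s, hits

data = ["Bob ignored my message", "Bob has not replied since Tuesday"]
for statement, words in flag_evaluative(data):
    print(f"Fails camera test ({', '.join(sorted(words))}): {statement}")
```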
Record each occurrence of a suspected pattern with date and context in a dedicated log before drawing conclusions, because memory-based frequency assessment systematically overestimates recurrence through the frequency illusion.
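A minimal sketch of such a log, assuming a hypothetical local JSONL file; counting from the file replaces memory-based frequency guesses:

```python
import json
from datetime import date
from pathlib import Path

LOG = Path("pattern_log.jsonl")  # hypothetical location

def record(pattern: str, context: str) -> None:
    """Append one dated occurrence with its context."""
    with LOG.open("a") as f:
        f.write(json.dumps({"date": date.today().isoformat(),
                            "pattern": pattern, "context": context}) + "\n")

def count(pattern: str) -> int:
    """Count logged occurrences instead of estimating from memory."""
    if not LOG.exists():
        return 0
    with LOG.open() as f:
        return sum(1 for line in f if json.loads(line)["pattern"] == pattern)

record("meeting started late", "weekly sync, 9 minutes late")
print(count("meeting started late"))
```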