The irreducible epistemic atoms underlying the curriculum: 2,888 atoms across 3 types and 2 molecules.
Track your shallow-to-deep work ratio weekly, and treat any week exceeding 50% shallow as a signal of a structural calendar problem requiring role negotiation rather than more personal discipline.
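A minimal sketch of the weekly ratio check. The log format, category names, and the 50% threshold are taken from the atom above; everything else (function name, sample hours) is illustrative, not from the source.

```python
# Hypothetical weekly log: (hours, kind) entries, kind is "shallow" or "deep".

def shallow_ratio(entries):
    """Fraction of logged hours tagged as shallow work."""
    total = sum(hours for hours, _ in entries)
    shallow = sum(hours for hours, kind in entries if kind == "shallow")
    return shallow / total if total else 0.0

week = [(3, "shallow"), (2, "deep"), (4, "shallow"), (1, "deep")]
ratio = shallow_ratio(week)
if ratio > 0.5:  # threshold from the atom: >50% shallow is a structural signal
    print(f"{ratio:.0%} shallow: negotiate the calendar, don't blame discipline")
```

The point of automating this is consistency: a fixed weekly computation removes the temptation to reinterpret a bad week as an exception.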
Rate your decision quality, comprehension speed, and emotional regulation daily on a 1-5 scale across five consecutive workdays to detect attention debt accumulation before subjective awareness registers the degradation.
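The five-day rating log above can be reduced to a trend per metric. The metric names and 1-5 scale come from the atom; the least-squares slope and the -0.3/day alert threshold are illustrative assumptions, not from the source.

```python
# Hypothetical five-day self-rating log (1-5 scale per the atom above).
# A simple least-squares slope flags a downward drift before it feels obvious.

def trend(scores):
    """Least-squares slope of scores over days 0..n-1 (points per day)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

log = {
    "decision_quality": [4, 4, 3, 3, 2],
    "comprehension_speed": [4, 3, 3, 2, 2],
    "emotional_regulation": [3, 3, 3, 3, 3],
}

for metric, scores in log.items():
    if trend(scores) < -0.3:  # illustrative alert threshold
        print(f"{metric}: declining ({trend(scores):+.2f}/day), possible attention debt")
```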
Use automated time-tracking tools alongside manual 30-minute increment logs for three consecutive workdays to capture both digital activity and the non-digital context that neither method alone reveals.
Before reviewing any attention tracking data, write explicit predictions about your time allocation percentages across categories, then calculate prediction-reality gaps to identify your largest attention blind spots.
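The prediction-reality gap calculation above is mechanical once both sets of percentages exist. The category names and numbers below are illustrative placeholders, not data from the source; the technique is simply ranking categories by absolute gap.

```python
# Hypothetical predicted vs. actual time allocation (percent of the week).
# Write `predicted` BEFORE looking at tracking data, per the atom above.

predicted = {"deep work": 40, "meetings": 25, "email": 15, "interruptions": 20}
actual    = {"deep work": 22, "meetings": 30, "email": 28, "interruptions": 20}

# Rank categories by absolute gap; the top entry is the biggest blind spot.
gaps = sorted(((cat, actual[cat] - predicted[cat]) for cat in predicted),
              key=lambda pair: abs(pair[1]), reverse=True)

for cat, gap in gaps:
    print(f"{cat:14s} {gap:+d} pp")
```

Writing the predictions first matters: computing gaps against post-hoc estimates lets hindsight erase the blind spot the exercise is meant to expose.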
When experiencing chronic difficulty with complex cognitive tasks across multiple domains at once, test for attention debt before assuming domain-specific skill gaps, because attentional degradation produces domain-general impairment that mimics multiple independent deficiencies.
When giving feedback in code reviews or technical discussions, state observable facts (nesting levels, exit paths, line numbers) before applying evaluative labels to enable problem-solving rather than defensiveness.
In blameless postmortems, frame questions as 'what was the system state at [time]' and 'what information was available' rather than 'who caused this' to shift from evaluation to observation and enable information sharing.
Before acting on snap judgments during debugging or incident response, read system logs and dashboards for five minutes without proposing theories to prevent hypothesis anchoring from corrupting observation.
When receiving critical feedback, insert a physical pause (close laptop, stand up, or wait 90 seconds) before responding to allow prefrontal cortex engagement rather than amygdala-driven reaction.
In emotionally charged messages, draft your reactive response first in a private document, then wait 10 minutes before composing the actual message, using the comparison between versions as data about emotional distortion.
When using AI to draft difficult communications, compare your reactive draft against the AI-generated neutral version to measure where emotions are distorting your message, rather than sending the AI version directly.
Replace evaluative words that smuggle judgment ('interrupted,' 'ignored,' 'slammed') with camera-observable behavior descriptions ('began speaking while I was mid-sentence,' 'has not replied since Tuesday') in feedback conversations.
In technical postmortems, use 'how' questions ('how did the deployment occur') rather than 'why' questions ('why did you skip review') because 'how' elicits description while 'why' elicits defensive justification.
Before high-stakes observations (meetings, decisions, analyses), write down your current mood and strongest expectation about the outcome to make perceptual filters visible for later comparison against actual observations.
When debugging with strong initial hypotheses about root cause, deliberately search logs for evidence that would falsify the hypothesis rather than confirm it, to counteract confirmation bias in data collection.
During incident response, enforce a mandatory 5-minute observation period where team members only report dashboard data and log patterns before anyone proposes a causal theory.
After initial defensive emotional reaction to feedback, name the specific emotion with high granularity ('I notice frustration about the timeline comment, not the technical critique') before responding, to activate prefrontal regulation.
When using implementation intentions to create behavioral pauses, specify the triggering situation at high detail ('If I receive code review feedback challenging my approach...') rather than generically ('If I get criticism...') to increase cue detectability.
Before committing to a hypothesis about a bug's cause, write one sentence completing 'What would I expect to see if I were wrong?' then specifically search for that evidence before continuing the investigation.
When confidence in a technical conclusion exceeds 8/10, treat that high confidence as a trigger to increase scrutiny and deliberately search for disconfirming evidence rather than reducing verification effort.
Before any recurring meeting or code review, spend 5 minutes listing at least five absences: which topics never get discussed, which people never speak, and which failure modes are never mentioned.
When encountering a familiar system or codebase, force yourself to describe what you observe using only concrete sensory details for 10 minutes before applying any evaluative categorization or pattern labels.
Before entering any design conversation or architecture review, write one sentence answering 'What am I assuming is already settled here?' to surface expertise-hidden assumptions that should be re-examined.
When a newcomer to a codebase or system asks a question you cannot immediately answer about code you wrote, treat their question as a defamiliarization signal requiring investigation rather than dismissing it as lack of context.