The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Memory reconstructs prior beliefs rather than faithfully storing them, systematically shifting them toward alignment with known outcomes (hindsight bias). This makes genuine learning from unrecorded predictions impossible and requires external calibration systems to align subjective confidence with objective accuracy.
Humans exhibit systematic overconfidence across domains, with subjective confidence consistently exceeding objective accuracy in three distinct forms—overestimation of absolute performance, overplacement relative to others, and overprecision of confidence intervals—that behave differently across task difficulty levels.
Calibration develops from domain-specific feedback loops that provide rapid, unambiguous outcome information after predictions, and does not transfer automatically across domains.
When you revise a belief three or more times in a short period without it converging, treat that as a diagnostic signal that you are reacting to surface events rather than updating a deeper model.
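A minimal sketch of how this signal could be detected mechanically, assuming beliefs are logged as timestamped confidence values; all names and thresholds here are illustrative assumptions:

```python
from datetime import datetime, timedelta

def is_thrashing(revisions, window_days=7, min_revisions=3, convergence_band=0.1):
    """Flag a belief as thrashing: revised min_revisions or more times
    within the window, without settling near the latest value."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [(t, v) for t, v in revisions if t >= cutoff]
    if len(recent) < min_revisions:
        return False
    values = [v for _, v in recent]
    latest = values[-1]
    # Converging means the last few values all sit inside a narrow band
    # around the latest one.
    settled = all(abs(v - latest) <= convergence_band for v in values[-3:])
    return not settled

# Example: confidence in "project ships this quarter", revised four times.
log = [
    (datetime.now() - timedelta(days=6), 0.8),
    (datetime.now() - timedelta(days=4), 0.3),
    (datetime.now() - timedelta(days=2), 0.7),
    (datetime.now() - timedelta(days=1), 0.2),
]
print(is_thrashing(log))  # True: reacting to surface events, not a deeper model
```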
When using AI meeting transcription, continue taking personal compressed notes during the conversation rather than relying solely on transcripts, then review the AI summary against your notes afterward to identify gaps and improve real-time capture calibration.
Rate your decision quality, comprehension speed, and emotional regulation daily on a 1-5 scale across five consecutive workdays to detect attention debt accumulation before subjective awareness registers the degradation.
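A minimal sketch of the five-day log and a trend check, assuming one set of 1-5 ratings per workday; the field names and the decline threshold are assumptions, not from the source:

```python
# Five consecutive workdays of self-ratings (illustrative data).
days = [
    {"decision_quality": 4, "comprehension_speed": 4, "emotional_regulation": 4},
    {"decision_quality": 4, "comprehension_speed": 3, "emotional_regulation": 4},
    {"decision_quality": 3, "comprehension_speed": 3, "emotional_regulation": 3},
    {"decision_quality": 3, "comprehension_speed": 2, "emotional_regulation": 3},
    {"decision_quality": 2, "comprehension_speed": 2, "emotional_regulation": 2},
]

def slope(values):
    """Least-squares slope per day; negative means decline."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

for metric in days[0]:
    s = slope([d[metric] for d in days])
    if s <= -0.3:  # threshold is an assumption
        print(f"attention debt signal: {metric} declining ({s:+.2f}/day)")
```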
Before reviewing any attention tracking data, write explicit predictions about your time allocation percentages across categories, then calculate prediction-reality gaps to identify your largest attention blind spots.
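A minimal sketch of the gap calculation, assuming time allocation is logged as percentages per category; the categories and numbers are illustrative:

```python
predicted = {"deep work": 50, "meetings": 20, "email/chat": 15, "admin": 15}
actual    = {"deep work": 30, "meetings": 25, "email/chat": 30, "admin": 15}

gaps = {cat: actual[cat] - predicted[cat] for cat in predicted}
# Largest absolute gap = largest attention blind spot.
blind_spot = max(gaps, key=lambda c: abs(gaps[c]))
for cat, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
    print(f"{cat:12s} predicted {predicted[cat]:3d}%  actual {actual[cat]:3d}%  gap {gap:+d}%")
print(f"largest blind spot: {blind_spot} ({gaps[blind_spot]:+d} points)")
```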
After measuring five days of actual focused work time, use the daily average (not the best day or hoped-for number) as your baseline planning capacity for all future scheduling decisions.
During capacity measurement, rate output quality at the end of each work block (strong/acceptable/weak) and identify the cumulative hour mark at which strong output stops; treat that mark as your effective capacity ceiling for planning purposes.
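A minimal sketch covering both measurements above: the five-day average as planning baseline, and the cumulative hour mark where strong output stops. All data are illustrative:

```python
# Baseline: use the average of measured days, not the best day.
focused_hours = [4.5, 3.0, 5.5, 3.5, 4.0]  # five measured days
baseline = sum(focused_hours) / len(focused_hours)
print(f"planning baseline: {baseline:.1f} h/day (not best day: {max(focused_hours)} h)")

# Ceiling: per-block quality ratings for one day as (cumulative hours, rating).
blocks = [(1.5, "strong"), (3.0, "strong"), (4.5, "acceptable"), (6.0, "weak")]
ceiling = 0.0
for hours, rating in blocks:
    if rating != "strong":
        break
    ceiling = hours
print(f"effective capacity ceiling: {ceiling} cumulative hours of strong output")
```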
Track your weekly buffer consumption rate: if you consistently consume more than 80% of the buffer, increase the buffer size; if you consistently consume less than 20%, the buffer can be tightened.
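A minimal sketch of the buffer-sizing rule; the weekly consumption figures are illustrative, while the 80%/20% thresholds come from the atom itself:

```python
weekly_consumption = [0.85, 0.90, 0.82, 0.88]  # fraction of buffer consumed each week

avg = sum(weekly_consumption) / len(weekly_consumption)
if all(c > 0.8 for c in weekly_consumption):
    print(f"avg {avg:.0%}: consistently over 80%, increase buffer size")
elif all(c < 0.2 for c in weekly_consumption):
    print(f"avg {avg:.0%}: consistently under 20%, buffer can be tightened")
else:
    print(f"avg {avg:.0%}: buffer size is roughly right")
```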
When building capacity, increase target output by 10% per week only if quality metrics held steady or improved AND you met the target on at least 4 of 5 days in the previous week.
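A minimal sketch of the weekly progression gate; the data structure is an assumption, while the two conditions come directly from the atom:

```python
last_week = {
    "target": 20.0,        # e.g. hours of focused output
    "days_met": 4,         # days the target was met (out of 5)
    "quality_held": True,  # quality metrics steady or improved
}

# Raise the target 10% only if BOTH conditions held last week.
if last_week["quality_held"] and last_week["days_met"] >= 4:
    new_target = last_week["target"] * 1.10
    print(f"gate passed: raise target to {new_target:.1f}")
else:
    print(f"gate failed: hold target at {last_week['target']:.1f}")
```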
When building capacity from a new baseline, measure your current honest output over at least three representative days, not best days or aspirational targets, to establish an accurate starting point.
When confidence in a technical conclusion exceeds 8/10, treat that high confidence as a trigger to increase scrutiny and deliberately search for disconfirming evidence rather than reducing verification effort.
Before optimizing around a perceived positive pattern, verify through deliberate removal tests whether the pattern persists when suspected causal factors are absent.
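A minimal sketch of a removal test, assuming the outcome can be scored on matched days with and without the suspected factor; all data are made up:

```python
def avg(xs):
    return sum(xs) / len(xs)

with_factor    = [7, 8, 7, 8, 7]  # e.g. focus score on morning-coffee days
without_factor = [7, 7, 8, 7, 8]  # focus score on deliberate no-coffee days

diff = avg(with_factor) - avg(without_factor)
print(f"with minus without: {diff:+.1f}")
# Near-zero difference: the outcome persists when the factor is removed,
# so the factor is not the cause; do not build optimizations around it.
```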
Before building optimization systems around a personal correlation, test whether the correlation survives when you control for potential confounding variables through deliberate experimental variation.
When a pattern appears to reverse across subgroups in your data, disaggregate by relevant context variables (sleep, stress, social setting) before drawing conclusions from the aggregate pattern.
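The reversal described here is Simpson's paradox. A minimal sketch with made-up mood data shows an aggregate trend flipping sign once disaggregated by a context variable (sleep):

```python
def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

# (social_hours, mood, sleep) tuples, entirely illustrative.
data = [(4, 3.0, "low"), (5, 3.5, "low"), (6, 4.0, "low"),
        (1, 5.5, "high"), (2, 6.0, "high"), (3, 6.5, "high")]

xs, ys = [d[0] for d in data], [d[1] for d in data]
print(f"aggregate:  r = {corr(xs, ys):+.2f}")         # negative
for grp in ("low", "high"):
    sub = [d for d in data if d[2] == grp]
    gx, gy = [d[0] for d in sub], [d[1] for d in sub]
    print(f"sleep={grp:4s}: r = {corr(gx, gy):+.2f}")  # positive in each subgroup
```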
Before concluding a pattern is meaningful, verify it survives three independent filters: sample size check (occurrences vs. opportunities), base rate comparison (frequency vs. background rate), and alternative explanation generation (minimum two alternatives).
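A minimal sketch of the three filters as a single check; the thresholds (minimum sample, base-rate multiple) are assumptions, while the three filters come from the atom:

```python
def pattern_survives(occurrences, opportunities, background_rate,
                     alternatives, min_n=20, min_ratio=1.5):
    observed_rate = occurrences / opportunities
    checks = {
        "sample size":  opportunities >= min_n,
        "base rate":    observed_rate >= min_ratio * background_rate,
        "alternatives": len(alternatives) >= 2,  # generated before concluding
    }
    for name, passed in checks.items():
        print(f"{name:12s}: {'pass' if passed else 'FAIL'}")
    return all(checks.values())

# "I sleep badly after late coffee": 6 bad nights in 9 late-coffee days,
# against a 40% background rate of bad nights.
print(pattern_survives(6, 9, 0.40,
                       alternatives=["late screen time", "work stress"]))
# Fails the sample-size filter: 9 opportunities is too few to conclude.
```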
Run a daily urgency log for one week, recording every urgent-feeling demand with timestamp, then scoring each on actual time-sensitivity and impact-if-delayed-two-hours to build calibration data on false urgency rates.
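A minimal sketch of the urgency log and the false-urgency rate it yields; the schema and scoring are illustrative:

```python
log = [
    # (demand, truly time-sensitive?, impact if delayed 2 hours: 0-3)
    ("reply to Slack ping",       False, 0),
    ("prod incident page",        True,  3),
    ("'quick call?' request",     False, 1),
    ("meeting reschedule email",  False, 0),
    ("approve deploy before EOD", True,  2),
]

false_urgent = [d for d, sensitive, impact in log
                if not sensitive and impact <= 1]
rate = len(false_urgent) / len(log)
print(f"false urgency rate: {rate:.0%}")
for d in false_urgent:
    print(f"  felt urgent, wasn't: {d}")
```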
For each important outcome you care about, identify one lagging indicator (the outcome) and pair it with 1-2 leading indicators (upstream behaviors that predict it), tracking both to validate the predictive relationship.
When a leading indicator improves but its paired lagging outcome does not follow within the expected timeframe, treat the leading indicator as broken (gamed, confounded, or non-predictive) and replace it.
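A minimal sketch of validating one leading/lagging pair with a lagged correlation, assuming weekly measurements; the data and the 0.3 threshold are assumptions:

```python
leading = [3, 4, 4, 5, 5, 6, 6, 7]  # workouts per week (upstream behavior)
lagging = [0, 1, 1, 2, 2, 3, 3, 4]  # resting-HR improvement (outcome)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

# Lag the outcome by one week so behavior in week t is paired with
# the outcome in week t+1.
r = corr(leading[:-1], lagging[1:])
print(f"lagged correlation: {r:+.2f}")
if r < 0.3:  # threshold is an assumption
    print("leading indicator looks broken: gamed, confounded, or non-predictive")
```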
When making frequency or probability estimates, pause and ask: 'Am I estimating actual frequency, or how easily I can picture this?' then look up the base rate before deciding.
Before any consequential decision, populate two mandatory fields: the recent event driving the current feeling, and the base rate or historical trend across the full relevant time window.
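A minimal sketch of the two mandatory fields as a decision-log template; the dataclass and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class DecisionEntry:
    decision: str
    recent_driver: str  # the recent event driving the current feeling
    base_rate: str      # base rate / historical trend, full time window

entry = DecisionEntry(
    decision="move savings out of index fund",
    recent_driver="market dropped 4% this week",
    base_rate="fund returned ~7%/yr over 15 years, including worse drawdowns",
)
print(entry)
```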
When a vivid individual case makes you feel certain about probability, explicitly ask 'what is the actual frequency of this event in the relevant population?' before forming any judgment.
After accumulating 15-20 judgments in the same domain, analyze whether errors cluster directionally (bias requiring correction factor) or scatter randomly (noise requiring aggregation).
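A minimal sketch of the bias-versus-noise diagnosis; the error data and the mean-versus-spread decision rule are illustrative assumptions:

```python
# Errors from 15 logged judgments in one domain: estimate minus actual
# (minutes over on task estimates).
errors = [+8, +5, +11, +3, +9, +6, +12, +4, +7, +10, +2, +9, +5, +8, +6]

n = len(errors)
mean_err = sum(errors) / n
sd = (sum((e - mean_err) ** 2 for e in errors) / (n - 1)) ** 0.5

# Large mean error relative to spread: errors cluster directionally (bias).
# Mean near zero relative to spread: errors scatter randomly (noise).
if abs(mean_err) > sd:
    print(f"bias: apply a correction factor of {mean_err:+.1f}")
else:
    print(f"noise: mean {mean_err:+.1f} within spread {sd:.1f}; aggregate estimates")
```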