The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Conduct a two-week bias journal recording significant judgments with confidence levels, then categorize errors by direction and type to build your personal bias profile.
For each identified bias in your profile, write a specific pre-correction question or procedure to execute before acting on judgments in that domain.
After identifying that you are systematically overconfident on timelines (tasks take X% longer than your first estimate), multiply your initial timeline estimates by (1 + X/100) before stating them publicly.
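A minimal sketch of this correction, assuming the overrun X has already been measured from past tasks; the function name and parameters are illustrative, not from the source:

```python
def corrected_timeline(initial_estimate_days: float, overrun_pct: float) -> float:
    """Scale an initial timeline estimate by a measured overrun factor.

    overrun_pct is the historically observed X: if similar tasks took
    40% longer than first estimated, pass 40.
    """
    return initial_estimate_days * (1 + overrun_pct / 100)
```

For example, a 10-day gut estimate with a measured 40% historical overrun becomes a 14-day public estimate.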
After four weeks of belief tracking, examine whether beliefs barely moved despite evidence (conservatism) or swung dramatically on single data points (base rate neglect) to identify domain-specific updating patterns.
Record decision context at the moment of commitment using five elements: (1) decision statement, (2) forces/constraints/emotions active at choice point, (3) expected consequences with timeline, (4) confidence level 1-10, (5) review trigger date—before hindsight bias can rewrite your reasoning.
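The five-element record above can be captured as a fixed structure so no field is skipped at commitment time; this is a hypothetical sketch, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    # The five elements captured at the moment of commitment
    decision: str               # (1) decision statement
    forces: list[str]           # (2) forces/constraints/emotions at choice point
    expected_consequences: str  # (3) expected consequences with timeline
    confidence: int             # (4) confidence level, 1-10
    review_date: date           # (5) review trigger date

    def __post_init__(self):
        if not 1 <= self.confidence <= 10:
            raise ValueError("confidence must be between 1 and 10")
```

Writing the record at commitment, before the outcome is known, is what makes it usable later as a hindsight-proof reference.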
When reviewing a past decision, read the original context record before evaluating the outcome, because evaluating outcome first allows hindsight bias to contaminate your assessment of whether the reasoning was sound.
Set the threshold for decision context documentation at any choice where you deliberated between options for more than sixty seconds, because if you considered alternatives consciously, the reasoning is worth preserving against memory reconstruction.
Before evaluating any past decision, reconstruct the information environment that existed at decision time using contemporaneous records rather than memory, then evaluate the decision against that environment only.
For information arriving through multiple transmission steps (forwarded quotes, summarized studies, dashboard metrics), multiply the confidence value at each transmission step rather than treating endpoint confidence as equal to source confidence.
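One way to sketch the multiplication rule, assuming each transmission step is assigned a fidelity estimate between 0 and 1 (the function and its parameters are assumptions for illustration):

```python
def endpoint_confidence(source_confidence: float, step_fidelities: list[float]) -> float:
    """Discount confidence across each transmission step.

    Each fidelity is the estimated probability (0-1) that the step
    preserved the claim faithfully: a forwarded quote, a summary of a
    study, a dashboard aggregation, and so on.
    """
    conf = source_confidence
    for fidelity in step_fidelities:
        conf *= fidelity
    return conf
```

A 0.9-confidence source passed through two 90%- and 80%-faithful steps yields roughly 0.65 at the endpoint, not 0.9.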
Perform schema inspection through a five-step audit: (1) list actual operating rules in a domain, (2) source each rule's origin, (3) identify one success and one failure case per rule, (4) rate confidence in each rule, (5) compare confidence to evidence quality.
Ensure that highest-priority items constitute less than 20% of total backlog; if more items are marked critical, recalibrate threshold definitions to restore differentiation.
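The 20% ceiling can be checked mechanically; this sketch assumes priorities are stored as simple labels, which is an illustrative choice rather than a prescribed format:

```python
def needs_recalibration(priorities: list[str], top_label: str = "critical",
                        threshold: float = 0.20) -> bool:
    """Return True when top-priority items reach or exceed the 20% ceiling,
    signaling that threshold definitions need recalibration."""
    if not priorities:
        return False
    share = priorities.count(top_label) / len(priorities)
    return share >= threshold
```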
Treat a validation log with no disconfirmations as a warning signal of selective documentation rather than validation success, because unbiased testing inevitably produces some surprises.
When evaluating confidence in a belief, count only genuinely independent lines of evidence—sources that do not share origins, methods, or assumptions—rather than total source count, because correlated sources compound confidence on a single foundation.
When using indirect evidence, assess whether indicators are genuinely independent by checking if they could agree for reasons other than the schema being true—if all evidence shares a common causal source, it counts as single evidence despite multiple data points.
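The counting rule in the two items above can be sketched by collapsing sources that trace to the same causal root; the `root` field is a hypothetical annotation you would add to your own records:

```python
def independent_evidence_count(sources: list[dict]) -> int:
    """Count distinct lines of evidence, collapsing sources that share an
    origin, method, or assumption into a single line.

    Each source dict carries a 'root' key naming its common causal
    source (an assumed field; adapt to however you track evidence).
    """
    return len({s["root"] for s in sources})
```

Three articles all traceable to one press release count as one line of evidence, not three.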
After accumulating multiple validation records, analyze them as a set to identify recurring patterns in how your schemas fail—same blind spot, same overconfidence direction, same boundary condition—that no single test reveals.
When validating schemas about personal capability or performance, include external observer ratings alongside self-assessment to detect systematic overconfidence blind spots that introspection cannot reveal.
Set prediction failure thresholds as numeric ratios (X failures out of Y recent predictions) for each schema before observing prediction outcomes to trigger review when the threshold is crossed.
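A minimal sketch of a pre-committed X-out-of-Y trigger over a rolling window; the class and its interface are illustrative assumptions:

```python
from collections import deque

class SchemaMonitor:
    """Track recent prediction outcomes for one schema and flag review
    when a pre-committed failure ratio (X failures out of Y recent
    predictions) is crossed."""

    def __init__(self, max_failures: int, window: int):
        self.max_failures = max_failures
        self.outcomes = deque(maxlen=window)  # True = prediction held

    def record(self, prediction_held: bool) -> bool:
        """Log one outcome; return True when review is triggered."""
        self.outcomes.append(prediction_held)
        failures = sum(1 for held in self.outcomes if not held)
        return failures >= self.max_failures
```

Committing to `max_failures` and `window` before observing outcomes is the point: the threshold cannot be quietly relaxed after failures start arriving.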
Write schema evolution log entries with four mandatory fields (date; the schema affected, in its original language rather than your current interpretation; the specific triggering evidence or encounter; and the replacement belief) to defeat hindsight bias through fixed external records.
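An append-only plain-text log is one way to make the record fixed and external; the function, path argument, and field separator here are all illustrative assumptions:

```python
from datetime import date

def log_schema_change(path: str, schema_original_wording: str,
                      triggering_evidence: str, replacement_belief: str) -> None:
    """Append a four-field entry: date | original schema wording |
    triggering evidence | replacement belief.

    Append-only writing means hindsight cannot silently rewrite
    earlier entries.
    """
    entry = (f"{date.today().isoformat()} | {schema_original_wording} | "
             f"{triggering_evidence} | {replacement_belief}\n")
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry)
```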
In your schema inventory, require behavioral proof by identifying three decisions from the last month that each schema governed—if you cannot find three, reclassify the schema as aspirational rather than operational.
When planning task duration, deliberately switch from inside-view scenario construction to outside-view base-rate consultation by asking 'how long have similar tasks taken?' instead of 'how long will this take?'
When your explanation of your own behavior differs from an external observer's explanation by more than surface framing, treat the divergence as high-confidence evidence of a metacognitive blind spot requiring investigation.
Start behavioral triggers with a more conservative threshold than feels right, aiming for 3-5 activations per day rather than 30, to build trust through relevance before expanding sensitivity.
Log each trigger firing for one week as true positive or false positive, then adjust the threshold only after accumulating empirical data rather than based on single instances.
Add qualifying guard clauses to behavioral triggers when false positives exceed 30% of total activations, inserting context checks that must pass before the main action executes.
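The three trigger-tuning steps above can be sketched together: a false-positive rate computed from the week's log, and a guard clause that activates once that rate crosses 30%. All names and the boolean log format are assumptions for illustration:

```python
def false_positive_rate(log: list[bool]) -> float:
    """log entries: True = true-positive firing, False = false positive."""
    if not log:
        return 0.0
    return log.count(False) / len(log)

def fire_trigger(context_ok: bool, main_condition: bool, fp_rate: float,
                 guard_threshold: float = 0.30) -> bool:
    """Fire on the main condition alone; once false positives exceed
    the 30% threshold, also require the guard clause to pass."""
    if fp_rate > guard_threshold:
        return main_condition and context_ok
    return main_condition
```

The guard is added from accumulated log data, not from a single annoying misfire, matching the one-week observation rule above.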