The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Start probability estimates from the base rate, never from an implicit 50/50 prior, and adjust in proportion to evidence strength to avoid systematically overweighting individual pieces of evidence.
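Anchoring on the base rate and adjusting by evidence strength is Bayes' rule in odds form. A minimal sketch; the screening-test numbers (1% base rate, 90% sensitivity, 9% false-positive rate) are hypothetical:

```python
def posterior(base_rate, p_evidence_if_true, p_evidence_if_false):
    """Anchor on the base rate, then shift by the evidence's likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)          # start from the base rate, not 50/50
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A 1% base rate with a fairly strong positive signal (likelihood ratio 10)
# still lands near 9%, far below what an implicit 50/50 prior would suggest.
print(round(posterior(0.01, 0.90, 0.09), 3))  # 0.092
```

The odds form makes the adjustment explicit: evidence multiplies the prior odds by its likelihood ratio, so weak evidence moves a rare-event estimate only slightly.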
When extending judgment or confidence from a high-calibration domain to a low-calibration domain, explicitly downgrade your confidence level and apply structured meta-cognitive protocols rather than relying on the portable feeling of expertise.
Frame pre-mortem exercises to legitimize dissent by making failure the official premise, removing social barriers to expressing doubt that standard planning meetings suppress.
For every significant belief you hold, explicitly specify the observation or evidence that would cause you to abandon it; if you cannot specify falsification conditions, treat the belief as epistemically suspect.
Generate at least two genuinely plausible alternative hypotheses for any phenomenon before testing, then design experiments that can exclude hypotheses rather than merely confirm your preferred one.
Use AI systems as externalized disconfirmation generators by explicitly instructing them to attack your reasoning and construct the strongest counterarguments, then engage seriously with outputs that produce genuine surprise.
Collect structured peer feedback from five diverse, independent observers using identical questions, then treat convergent signals (patterns appearing in multiple responses) as higher-confidence blind spot detections than any single observation.
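The convergence test can be mechanized once answers are coded into themes. A sketch; the observers, theme labels, and three-of-five threshold are illustrative assumptions:

```python
from collections import Counter

def convergent_signals(responses, threshold=3):
    """Flag themes that appear in at least `threshold` independent responses."""
    counts = Counter()
    for themes in responses:
        counts.update(set(themes))  # count each observer at most once per theme
    return {theme: n for theme, n in counts.items() if n >= threshold}

# Five observers, identical questions, answers pre-coded into themes (hypothetical):
responses = [
    {"interrupts others", "clear writing"},
    {"interrupts others", "misses deadlines"},
    {"interrupts others", "clear writing"},
    {"misses deadlines"},
    {"interrupts others"},
]
print(convergent_signals(responses))  # {'interrupts others': 4}
```

Only the theme four of five observers independently report survives the threshold; single mentions stay visible in the counts but are treated as lower-confidence signals.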
Design feedback requests to focus on specific, observable behaviors rather than character judgments, and solicit them rather than waiting for spontaneous feedback, to shift attention from self-level threat to task-level calibration.
Make predictions specific enough to score objectively rather than vague enough to be unfalsifiable.
Analyze calibration separately for different domains because bias susceptibility is domain-specific rather than general.
Decompose prediction accuracy into calibration (do your probabilities match reality?) and resolution (can you distinguish probable from improbable?) to diagnose different types of forecasting errors.
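A standard way to compute this split is the Murphy decomposition of the Brier score, where the reliability term measures calibration error and the resolution term measures discrimination. A sketch that groups records by identical stated probabilities:

```python
from collections import defaultdict

def murphy_decomposition(forecasts, outcomes):
    """Brier score = reliability - resolution + uncertainty (Murphy decomposition)."""
    n = len(forecasts)
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)                          # group by stated probability
    base = sum(outcomes) / n                       # overall hit rate
    reliability = sum(len(os) * (f - sum(os) / len(os)) ** 2
                      for f, os in bins.items()) / n    # calibration error (lower is better)
    resolution = sum(len(os) * (sum(os) / len(os) - base) ** 2
                     for os in bins.values()) / n       # discrimination (higher is better)
    return reliability, resolution, base * (1 - base)
```

High reliability error with good resolution means your probabilities are mislabeled but informative; near-zero reliability with near-zero resolution means hedged, uninformative forecasts. The two failure modes call for different fixes.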
Update beliefs incrementally in proportion to the diagnostic value of evidence rather than its emotional intensity.
Translate probabilistic reasoning into natural frequencies (counts out of a reference class) rather than abstract percentages to align with evolutionarily grounded cognitive mechanisms.
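For instance, "1% base rate, 90% sensitivity, 9% false-positive rate" becomes concrete counts out of 1,000 people. The helper and its screening numbers are hypothetical:

```python
def as_natural_frequencies(base_rate, sensitivity, false_positive_rate, population=1000):
    """Restate a probability problem as whole-number counts in a reference class."""
    affected = round(population * base_rate)
    true_positives = round(affected * sensitivity)
    false_positives = round((population - affected) * false_positive_rate)
    return (f"Of {population} people, {affected} have the condition and "
            f"{true_positives} of them test positive; {false_positives} of the "
            f"{population - affected} others also test positive, so only "
            f"{true_positives} of {true_positives + false_positives} positives are real.")

print(as_natural_frequencies(0.01, 0.90, 0.09))
```

The counts version ("9 of 98 positives are real") tends to be far easier to reason about than the equivalent conditional probabilities.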
Before updating a belief, explicitly estimate both the direction and magnitude of the update in writing to prevent conservatism and base rate neglect.
Track not just prediction accuracy but also the frequency and magnitude of belief updates to detect systematic conservatism or overreaction patterns.
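A minimal way to compute those update statistics from one belief's probability history; the trajectory values are hypothetical:

```python
def update_stats(history):
    """Count belief updates and their mean absolute size from a probability history."""
    deltas = [abs(b - a) for a, b in zip(history, history[1:])]
    return len(deltas), sum(deltas) / len(deltas)

# Many tiny nudges over time suggest conservatism; a few large swings
# suggest overreaction to salient evidence.
n_updates, mean_step = update_stats([0.50, 0.55, 0.52, 0.80])
print(n_updates, round(mean_step, 2))  # 3 0.12
```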
Build a personal bias profile that identifies which specific biases operate most strongly in which domains rather than treating bias awareness as general knowledge.
Distinguish between systematic bias (errors in a consistent direction) and noise (random scatter) because they require fundamentally different correction strategies.
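Given a history of estimates and realized values, the two components separate cleanly: bias is the mean signed error (correct by shifting estimates), noise is the scatter around that bias (correct by averaging independent judgments or structuring the process). Illustrative numbers:

```python
from statistics import mean, stdev

def bias_and_noise(estimates, actuals):
    """Mean signed error (bias) and sample standard deviation of errors (noise)."""
    errors = [e - a for e, a in zip(estimates, actuals)]
    return mean(errors), stdev(errors)

# Consistently ~10 units high but tightly clustered:
# a bias problem (shift the estimates), not a noise problem.
bias, noise = bias_and_noise([110, 108, 112, 109, 111], [100.0] * 5)
print(round(bias, 2), round(noise, 2))  # 10.0 1.58
```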
Use 'consider the opposite' as a deliberate practice before finalizing judgments to counteract confirmation bias by forcing processing of disconfirming evidence.
Tag beliefs and schemas in your knowledge system with explicit confidence levels and evidence bases, then review and update these calibration tags over time.
Before significant decisions, state your confidence level explicitly along with the three strongest reasons you could be wrong to institutionalize calibrated thinking.
Separate prediction accuracy from decision quality by building contingency plans for multiple outcomes rather than assuming your most likely prediction will occur.
Record your mental and physical state alongside predictions to detect correlations between physiological/emotional states and systematic prediction errors.
When communicating information across time or people, explicitly record the context that shaped your interpretation to prevent systematic misunderstanding by future interpreters who will lack that context.
When switching between cognitive contexts, insert a deliberate transition protocol that closes the old context and loads the new one, rather than assuming instantaneous context transfer.