The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Build self-efficacy for independent judgment through accumulated mastery experiences—small decisions where you evaluate evidence yourself and observe outcomes—rather than through intellectual agreement with the concept of self-authority.
Deliberately separate the content of a recommendation from characteristics of its source by asking whether you would find the recommendation compelling if it came from a low-status source, exposing when you are responding to peripheral cues rather than substantive merit.
Before adopting an external recommendation, apply an accountability check by asking whether you would own the decision's consequences as your own or defer blame to the recommender, using the answer to diagnose whether you have retained or surrendered authority.
Recognize authority transfer through its felt signature—dissolving objections, premature agreement, bypassed evidence evaluation, or diffused responsibility—and use these somatic markers as triggers for metacognitive intervention before compliance completes.
Deliberately introduce processing friction when consulting AI by asking the same critical questions you would ask a junior colleague—what's the source, what assumptions underlie this, what's the strongest counterargument—rather than allowing fluency and confidence to bypass evaluation.
Seek authentic disagreement from people who genuinely hold contrary views rather than assigning devil's advocate roles, because only genuine dissent triggers the broader cognitive processing that improves judgment quality.
Break unanimity in conformity situations by being the first to dissent, because a single visible example of non-conformity reduces perceived social cost for everyone observing.
Block your measured peak attention hours on your calendar as recurring non-negotiable focus time and assign your most cognitively demanding tasks to those blocks, because cognitive performance varies 7-40% across the day and high-stakes decisions made during depletion are measurably worse.
Start reclaiming authority in domains with the lowest combined consequence severity and social friction to build mastery experiences before tackling high-stakes domains.
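The ordering implied above can be sketched as a simple scoring pass. The domain names and the 1–5 scales below are illustrative assumptions, not part of the curriculum:

```python
# Rank domains for reclaiming authority: start where combined consequence
# severity and social friction is lowest, to build mastery experiences first.
# Domains and 1-5 scores are hypothetical placeholders.
domains = {
    "lunch choices":    {"severity": 1, "friction": 1},
    "personal reading": {"severity": 1, "friction": 2},
    "household budget": {"severity": 3, "friction": 2},
    "career moves":     {"severity": 5, "friction": 4},
}

def reclaim_order(domains):
    """Return domain names sorted by severity + friction, lowest first."""
    return sorted(domains, key=lambda d: domains[d]["severity"] + domains[d]["friction"])

print(reclaim_order(domains))
# lowest-stakes domains come first; high-stakes domains come last
```

The combined score is a deliberate simplification; any monotonic combination of the two factors preserves the principle that low-stakes domains come first.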
Take clear I-positions rather than we-positions or appeals to external authority to make your thinking visible and updatable.
Maintain emotional presence and contact with relational partners during disagreement rather than withdrawing, distinguishing differentiation from emotional cutoff.
Distinguish being influenced (changing your mind in response to a persuasive argument) from being controlled (changing your position to relieve emotional discomfort with disapproval).
When you possess domain-specific expertise relevant to a decision, voice your dissenting assessment even when it conflicts with hierarchical authority, because withholding situated knowledge creates epistemic fragility and constitutes professional negligence.
Frame professional dissent as questions rather than assertions when building credibility, because questions activate information-sharing while reducing social cost and allowing decision-makers to reach conclusions themselves.
Build authority to dissent through demonstrated competence and calibrated prediction tracking, because self-authority without proven expertise is indistinguishable from arrogance and carries no credible weight.
Replace algorithmically curated feeds with deliberately selected information sources, because algorithmic curation optimizes for engagement rather than accuracy and systematically filters input to maximize emotional reactivity.
Audit your beliefs for algorithmic origins by tracing each position to its informational source, because beliefs acquired through engineered exposure rather than deliberate inquiry have not been subjected to sovereign epistemic standards.
When using AI systems, maintain explicit separation between input generation (AI-assisted) and judgment synthesis (human-retained), because outsourcing the integration function constitutes epistemic abdication regardless of input quality.
Declare your specific intention before encountering competing demands to pre-load attentional filtering without ongoing willpower expenditure.
Conduct periodic authority audits by listing every source shaping beliefs in a domain and evaluating each against basis of trust, scope of expertise, and verification recency, because authority delegations accumulate unconsciously and drift without detection.
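One way to make such an audit concrete is a small ledger of delegations with a staleness check. The entries, field names, and the 180-day re-verification threshold below are assumptions for illustration:

```python
from datetime import date, timedelta

# A minimal authority-audit ledger: each entry records who shapes a belief,
# the basis of trust, the scope of their expertise, and when that trust was
# last verified. Entries and the 180-day threshold are illustrative.
STALE_AFTER = timedelta(days=180)

delegations = [
    {"source": "family doctor", "basis": "credentials + track record",
     "scope": "general medicine", "last_verified": date(2024, 1, 10)},
    {"source": "finance podcast", "basis": "fluency, never checked",
     "scope": "unclear", "last_verified": date(2022, 6, 1)},
]

def stale_delegations(entries, today):
    """Flag delegations whose trust has not been re-verified recently."""
    return [e["source"] for e in entries
            if today - e["last_verified"] > STALE_AFTER]

print(stale_delegations(delegations, date(2024, 3, 1)))
# -> ['finance podcast']
```

Listing the basis and scope explicitly is what surfaces unconscious drift: a delegation with basis "fluency, never checked" and scope "unclear" fails the audit on inspection even before the recency check does.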
When large language models express 90%+ linguistic confidence, independently verify claims in consequential domains, because the gap between expressed confidence and actual accuracy represents systematic miscalibration that human trust heuristics cannot detect.
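This rule reduces to a simple gate: route a claim to independent verification whenever stated confidence is high and the domain is consequential. The domain list and the 0.9 threshold are assumptions, not fixed values from the text:

```python
# Fluent, confident phrasing is not evidence of accuracy, so high-confidence
# claims in consequential domains still get checked independently.
# The domain set and threshold below are illustrative assumptions.
CONSEQUENTIAL = {"medical", "legal", "financial"}

def needs_verification(domain, expressed_confidence):
    """High stated confidence in a consequential domain still gets verified."""
    return domain in CONSEQUENTIAL and expressed_confidence >= 0.9

print(needs_verification("medical", 0.95))  # True: verify before acting
print(needs_verification("trivia", 0.95))   # False: low stakes
```

The point of encoding the gate is that it cannot be talked out of its decision by the model's tone, which is exactly the miscalibration the principle warns about.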
Form your own judgment before consulting AI systems, then compare rather than defer, because maintaining independent assessment as a non-negotiable baseline prevents automation complacency and preserves epistemic authority.
Calibrate your internal authority voice by recording past predictions with confidence levels, checking outcomes, and adjusting future confidence based on your actual track record rather than emotional intensity or social pressure.
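A minimal sketch of such a prediction log, assuming binary outcomes. The records are invented examples, and the Brier score is one standard calibration measure, chosen here for illustration rather than mandated by the text:

```python
# Record predictions with stated confidence, check outcomes, and compare
# stated confidence against the observed hit rate. Records are hypothetical.
log = [
    {"claim": "project ships by Q2",   "confidence": 0.9, "came_true": True},
    {"claim": "vendor misses deadline", "confidence": 0.9, "came_true": False},
    {"claim": "test suite passes",      "confidence": 0.9, "came_true": True},
    {"claim": "hire accepts offer",     "confidence": 0.6, "came_true": True},
]

def calibration_gap(log, confidence):
    """Stated confidence minus observed hit rate at that confidence level.
    Positive means overconfident; adjust future confidence downward."""
    hits = [r["came_true"] for r in log if r["confidence"] == confidence]
    return confidence - sum(hits) / len(hits)

def brier_score(log):
    """Mean squared error between confidence and outcome (0 = perfect)."""
    return sum((r["confidence"] - r["came_true"]) ** 2 for r in log) / len(log)

print(round(calibration_gap(log, 0.9), 4))  # positive => overconfident at 90%
```

The adjustment loop the principle describes is just: when the gap at a confidence level is persistently positive, state lower confidence at that level going forward, regardless of how certain the claim feels.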
When AI outputs contradict your examined analysis, evaluate them by asking whether they present unconsidered evidence or identify verifiable reasoning errors, treating model fluency as orthogonal to epistemic authority.