The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Insert mandatory pause points in checklist execution where all activity stops and verification becomes the primary task, rather than running checklists as background processes.
For each checklist item, require a physical action (touching the control) or verbal callout (saying the condition aloud) to prevent autopilot execution where the ritual completes without actual verification.
For any pre-flight check item you can complete in under 5 seconds without pausing, treat that as evidence of ritual execution, not genuine verification—pause and locate observable evidence before marking the item complete.
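The pause-and-verify discipline above can be sketched as a checklist runner that refuses to mark an item complete without recorded evidence or when the item was "completed" too fast to have involved real verification. This is a minimal illustrative model, not an implementation from the source: the function name, the `verify` callback, and the `min_seconds` threshold are all assumptions.

```python
import time

def run_checklist(items, verify, min_seconds=5.0):
    """Execute a checklist with mandatory pause points.

    `items` is a list of item names; `verify` is a callback that must
    return a non-empty evidence string (e.g. the control that was
    physically touched, or the condition called out aloud).
    Items finished faster than `min_seconds` are rejected as ritual
    execution rather than genuine verification. (Hypothetical model.)
    """
    results = {}
    for item in items:
        start = time.monotonic()
        evidence = verify(item)  # all other activity stops here
        elapsed = time.monotonic() - start
        if not evidence:
            raise ValueError(f"{item}: no observable evidence recorded")
        if elapsed < min_seconds:
            raise ValueError(
                f"{item}: completed in {elapsed:.1f}s; "
                "pause and locate evidence before marking complete")
        results[item] = evidence
    return results
```

The key design choice is that completion requires a positive artifact (the evidence string), so the checklist cannot silently finish in the background.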
Design verification as three independent layers—continuous signals (daily metrics), periodic samples (weekly/monthly spot-checks), and infrequent structural audits (quarterly full reviews)—with each layer optimized for different failure detection at different resource costs.
Make verification checkpoints transparent to delegates by communicating what you will check, when you will check it, and what standards apply, because hidden monitoring functions as surveillance while transparent verification functions as professional collaboration.
When a verification signal degrades, escalate to deeper sampling; when sampling reveals a pattern, trigger a structural audit; when an audit reveals a structural problem, modify the delegation itself—this creates a cascading response protocol where each verification layer can activate the next.
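The three-layer design and its cascading response protocol can be modeled as a small state machine: a finding at one layer activates the next deeper layer, terminating at redesign of the delegation itself. This is a hedged sketch; the `Layer` enum and `escalate` function are hypothetical names introduced for illustration.

```python
from enum import Enum

class Layer(Enum):
    SIGNAL = 1    # continuous signals: daily metrics
    SAMPLE = 2    # periodic samples: weekly/monthly spot-checks
    AUDIT = 3     # infrequent structural audits: quarterly reviews
    REDESIGN = 4  # modify the delegation itself

def escalate(layer, finding):
    """One step of the cascading response protocol.

    A finding (degraded signal, sampled pattern, structural problem)
    activates the next deeper layer; no finding keeps the current layer.
    """
    if not finding:
        return layer
    order = [Layer.SIGNAL, Layer.SAMPLE, Layer.AUDIT, Layer.REDESIGN]
    i = order.index(layer)
    return order[min(i + 1, len(order) - 1)]
```

Each layer stays cheap on its own; cost is only incurred deeper in the cascade when a shallower layer has produced evidence that warrants it.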
Extend verification intervals when a delegate produces consistent quality outputs over time, and tighten intervals when errors surface, treating trust as a dynamic variable calibrated by accumulated evidence rather than a fixed initial condition.
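Treating trust as a dynamic variable can be sketched as an interval-update rule: consistent quality widens the checking interval, any error snaps it back sharply. The specific policy below (doubling on success, quartering on error, with floor and ceiling bounds) is an assumed example, not a rule stated in the source.

```python
def next_check_interval(current_days, outcome, floor=1, ceiling=90):
    """Calibrate the verification interval from accumulated evidence.

    Hypothetical policy: a clean outcome doubles the interval up to a
    ceiling; an error shrinks it toward the floor. Trust widens slowly
    and contracts quickly.
    """
    if outcome == "ok":
        return min(current_days * 2, ceiling)
    return max(current_days // 4, floor)
```

The asymmetry (slow extension, fast tightening) reflects the source's point that trust is earned by accumulated evidence but revoked on first contrary evidence.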
For high-stakes AI outputs, adopt a three-tier verification intensity: skim for low-stakes brainstorming, spot-check key claims for medium-stakes communications, and verify every substantive claim for high-stakes published or production content.
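The three-tier intensity rule above is essentially a lookup from stakes to verification policy. A minimal sketch, with the table contents taken from the atom and an assumed fail-safe default for unrecognized stakes:

```python
VERIFICATION_TIERS = {
    "low": "skim",                              # brainstorming
    "medium": "spot-check key claims",          # communications
    "high": "verify every substantive claim",   # published/production
}

def verification_policy(stakes):
    """Map the stakes of an AI output to a verification intensity.

    Unknown stakes default to the strictest tier (an assumption:
    when in doubt, over-verify rather than under-verify).
    """
    return VERIFICATION_TIERS.get(stakes, VERIFICATION_TIERS["high"])
```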
Before acting on AI-generated conclusions, apply the defense test: 'Could I defend this conclusion without the AI's output? Do I understand the reasoning well enough to identify where it might be wrong?'—if not, do the cognitive work before proceeding.
After encountering AI recommendations that contradict your careful analysis, apply three filters in sequence: Does this present unconsidered evidence? Does this identify verifiable reasoning errors? Does this merely state a different conclusion without showing its work? Only the first two warrant revision.
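The three filters form a short-circuiting decision rule: the first two warrant revising your conclusion, the third does not. A minimal sketch, assuming a hypothetical `contradiction` record with boolean fields named for the filters:

```python
def should_revise(contradiction):
    """Decide whether a contradicting AI recommendation warrants
    revising your own conclusion.

    `contradiction` is a dict with hypothetical boolean fields.
    Filters are applied in sequence; a bare different conclusion
    without shown work never triggers revision.
    """
    if contradiction.get("presents_unconsidered_evidence"):
        return True
    if contradiction.get("identifies_verifiable_reasoning_error"):
        return True
    # Filter three: a different conclusion with no shown work.
    return False
```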