Apply the defense test to AI conclusions: 'Could I defend this without the AI? Can I identify where it might be wrong?'
Before acting on AI-generated conclusions, apply the defense test: 'Could I defend this conclusion without the AI's output? Do I understand the reasoning well enough to identify where it might be wrong?' If not, do the cognitive work before proceeding.
Why This Is a Rule
AI-generated conclusions arrive with a dangerous property: they sound authoritative regardless of accuracy. Unlike a human advisor, whose hesitations, qualifications, and uncertainty cues signal their confidence level, AI output maintains uniform confidence whether the conclusion is well-founded or hallucinated. This makes the human's own understanding the last line of epistemic defense.
The defense test has two parts, each catching a different failure.

"Could I defend this conclusion without the AI?" tests whether you independently understand the reasoning. If you couldn't arrive at or justify the conclusion on your own, you're not using AI as an amplifier; you're using it as an oracle. Oracle-mode usage is appropriate for low-stakes tasks (see the companion rule "Scale AI output verification to stakes: skim for brainstorming, spot-check for communications, verify every claim for publication") but dangerous for consequential decisions, because you can't evaluate what you don't understand.

"Can I identify where it might be wrong?" tests whether you understand the conclusion's vulnerability surface. Every conclusion has weak points: assumptions that might not hold, data that might be missing, reasoning steps that might be flawed. If you can't identify these, you can't meaningfully verify the conclusion.
Failing either part means the AI's conclusion may be right, but you can't tell whether it is. The appropriate response is to do the cognitive work (understand the reasoning, identify the weaknesses) before acting.
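Stated as logic, the test is a simple conjunction: both parts must hold. A minimal sketch in Python, with the function name and parameters invented here for illustration:

```python
def defense_test(can_defend_without_ai: bool, known_failure_modes: list[str]) -> bool:
    """Both parts must hold before acting on an AI-generated conclusion.

    Part 1: you could defend the conclusion without citing the AI's output.
    Part 2: you can name specific ways it might be wrong (shaky assumptions,
    missing data, flawed reasoning steps).
    """
    return can_defend_without_ai and len(known_failure_modes) > 0
```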
When This Fires
- Before acting on any AI-generated conclusion for a consequential decision
- When AI output "feels right" but you couldn't reconstruct the reasoning independently
- When using AI analysis for decisions that affect others, commit resources, or are irreversible
- Complements the stakes-based verification rule ("Scale AI output verification to stakes: skim for brainstorming, spot-check for communications, verify every claim for publication") by adding the epistemic independence test
Common Failure Mode
Treating the AI as an oracle: "The AI says we should do X → let's do X." If you can't explain why X is right, identify the assumptions X rests on, and articulate where X might fail, you're outsourcing judgment rather than augmenting it. AI should help you think better, not replace thinking.
The Protocol
1. When AI generates a conclusion you're about to act on, pause.
2. Part 1, the defense test: "Could I defend this conclusion to a skeptic without referencing the AI? Do I understand the reasoning chain well enough to explain it?" If no → the AI understood something you don't. Before acting, trace the reasoning yourself until you can defend it.
3. Part 2, the vulnerability test: "Can I identify at least two specific ways this conclusion might be wrong: assumptions it relies on, data it might be missing, edge cases where it fails?" If no → you don't understand the conclusion's limits. Before acting, probe the weaknesses until you can articulate them.
4. If both tests pass → act. You understand and can evaluate the AI's contribution.
5. If either test fails → do the cognitive work before proceeding. The cost of understanding is always less than the cost of acting on a conclusion you can't evaluate.
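For readers who want the gate spelled out, here is a minimal sketch of the protocol in Python. All names (DefenseTestResult, gate_conclusion) and the example conclusion are hypothetical, invented for illustration; the point is the control flow: neither test passing alone is enough.

```python
from dataclasses import dataclass, field

@dataclass
class DefenseTestResult:
    """What you can articulate about an AI-generated conclusion before acting."""
    can_defend_to_skeptic: bool  # Part 1: defensible without citing the AI?
    failure_modes: list[str] = field(default_factory=list)  # Part 2: specific ways it could fail

def gate_conclusion(conclusion: str, result: DefenseTestResult) -> str:
    """Act only if both tests pass; otherwise name the cognitive work left."""
    if not result.can_defend_to_skeptic:
        # Part 1 failed: the AI understood something you don't.
        return "Do not act: trace the reasoning until you can defend it yourself."
    if len(result.failure_modes) < 2:
        # Part 2 failed: you don't yet know the conclusion's limits.
        return "Do not act: probe for at least two specific ways this could fail."
    return f"Act on: {conclusion}"

# Usage: a conclusion you can defend and whose weak points you can name.
result = DefenseTestResult(
    can_defend_to_skeptic=True,
    failure_modes=[
        "assumes last quarter's traffic pattern holds",
        "vendor benchmark may not match our workload",
    ],
)
print(gate_conclusion("migrate the cache layer to the new backend", result))
```

Returning a prescribed next action rather than a bare boolean mirrors the protocol's insistence that a failed test names the work left to do, not just the refusal.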