Write your explanation before asking AI — generation produces learning
When using AI for learning, write your own explanation first, then use AI interrogation to find the gaps, then revise. Never let AI write the initial explanation: reading AI output does not produce the generation effect.
Why This Is a Rule
The generation effect (Slamecka & Graf, 1978) establishes that information you actively produce is encoded more deeply than information you passively receive. Reading an AI-generated explanation of a concept feels like learning — the text is clear, the structure is logical, you nod along. But the cognitive work that produces durable understanding never happened. You consumed a finished product instead of constructing understanding through the struggle of articulation.
This rule enforces a three-step sequence: write → interrogate → revise. You write your explanation first, which forces you to discover what you actually understand and where your knowledge breaks down. Then you use AI to interrogate your explanation — finding gaps, errors, and missing connections. Then you revise, incorporating what the interrogation revealed. Each step is a generation event.
The critical constraint: AI never writes first. The moment AI produces the initial explanation, you've lost the generative step that produces learning. Everything after is editing, not generating.
When This Fires
- Learning a new concept and wanting to use AI to accelerate understanding
- Studying for an exam, certification, or technical interview
- Onboarding to a new codebase, domain, or role
- Any situation where you're tempted to ask AI "explain X to me" before writing your own understanding
Common Failure Mode
Asking AI to explain a concept, reading the explanation, and feeling like you understand it. You can follow the logic. You can nod at each step. But try to explain it to someone else an hour later and the structure collapses. The fluency of AI output creates an illusion of understanding — what psychologists call the "fluency illusion." The text was easy to read, so your brain concludes the concept is easy to understand. It isn't — you just consumed someone else's generation.
The Protocol
1. Write your explanation of the concept from memory — even if it's rough, incomplete, or wrong. The struggle is the point.
2. Paste your explanation to AI with: "What did I get wrong, what did I miss, and what connections am I failing to see?"
3. Read the AI feedback and revise your explanation yourself — don't copy the AI's wording.
4. Repeat until you can explain the concept without looking at anything. Each cycle deepens encoding.
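If you run this loop through an AI API or a script, step 2 can be enforced mechanically: build the interrogation prompt only from a draft you already wrote. This is an illustrative sketch, not any real tool — the function name `build_interrogation_prompt` and its guard logic are assumptions made for the example.

```python
def build_interrogation_prompt(my_explanation: str) -> str:
    """Wrap a self-written explanation in the step-2 interrogation request.

    The AI never writes first: this prompt only asks for critique of text
    the learner has already generated.
    """
    if not my_explanation.strip():
        # Guard against skipping step 1: an empty draft means no generation
        # event has happened yet, so there is nothing to interrogate.
        raise ValueError("Write your own explanation first (step 1).")
    return (
        "Here is my explanation, written from memory:\n\n"
        f"{my_explanation}\n\n"
        "What did I get wrong, what did I miss, "
        "and what connections am I failing to see?"
    )

# A rough, possibly wrong draft is fine -- the struggle is the point.
draft = "The generation effect: self-produced information is encoded more deeply."
prompt = build_interrogation_prompt(draft)
```

The guard clause is the whole point of the sketch: the function refuses to produce a prompt until your own draft exists, which is exactly the "AI never writes first" constraint.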