When AI suggests a framework, ask for sources, boundary conditions, and counterarguments — not yes/no
When AI assistants suggest frameworks or schemas, respond by asking for original research sources, boundary conditions, and strongest counterarguments rather than accepting or rejecting the claim directly.
Why This Is a Rule
AI assistants generate fluent, confident-sounding frameworks that feel authoritative but may be synthesized from training data without a rigorous foundation. Accepting or rejecting the framework outright ("yes, that makes sense" or "no, that's wrong") skips the verification step either way. Acceptance imports a potentially unfounded schema; rejection discards potentially valuable insight without examination.
The three-part questioning protocol treats AI output as hypothesis rather than conclusion: Original research sources (what evidence supports this? can I verify the foundational claims?), Boundary conditions (where does this framework fail? under what conditions does it not apply?), and Strongest counterarguments (what's the best case that this framework is wrong?).
This converts the AI from an authority (whose output you accept or reject) into a research assistant (whose hypotheses you investigate). The quality of the AI's response to these three questions also serves as a credibility signal: vague answers suggest the framework was synthesized rather than grounded.
When This Fires
- When an AI assistant proposes a framework, model, or principle
- When AI output sounds authoritative and you're tempted to adopt it directly
- During AI-assisted research when the AI's organizational framework feels compelling
- Any interaction where AI is generating schemas rather than executing instructions
Common Failure Mode
Accepting AI frameworks because they're well-articulated: "That's a really clear framework, I'll use it." Fluency is not evidence of accuracy. AI produces fluent, structured, coherent frameworks regardless of evidential foundation. The three-question protocol tests the foundation behind the fluency.
The Protocol
When AI suggests a framework:
1. Ask: "What original research supports this framework? Can you cite specific studies?"
2. Ask: "What are this framework's boundary conditions? Where does it fail or not apply?"
3. Ask: "What's the strongest argument against this framework?"
4. Evaluate the responses:
- Strong sources + clear boundaries + genuine counterarguments → the framework has a real foundation.
- Vague sources + no boundaries + no counterarguments → the framework was synthesized without grounding. Treat it as a hypothesis to verify, not a conclusion to adopt.
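The protocol above can be sketched as a checklist with a toy scoring rule. This is a minimal illustration, not a real credibility detector: the `VAGUE_MARKERS` list, the word-count threshold, and the `evaluate` function are all hypothetical choices made for this sketch; in practice you would read the AI's answers yourself and judge them.

```python
# The three probe questions from the protocol.
PROBE_QUESTIONS = [
    "What original research supports this framework? Can you cite specific studies?",
    "What are this framework's boundary conditions? Where does it fail or not apply?",
    "What's the strongest argument against this framework?",
]

# Hypothetical phrases that often signal an ungrounded answer (illustrative only).
VAGUE_MARKERS = ("studies show", "it is well known", "research suggests", "generally speaking")

def looks_grounded(answer: str) -> bool:
    """Crude heuristic: flag answers that are very short or lean on vague filler."""
    if len(answer.split()) < 15:
        return False
    lowered = answer.lower()
    return not any(marker in lowered for marker in VAGUE_MARKERS)

def evaluate(responses: list[str]) -> str:
    """All three answers grounded -> treat as a hypothesis to verify; otherwise flag."""
    grounded = sum(looks_grounded(r) for r in responses)
    if grounded == len(PROBE_QUESTIONS):
        return "hypothesis: verify cited sources before adopting"
    return "flag: likely synthesized without grounding"
```

Even as a toy, the structure mirrors the rule: the output is never "accept" or "reject", only "verify" or "flag", which keeps the framework in hypothesis status until the sources check out.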