Articulate your values before asking AI for values-related help — use AI to argue from YOUR values, not to provide substitute values
Before using AI for values-related decisions, articulate your values and hierarchy explicitly—then use AI to generate arguments from your stated values' perspective rather than accepting AI recommendations as substitute values.
Why This Is a Rule
AI systems have implicit values embedded in their training — tendencies toward safety, balance, conventionality, and broad optimization. When you ask AI "What should I do about [values-laden decision]?" without first articulating your own values, the AI's implicit values substitute for yours. The recommendation you receive optimizes for the AI's values profile, not yours. You then adopt the recommendation, experiencing it as a values-guided decision when it was actually an AI-values-guided decision.
The fix has two parts. First, articulate your values and hierarchy before engaging the AI. Write them down: "My relevant values for this decision are A > B > C, where A means [operational definition]." This creates an anchor that prevents the AI's implicit values from displacing yours. Second, use the AI to generate arguments from your stated values' perspective: "Given that I prioritize [value A], what are the strongest arguments for each option?" This leverages the AI's analytical power while keeping your values as the decision framework.
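As a sketch, the two-part fix can be captured as a prompt template. The function name and the example values below are illustrative only, not a prescribed format:

```python
def values_anchored_prompt(hierarchy, question):
    """Part 1: state your values and hierarchy explicitly (the anchor).
    Part 2: ask for arguments FROM that hierarchy, not a recommendation.
    `hierarchy` is a list of (value, operational definition) pairs,
    highest priority first."""
    ranked = " > ".join(value for value, _ in hierarchy)
    defs = "; ".join(f"{value} means {definition}" for value, definition in hierarchy)
    return (
        f"My relevant values for this decision are {ranked}, where {defs}. "
        f"Given that I prioritize {hierarchy[0][0]}, what are the strongest "
        f"arguments for each option regarding: {question}? "
        f"Generate arguments only; do not recommend an option."
    )

# Hypothetical example values for illustration.
prompt = values_anchored_prompt(
    [("autonomy", "I control my schedule"), ("income", "it covers my fixed costs")],
    "freelance vs. staff role",
)
print(prompt)
```

Writing the hierarchy as data before building the prompt mirrors the rule itself: the anchor exists before the AI is ever contacted.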
This is the input/judgment separation rule (AI generates inputs, you synthesize judgment: outsourcing the integration function is epistemic abdication regardless of input quality) applied specifically to values-laden decisions, where the risk of value substitution is highest.
When This Fires
- Before asking AI for advice on career decisions, relationship choices, ethical dilemmas, or life direction
- When the decision involves trade-offs between values rather than pure optimization
- When you notice AI recommendations that feel reasonable but don't quite fit your priorities
- Complements the AI defense test ("Could I defend this without the AI? Can I identify where it might be wrong?") and input/judgment separation (AI generates inputs, you synthesize judgment), applied here to values specifically
Common Failure Mode
Asking AI for values conclusions: "Should I take the risky job or the stable one?" The AI produces a balanced analysis that implicitly favors safety and conventionality (common training biases). You accept it because it sounds reasonable, without realizing it deprioritized your value of creative risk-taking, which would have produced a different answer.
The Protocol
1. Before engaging AI on a values-laden decision, write down your relevant values and their hierarchy for this specific decision.
2. Prompt the AI with your values as the framework: "Given that I prioritize [value A] over [value B], analyze the trade-offs between [options]."
3. Use the AI to generate arguments for each option through the lens of your stated values, not to recommend which option to choose.
4. Synthesize the arguments yourself (input/judgment separation: AI generates inputs, you synthesize judgment), applying your own weighing within your values framework.
5. The AI's role is analytical amplification; the values are yours. If the AI's recommendation contradicts your hierarchy, your hierarchy wins unless the AI surfaces unconsidered evidence or a verifiable reasoning error (filters 1 and 2 of the three filters for AI contradictions; a different conclusion without shown work is filter 3 and does not warrant revision).
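The protocol above can be sketched as a small workflow. Everything here is illustrative: `ask_ai(prompt) -> str` is a hypothetical callable you would wire to whatever AI interface you actually use, and the demo values are invented:

```python
def run_values_protocol(hierarchy, options, ask_ai):
    """Sketch of the protocol, assuming a hypothetical ask_ai(prompt) -> str
    callable. `hierarchy` is a list of value names, highest priority first."""
    # Step 1: the hierarchy is written down before any AI contact.
    anchor = "Given that I prioritize " + " over ".join(hierarchy)
    # Steps 2-3: request arguments per option through the stated lens,
    # never a recommendation.
    arguments = {
        option: ask_ai(
            f"{anchor}, what are the strongest arguments for '{option}'? "
            f"Arguments only; do not recommend an option."
        )
        for option in options
    }
    # Steps 4-5 are deliberately absent from the code: synthesis stays human.
    # The function returns raw inputs for YOUR judgment; if an argument
    # contradicts your hierarchy, the hierarchy wins unless filters 1 or 2 apply.
    return arguments

# Demo with a stub AI that simply echoes the prompt it received.
echo_ai = lambda prompt: prompt
inputs = run_values_protocol(
    ["creative risk-taking", "stability"],
    ["risky job", "stable job"],
    echo_ai,
)
```

Note the design choice: the function returns the per-option arguments rather than a winner, which makes it structurally impossible for the AI's implicit values to pick the option for you.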