Audit the training data before trusting AI decisions about people
When an AI system makes consequential decisions about people (hiring, performance evaluation, resource allocation), audit the organizational context and metrics that produced its training data before evaluating algorithm quality, because the AI inherits and amplifies the biases of the measurement system that generated that data.
Why This Is a Rule
AI systems that make decisions about people — who to interview, who to promote, how to allocate resources — inherit every bias embedded in the data they were trained on. If your organization's historical hiring favored graduates from three universities, the AI will learn that pattern and reproduce it. If performance reviews systematically rated certain groups lower, the AI will encode that bias as a feature, not a bug.
The critical insight: the algorithm might be technically excellent and still produce biased outcomes. Auditing the algorithm's accuracy misses the point. The bias lives upstream — in what the organization measured, who measured it, and what context shaped those measurements. A perfectly accurate model trained on biased data produces perfectly accurate bias.
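To make that concrete, here is a minimal synthetic sketch (the dataset, the favored_school feature, and all effect sizes are invented for illustration): a logistic regression fit to historical hiring labels that favored one university group learns that preference as its dominant feature, even though a separate skill variable is what actually drives performance.

```python
# Synthetic sketch: a well-fit model faithfully encodes a historical
# preference. All data and names here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature: 1 if the candidate attended a historically favored university.
favored_school = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)  # genuine ability, independent of school

# Historical label: "hired" depended heavily on school, weakly on skill.
hired = (0.5 * skill + 2.0 * favored_school + rng.normal(size=n)) > 1.0

X = np.column_stack([favored_school, skill])
model = LogisticRegression().fit(X, hired)

print("learned coefficients [school, skill]:", model.coef_[0])
# The school coefficient dominates: the model has encoded the historical
# preference as its strongest signal. Nothing about the fit is "broken";
# the bias lives in the labels it was asked to predict.
```

The point of the sketch is that inspecting the model alone would show a clean, well-calibrated fit; only knowledge of how the labels were produced reveals the problem.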
This rule mandates organizational audit before technical audit. Before asking "is the algorithm good?" ask "what organizational reality trained this algorithm?"
When This Fires
- Evaluating or deploying AI hiring tools (resume screening, interview scoring)
- Using AI-driven performance analytics that inform promotion or compensation
- Implementing AI resource allocation (project staffing, budget distribution)
- Any AI system whose output affects individual people's opportunities or evaluations
Common Failure Mode
Trusting AI decisions about people because the algorithm's accuracy metrics are high. High accuracy on historical data means the model learned to replicate historical patterns — including historical biases. An AI hiring tool that's 95% accurate at predicting "successful" hires is 95% accurate at reproducing whatever your organization historically defined as "successful," which may exclude entire categories of people who would have succeeded under different conditions.
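A minimal synthetic sketch of this failure mode (the group variable, effect sizes, and threshold are invented for illustration): the model scores high accuracy on held-out historical data precisely because it replicates the historical pattern, including its group skew.

```python
# Synthetic sketch of the accuracy paradox: high held-out accuracy and a
# large selection-rate gap coexist. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)  # stand-in for a demographic attribute
skill = rng.normal(size=n)
# Historical "successful hire" labels were skewed against group == 1.
label = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([group, skill])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

pred = model.predict(X_te)
print("held-out accuracy:", (pred == y_te).mean())
print("selection rate, group 0:", pred[X_te[:, 0] == 0].mean())
print("selection rate, group 1:", pred[X_te[:, 0] == 1].mean())
# The accuracy metric rewards replicating the historical definition of
# "successful"; the selection-rate gap is what that definition encoded.
```

In real systems the group variable is rarely an explicit feature; it leaks in through proxies (school, zip code, employment gaps), which makes the gap harder to see but no less real.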
The Protocol
Before deploying or trusting an AI system that affects people:
1. Document what data trained the system: who collected it, over what time period, and under what organizational conditions.
2. Identify which organizational biases could be encoded in that data.
3. Check whether the system's outputs correlate with demographic factors they shouldn't (a sketch follows below).
4. Only after the organizational audit passes should you evaluate algorithm quality.

If the training data is biased, improving algorithm accuracy makes the bias more precise, not less harmful.
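As one way to implement step 3, here is a minimal sketch comparing selection rates across groups. The function names are this sketch's own, and the 0.8 cutoff mentioned in the comments follows the common "four-fifths" rule of thumb for flagging adverse impact; treat it as a screening heuristic that triggers deeper investigation, not a legal standard.

```python
# Sketch of a step-3 demographic correlation check: compare each group's
# selection rate against the best-treated group's rate.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, groups):
    """Each group's selection rate divided by the highest group's rate.

    Ratios below ~0.8 (the four-fifths rule of thumb) are a conventional
    red flag that outputs correlate with group membership and the
    organizational audit should dig deeper.
    """
    rates = selection_rates(decisions, groups)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Example with made-up screening outcomes:
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(adverse_impact_ratios(decisions, groups))
# {'a': 1.0, 'b': 0.333...}: group b is selected at a third of group a's
# rate, well below the 0.8 heuristic, so the audit should continue.
```

A failing ratio does not by itself say where the bias entered; it sends you back to steps 1 and 2 to find which measurement or organizational condition produced the gap.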