Limit tool candidates to 2-3 before evaluating — more options increase decision time and decrease satisfaction regardless of objective quality
Limit active tool candidates to two or three options before evaluation: every additional option lengthens the decision while decreasing satisfaction, regardless of whether you end up choosing an objectively better tool.
Why This Is a Rule
Barry Schwartz's Paradox of Choice research demonstrates that more options produce worse outcomes on two dimensions simultaneously: decisions take longer (more comparisons are needed) and satisfaction decreases (more alternatives generate more regret about paths not taken). Evaluating 8 note-taking apps takes roughly 4x longer than evaluating 3, and you'll be less satisfied with your choice because you'll imagine what the 7 rejected apps offered that yours doesn't.
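One way to see why the time cost outpaces the candidate count: if choosing requires weighing every candidate against every other, the number of comparisons grows quadratically with the field size. A minimal sketch, assuming this pairwise model (an illustration, not a figure from Schwartz's research):

```python
from math import comb

# Illustrative assumption: evaluation time scales with the number of
# pairwise comparisons among candidates, C(n, 2) = n * (n - 1) / 2.
for n in (2, 3, 5, 8):
    print(f"{n} candidates -> {comb(n, 2)} pairwise comparisons")

# Output:
# 2 candidates -> 1 pairwise comparisons
# 3 candidates -> 3 pairwise comparisons
# 5 candidates -> 10 pairwise comparisons
# 8 candidates -> 28 pairwise comparisons
```

Under this model, going from 3 to 8 candidates multiplies the comparisons nearly tenfold, which makes the 4x figure above conservative.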
The 2-3 candidate limit is the sweet spot: enough options for meaningful comparison (1 option is no choice; 2 enables comparison; 3 enables triangulation) but not so many that choice overload kicks in (5+ candidates create decision paralysis). The limit is applied before evaluation, not after: narrow the field to 2-3 candidates using quick filters (the job definition from "Define the specific job the tool must do in one sentence before evaluating" and the minimum requirements from "Pick a tool with 5 minimum requirements, select the first that meets them, commit for 90 days"), then evaluate only those candidates in depth.
This is the 3-7 option range from "Limit recurring low-stakes decisions to 3-7 options — this range matches working memory for effortless comparison", applied specifically to tool selection and tightened to 2-3 because tool evaluation is particularly susceptible to feature-comparison spirals: each new candidate reveals new dimensions to compare, expanding the evaluation indefinitely.
When This Fires
- After defining the job ("Define the specific job the tool must do in one sentence before evaluating") and the requirements ("Pick a tool with 5 minimum requirements, select the first that meets them, commit for 90 days"), when deciding which tools to evaluate
- When tool research has been ongoing for weeks with no convergence
- When a comparison spreadsheet has 10+ columns and still doesn't produce a clear winner
- Complements "Pick a tool with 5 minimum requirements, select the first that meets them, commit for 90 days" (satisficing) and "Define the specific job the tool must do in one sentence before evaluating" (job definition) by adding the candidate-count constraint
Common Failure Mode
The comprehensive comparison: evaluating every tool in a category to ensure you've found "the best one." The comparison takes weeks, each new tool reveals features the others lack, and the decision feels harder with each addition rather than easier. By the time you choose, you've spent more time evaluating than you'll save in the first year of use.
The Protocol
1. After defining your job and requirements, do a quick scan of the available tools. Spend no more than 30 minutes identifying the landscape.
2. Filter to 2-3 candidates using your hard requirements ("Pick a tool with 5 minimum requirements, select the first that meets them, commit for 90 days"): which tools meet all 5 minimum requirements? If more than 3 pass, take the 3 most popular (a larger community means better support). A sketch of this filter follows the list.
3. Evaluate only these 2-3 candidates. Do not add candidates during evaluation, even if someone recommends another tool; it goes on a list for the next evaluation cycle.
4. Choose among the 2-3 within your evaluation time budget ("Scale tool evaluation effort to switching cost, not feature count — maximum deliberation for high lock-in, minimal for easily replaced tools").
5. Accept that limiting the field to 3 means you may have missed a slightly better tool. The time saved by not evaluating it is worth more than the marginal improvement it might have offered.
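Below is a minimal sketch of steps 2 and 3 in Python. Everything in it is hypothetical: the requirement keys, tool names, and popularity numbers are placeholders, and popularity-as-tiebreaker is this rule's heuristic, not measured data.

```python
from dataclasses import dataclass, field

REQUIREMENTS = {  # your 5 minimum requirements, from the job definition (hypothetical keys)
    "mobile_capture", "laptop_retrieval", "offline", "export", "search",
}

@dataclass
class Tool:
    name: str
    popularity: int                            # proxy for community size / support
    meets: set = field(default_factory=set)    # requirements this tool satisfies

def shortlist(landscape, max_candidates=3):
    """Keep only tools meeting ALL requirements; break ties by popularity."""
    qualified = [t for t in landscape if REQUIREMENTS <= t.meets]
    # If more than max_candidates pass, keep the most popular ones.
    qualified.sort(key=lambda t: t.popularity, reverse=True)
    return qualified[:max_candidates]          # hard cap: never evaluate more

# Placeholder landscape from the 30-minute scan
landscape = [
    Tool("AppA", 90_000, set(REQUIREMENTS)),
    Tool("AppB", 40_000, set(REQUIREMENTS)),
    Tool("AppC", 75_000, REQUIREMENTS - {"offline"}),  # fails one requirement
    Tool("AppD", 55_000, set(REQUIREMENTS)),
    Tool("AppE", 60_000, set(REQUIREMENTS)),
]

for tool in shortlist(landscape):
    print(tool.name)   # AppA, AppE, AppD -> evaluate only these in depth
```

The discipline lives in the hard cap and the frozen input: a tool recommended mid-evaluation goes on a list for the next cycle rather than into the landscape.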