Sep 20 • Leo Hoar

10 Survey Design Mistakes Every UX Researcher Makes (And How to Fix Them)

Surveys are the most deceptively difficult user research method. Tools like Google Forms and SurveyMonkey make surveys seem effortless. But without the right training, researchers are likely to make mistakes they're not even aware of and to produce data that is biased and misleading. Learning proper methodology and question construction enables researchers to avoid costly errors and use surveys to gather robust, quantified insights.

The Question That Destroyed Three Months of Research

To quote Sophia from The Golden Girls, "Picture this":

You're presenting survey findings to the C-suite. Three months of work, 2,000 respondents, and what should be your career-defining moment. Then someone points to one of your survey questions and asks: "Doesn't this term mean different things to different customer groups?"

Time to be honest. This isn't a generic example. This happened to me. The specific term was one we used around the company all the time. I never stopped to wonder what it might mean to actual survey respondents.

What's the big deal? Well, a respondent who understands the word differently will answer differently just because of that word. So the trend we see in responses, where Group A answers one way and Group B answers another, comes from their different understanding of that word, not from any other meaningful difference.

This is just one example of what makes so many survey mistakes dangerous: there's nothing in the data itself, or in the survey tool you're using, to alert you to the error.

The data might look perfectly coherent and consistent. It seems to tell a story.

So you only realize the error when you get a probing question, or someone with survey expertise pokes at your process a bit.

This scenario plays out in organizations everywhere. Survey design mistakes don't just affect data quality. They erode professional credibility and waste organizational resources.

Why UX Researchers Need to Know the Theory behind Survey Design

Survey methodology research informed by cognitive psychology shows that even minor question flaws create systematic bias. The Tourangeau, Rips, and Rasinski model of survey response describes how respondents move through comprehension, retrieval, judgment, and response selection, with opportunities for error at each stage.

Yet most UX researchers learn survey design through trial and error rather than evidence-based principles. The result? Preventable mistakes that experienced practitioners make repeatedly under deadline pressure.

The Six Pillars of Scientifically Sound Survey Design

Construct Validity: One Question, One Concept

The Principle: Each question must measure exactly what you intend, without conceptual drift.

The Research: Mixed constructs create uninterpretable data. Research by Krosnick (1999) shows that compound questions reduce measurement reliability by 40-60%.

Cognitive Load Management: Minimize Mental Effort

The Principle: Respondents have limited processing capacity. Exceed it, and they satisfice—taking mental shortcuts that compromise data quality.

The Research: Satisficing behavior increases dramatically with question complexity (Krosnick & Presser, 2010). Double negatives, long matrices, and abstract concepts trigger this response.

Response Scale Optimization: Balanced and Meaningful

The Principle: Scale endpoints and intervals must reflect genuine psychological distances between response options.

The Research: Schwarz & Hippler (1987) demonstrated that unbalanced scales create systematic bias toward the overrepresented pole.

Temporal Specificity: Realistic Recall Windows

The Principle: Memory accuracy degrades predictably over time, following well-established forgetting curves.

The Research: Sudman & Bradburn (1982) found recall accuracy drops 50% beyond 2-4 weeks for behavioral questions.

Linguistic Precision: Shared Understanding

The Principle: Question comprehension varies dramatically across populations. Jargon creates measurement error.

The Research: Presser & Blair (1994) showed that technical terminology increases "don't know" responses by 15-30%.

Response Category Completeness: Exhaustive and Exclusive

The Principle: Categories must cover all possible responses without overlap.

The Research: Lenzner and Menold (2016) warn that categories that are not unequivocal and mutually exclusive impair answer accuracy.

10 Survey Design Mistakes (And How to Fix Them)

These six pillars provide the theoretical foundation, but theory alone isn't enough. Even UX researchers who understand these principles intellectually still fall into predictable traps when designing surveys under pressure.

The following ten mistakes represent the most common errors I've observed across hundreds of UX research projects.


Each mistake violates one or more of the pillars above, creating specific types of bias that produce misleading data in ways that aren't immediately obvious. More importantly, each has a straightforward fix once you recognize the underlying problem.

Mistake #1: Survivorship Bias in Sample Selection

Example: Sending a survey about "why users stop using our product" only to current active customers, or asking current subscribers why people cancel subscriptions.


Flaw: You're missing the exact people whose opinions matter most—the ones who actually left. Current users can only guess why others quit, and their guesses are usually wrong. This creates survivorship bias, where you only hear from the "survivors" who stayed, not the people who experienced the real problems.

Fix: Survey recent churned users, cancelled subscribers, or people who tried your product but didn't convert. Yes, it's harder to reach them, but their insights are often more valuable than current users' opinions.

Mistake #2: Double-Barreled Questions

Example: "Rate your satisfaction with our product's ease of use and visual design."

Flaw: You're really asking about two different things—how easy it is to use AND how it looks—but only giving people one way to respond. Someone might love how it looks but hate how it works. This is called a compound question (or double-barreled question), and it makes your data impossible to interpret clearly.

Fix: Split this into two separate questions: one about ease of use, one about visual design. Each question should measure just one concept (this is called construct validity).

Mistake #3: Recall Bias from Inconsistent Time Frames

Example: One question asks "In the past week, how often...?" and the next asks "In the past month, how often...?"


Flaw: People have to keep switching their mental timeframe, which is confusing. This leads to more wrong answers because they mix up the time periods (researchers call this recall bias—when memory errors affect survey responses).

Fix: Keep the same time period for related questions. If you must change it, call it out clearly.

Mistake #4: Priming Bias from Question Order

Example: Starting your survey with detailed questions about security and privacy concerns, then asking about overall product satisfaction later.

Flaw: The early questions make people think about security issues they weren't considering before. This artificially lowers their satisfaction scores and creates false connections between security and satisfaction. It's like asking "Do you think clowns are scary?" right before "How do you feel about birthday parties?"

Fix: Put general questions (like overall satisfaction) before specific ones (like detailed feature ratings). Or test different question orders with different groups to see if order changes your results.

Mistake #5: Comprehension Error from Technical Jargon

Example: "How often do you use advanced analytics features?"

Flaw: Not everyone knows what "advanced analytics" means. People will guess, skip the question, or answer based on the wrong understanding. This creates comprehension error—when people misunderstand what you're asking about.


Fix: Use everyday language or explain the term: "How often do you use advanced features (like custom reports, data exports, or prediction tools)?"

Mistake #6: Unbalanced Response Scales

Example: "How valuable is this feature? Very valuable, Somewhat valuable, Neutral"

Flaw: There's no way for someone to say the feature isn't valuable at all. This pushes all your results to look more positive than they really are. This creates what's called response bias—when your answer choices influence people toward certain responses.

Fix: Provide the full range from negative to positive: "Not at all valuable, Slightly valuable, Somewhat valuable, Very valuable, Extremely valuable" (this is called a balanced scale).

Mistake #7: Order Bias in Lists and Rankings

Example: A ranking question where features are always listed as: Performance, Price, Usability, Security.

Flaw: People are more likely to pick items that appear first, just because they see them first (this is called primacy effect). This makes the first few options look more popular than they really are, creating order bias in your results.

Fix: Show the list in a different random order for each person who takes your survey (called randomization).
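
Most survey platforms can randomize option order for you; if yours can't, or you field surveys from custom code, here is a minimal Python sketch of per-respondent randomization. The option list, function name, and the idea of seeding with a respondent ID are illustrative assumptions, not any particular tool's API.

```python
import random

FEATURES = ["Performance", "Price", "Usability", "Security"]  # placeholder ranking options

def options_for_respondent(respondent_id: str) -> list[str]:
    """Return the options in a random order that stays the same for a given respondent."""
    rng = random.Random(respondent_id)  # seeding with the ID keeps the order stable across page reloads
    shuffled = FEATURES.copy()
    rng.shuffle(shuffled)
    return shuffled

print(options_for_respondent("resp-001"))
print(options_for_respondent("resp-002"))  # a different respondent sees a different order
```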

Mistake #8: Acquiescence Bias from Unidirectional Scales

Example: Every agreement question has "Strongly Agree" as 5 and "Strongly Disagree" as 1, or every satisfaction question has "Very Satisfied" as the highest number.

Flaw: Some people are natural "yea-sayers" who tend to pick high numbers, while others are "nay-sayers" who pick low numbers, regardless of the actual question. When all your scales go the same direction, you're measuring personality differences instead of real opinions. This is called response style bias or acquiescence bias.

Fix: Reverse the direction of some questions so that sometimes "agree" is good and sometimes "disagree" is good. This helps you spot people who aren't reading carefully and gives you cleaner data about actual attitudes.
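
If you mix question directions, you also need to flip the reverse-keyed items back before analysis so that a higher score always means the same thing. A minimal sketch of that re-scoring step, assuming a 1-5 agreement scale and made-up item names:

```python
# Hypothetical items on a 1-5 agreement scale (1 = Strongly Disagree, 5 = Strongly Agree).
SCALE_MAX = 5
REVERSE_KEYED = {"q2_hard_to_learn", "q4_visually_cluttered"}  # worded so "disagree" is the favorable answer

responses = {
    "q1_easy_to_use": 4,
    "q2_hard_to_learn": 2,
    "q3_looks_polished": 5,
    "q4_visually_cluttered": 1,
}

def rescore(item: str, value: int) -> int:
    """Flip reverse-keyed items so a higher score always means a more favorable attitude."""
    return SCALE_MAX + 1 - value if item in REVERSE_KEYED else value

scored = {item: rescore(item, value) for item, value in responses.items()}
print(scored)                              # reverse-keyed items are now on the same footing as the rest
print(sum(scored.values()) / len(scored))  # simple composite attitude score
```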

Mistake #9: Non-Mutually Exclusive Response Categories

Example: "How many hours per week do you use the app? 0-5, 5-10, 10-15, 15+"

Flaw: If someone uses it exactly 5 hours, they don't know whether to pick "0-5" or "5-10." This creates confusion and inconsistent answers. Good response categories should be mutually exclusive (no overlap) and exhaustive (covering all possibilities).

Fix: Make ranges that don't overlap: "0-4, 5-9, 10-14, 15+"
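
Writing the category logic out as code is a quick way to check that every possible answer lands in exactly one bucket. A minimal sketch, assuming respondents report whole hours; the function name and ranges simply mirror the fix above:

```python
def usage_bucket(hours_per_week: int) -> str:
    """Map whole weekly hours to exactly one category (exhaustive and mutually exclusive)."""
    if hours_per_week < 0:
        raise ValueError("hours cannot be negative")
    if hours_per_week <= 4:
        return "0-4"
    if hours_per_week <= 9:
        return "5-9"
    if hours_per_week <= 14:
        return "10-14"
    return "15+"

# Every answer, including exactly 5 hours, now has exactly one home.
for hours in (0, 4, 5, 9, 10, 14, 15, 40):
    print(hours, "->", usage_bucket(hours))
```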

Mistake #10: Satisficing from Oversized Matrix Questions

Example: A massive grid asking people to rate 10 different features on 8 different qualities (like usefulness, ease, design, etc.).

Flaw: People give up, start picking the same column repeatedly, or rush through without thinking. Large grids produce low-quality data because they trigger satisficing behavior—when people take mental shortcuts instead of giving thoughtful answers.

Fix: Break big grids into smaller chunks (5×5 maximum) or show different people different parts of the grid.
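
A rough sketch of both options: splitting the 10x8 grid into blocks of at most five rows and columns, and giving each respondent a random subset of features. The feature and quality names are placeholders, not real survey content.

```python
import random

FEATURES = [f"Feature {i}" for i in range(1, 11)]          # 10 placeholder features
QUALITIES = ["Usefulness", "Ease", "Design", "Speed",
             "Reliability", "Clarity", "Value", "Fit"]      # 8 placeholder qualities

def chunk(items: list[str], size: int) -> list[list[str]]:
    """Split a list into consecutive blocks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Option A: replace the single 10x8 grid with several pages, each at most 5x5.
pages = [(f_block, q_block) for f_block in chunk(FEATURES, 5) for q_block in chunk(QUALITIES, 5)]
print(len(pages), "smaller matrix pages instead of one 10x8 grid")

# Option B: show each respondent a random subset of features; across the sample the full grid is covered.
def features_for_respondent(respondent_id: str, per_person: int = 5) -> list[str]:
    rng = random.Random(respondent_id)
    return rng.sample(FEATURES, per_person)

print(features_for_respondent("resp-001"))
```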

The UX Researcher's Survey Quality Checklist

Knowing these mistakes is one thing—consistently avoiding them under deadline pressure is another. This research-backed checklist transforms the principles and mistake patterns above into a practical quality assurance process. Use it to audit every survey before launch, ensuring your questions meet the scientific standards that produce reliable, actionable data.

The checklist is organized around the cognitive processes respondents go through when answering questions, making it easier to spot problems before they contaminate your data.

1. Construct Validity (One Question, One Concept)

  • Does this question ask about only one thing at a time?
  • If I showed this question to a colleague, would they understand it the same way I do?

2. Cognitive Load Management (Minimize Mental Effort)

  • Can someone answer this question without having to read it multiple times?
  • Are there any confusing double negatives or complex sentences?
  • Is the question short and straightforward?

3. Response Scale Optimization (Balanced and Meaningful)

  • Do my answer choices cover all the ways someone might feel or respond?
  • Are there equal numbers of positive and negative options?
  • Do the answer choices make sense and feel evenly spaced?

4. Temporal Specificity (Realistic Recall Windows)

  • Am I asking people to remember something they can actually remember accurately?
  • Are my time periods consistent across related questions (all "past week" or all "past month")?

5. Linguistic Precision (Shared Understanding)

  • Would someone outside my company understand all the words I'm using?
  • Have I explained any technical terms or given examples?
  • Is this written in everyday language?

6. Response Category Completeness (Exhaustive and Exclusive)

  • Do my answer choices cover every possible response without gaps?
  • Can someone pick exactly one answer without confusion (no overlapping ranges)?
  • Is there a clear place for every possible respondent to fit?

Why Survey Methodology Mastery Accelerates UX Careers

The stakes extend far beyond any individual project. Poor survey data doesn't just compromise one study; it erodes stakeholder confidence in the research function itself. Organizations invest heavily in user research expecting actionable, defensible insights.


When survey data quality is questionable, decision-makers bypass research entirely. They make product decisions based on intuition, sales feedback, or competitor analysis instead of user data.


Conversely, UX researchers who consistently deliver high-quality quantitative insights become strategic partners. They influence roadmaps, justify resource allocation, and advance into senior research leadership roles.

Level Up Your Survey Design Skills

Survey methodology isn't optional for UX researchers—it's a core competency that separates tactical contributors from strategic research leaders.

At UXR Institute, our Quantitative UX Research Course in Survey Design teaches the principles behind trustworthy quantitative data.

Course includes:

  • A deep dive into the concepts you need to design questions that produce accurate, reliable data
  • Live expert feedback on your survey questions
  • Hands-on work on real examples from UX research practice


Next cohort enrollment closes soon.
Limited seats available.
