Peaks and Pitfalls: The Most Common Survey Errors and How to Avoid Them
Course features
Level: Foundational
Duration: 1 hour
Date: Monday, November 17, 2025, 3–4 pm EST (US)
Format: Live Online via Google Meet
Workshop Description
If you run surveys in your role but need a refresher on the pitfalls that can send your data off a cliff, this workshop is for you. We'll take a high-level tour of the major errors at each stage of surveying, from planning to question design to sampling, including the biases that undermine the generalizability of your insights. This practical workshop walks through real examples of surveys gone wrong, showing you exactly where errors creep in and how to catch them before launch. You'll leave with a battle-tested error checklist and the confidence to design surveys that actually measure what you think they're measuring.
Workshop Format
Running surveys effectively means learning theory and applying it. At the UXR Institute, we take a hands-on approach: we'll study real examples of problematic surveys, diagnose what went wrong, and discuss the best practices that will help you avoid the same mistakes in your own survey practice.
How the 60-minute session unfolds:
Kickoff (5 min): Poll + shared goals
Segment 1 (20 min): Question design disasters
Segment 2 (15 min): Sampling sins and timing bias
Segment 3 (10 min): The cognitive testing gap (and common missteps)
Segment 4 (8 min): Other fatal flaws + pilot pitfalls
Wrap (2 min): Q&A + take-home checklist
What new skills will I gain from this workshop?
The ability to spot errors in question design that compromise your data
An understanding of the most important best practices for writing survey questions
A practical sense of sample quality vs. sample size (and how timing skews results)
A primer on cognitive testing
A compact, team-friendly checklist you can run in 10 minutes before launch
How will this workshop help my career?
You’ll gain more methodological authority in your organization
You’ll justify tradeoffs (e.g., response rate vs. representation) using clear risk language
You’ll cut down on rework by catching flaws early, saving time, budget, and stakeholder trust
Who is this workshop for?
UX Researchers, Research Leads, and research-leaning PMs or Designers who field surveys
Teams that depend on survey data for product decisions, roadmap prioritization, or OKRs
Anyone who has shipped a survey and later wondered, “Can we trust these results?”
Workshop Outline
Section 1: Survey Planning
Many UX research teams skip planning and jump straight into writing questions. Basic planning principles, including distinguishing the constructs you want to measure from the questions that measure them, help you avoid bad data down the line.
Section 2: Question Design Disasters
See how common wording errors quietly destroy data quality. We'll spot these traps in real examples, then explore the principles that will help you avoid them in your own surveys.
Section 3: Cognitive Testing
Learn why skipping cognitive testing leads to misinterpreted questions and junk data. Practice a quick think-aloud exercise to expose hidden confusion before your survey launches.
Section 4: Survey Disaster Aversion Checklist
We'll run through a final checklist you can use with your team to increase the chances that your surveys gather usable, accurate data.
Leo Hoar, PhD
Bio
Leo founded the UXR Institute because he loves seeing other researchers grow and thrive. He draws on nearly ten years of doing UX research and building research teams, as well as a previous life teaching at universities and training new teachers.
While advising and working at startups, Leo learned how to balance the need for rigor with the need for speed and flexibility. He crafted this workshop to make it easier for researchers and people who do research to execute reliable surveys under real-world conditions.
Free Advisory Session
Have questions about the workshop? Want to chat about your learning goals to see if they align with the course approach? Book a free call with the instructor.
Learning Outcomes
By workshop completion, you will confidently:
Diagnose problematic questions on sight, before they wreck your data
Detect unbalanced scales, overlapping ranges, and missing or forced options that bias data
Spot sampling, coverage, timing, and non-response bias before they distort findings
Understand what cognitive testing is and why it matters
Apply a concise Error Prevention Checklist to de-risk launches under time pressure