Quantitative UX Research Workshop

Statistical Significance Testing for Survey Data

Course features
  • Level: Intermediate
  • Duration: 90 minutes
  • Date: Tuesday, November 18, 12:30-2:00 pm EST (US)
  • Format: Live online via Google Meet

Workshop Description

In this focused, hands-on session, you’ll learn how to apply statistical significance tests—specifically t-tests and ANOVA—to survey data to validate product decisions and improve UX with confidence. These tests help you determine whether differences you see in ratings or metrics are real effects or random noise. For example:

  • T-tests: Compare two groups (e.g., whether users prefer Product Concept A vs. B, or whether premium users rate a feature differently than free users).
  • ANOVA: Compare three or more groups (e.g., satisfaction across pricing tiers, customer segments, or feature bundles) without inflating error rates from running multiple t-tests.

Using these statistical tests transforms your research from opinion into evidence, making it easier to defend design decisions, prioritize user experience improvements, and secure resources for the changes that truly matter. While the workshop centers on survey and rating data (the bread-and-butter of UX research), you'll also see how the same logic applies to usability test metrics, task completion times, and behavioral data.

Workshop Format

Most researchers learn statistics through abstract theory with little connection to day-to-day work. At the UXR Institute, we take a different approach: you’ll apply significance tests directly to realistic survey datasets that mirror common product research scenarios, moving from understanding to doing in minutes.

You’ll follow a practical sequence:

  • Import and prepare survey data for testing.
  • Run a t-test to compare two product options or user cohorts.
  • Extend to ANOVA to test three or more product variations or segments.
  • Interpret p-values and confidence levels, then visualize and communicate what the results mean for product and UX decisions.


How the live format helps: You’ll work through guided exercises using shared datasets, making calls a product team would actually face—like whether to prioritize a feature, rework an onboarding flow, or adjust pricing tiers. This applied format prevents common mistakes (e.g., misusing multiple t-tests) and emphasizes decision quality rather than math for its own sake.
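The multiple-comparison pitfall is easy to quantify. Each test run at a 0.05 significance level carries a 5% false-positive risk on its own, and those risks compound across tests; comparing four groups pairwise already requires six t-tests. A quick sketch in plain Python (illustrative only, not part of the workshop's Sheets/Excel materials):

```python
# Familywise error rate: the chance of at least one false positive
# across k independent tests, each run at significance level alpha.
alpha = 0.05

for k in (1, 3, 6):  # 3 groups -> 3 pairwise t-tests; 4 groups -> 6
    familywise = 1 - (1 - alpha) ** k
    print(f"{k} test(s): familywise error rate = {familywise:.1%}")
```

With three pairwise t-tests the false-positive risk climbs to roughly 14%, and with six to about 26%, which is why a single omnibus ANOVA is the right first step for multi-group comparisons.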


Tools you’ll need:
Google Sheets or Excel (templates provided). No coding required.

For ANOVA, install the XLMiner Analysis ToolPak add-on for Google Sheets or enable the Analysis ToolPak in Excel.

What new skills will I gain from this workshop?

You’ll gain practical, job-ready skills for quantitative validation:

  • Make sense of differences in survey ratings (satisfaction, feature preference, willingness-to-pay).
  • Choose the appropriate test for two groups (t-test) vs. three or more groups (ANOVA).
  • Detect meaningful differences across pricing tiers, customer segments, or feature bundles.
  • Translate statistical outputs into defensible recommendations for UX and product stakeholders.
  • Adapt the same methods to behavioral/experimental data (activation rates, task times, engagement).

T-tests

T-tests determine whether there's a statistically significant difference between two groups or conditions, making them essential for A/B testing and concept validation. Use them when comparing user performance, satisfaction, or behavior between two design options or user segments. They give you the confidence to say "Version A is genuinely better than Version B" rather than relying on gut feelings or small sample observations.
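The workshop runs these tests in spreadsheet templates, but for readers who want to see the mechanics, here is a minimal sketch in Python with SciPy; the concept ratings below are invented illustrative data:

```python
from scipy import stats

# Hypothetical 1-7 satisfaction ratings for two product concepts
concept_a = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]
concept_b = [4, 5, 5, 4, 6, 5, 4, 5, 5, 4]

# Welch's t-test: compares the two means without assuming equal variances
t_stat, p_value = stats.ttest_ind(concept_a, concept_b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 95% confidence level.")
else:
    print("The difference could plausibly be noise; hold steady.")
```

A p-value below 0.05 lets you report the preference for Concept A as significant; if it came back above 0.05, the defensible call would be to hold steady rather than declare a winner.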

ANOVA (Analysis of Variance)

ANOVA extends t-test logic to compare three or more groups simultaneously, perfect for testing multiple design variations or user segments at once. Use it when you have several concepts to evaluate or want to understand how different user types respond to your product. It prevents the statistical errors that occur when running multiple t-tests and reveals which variations truly perform differently.
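Continuing the same hypothetical setup, a one-way ANOVA looks like this in SciPy (again a sketch with made-up ratings, not the workshop's spreadsheet workflow):

```python
from scipy import stats

# Hypothetical 1-7 satisfaction ratings across three pricing tiers
basic      = [4, 5, 4, 5, 5, 4, 5, 4]
pro        = [5, 6, 5, 6, 6, 5, 6, 5]
enterprise = [6, 7, 6, 7, 6, 7, 6, 7]

# One-way ANOVA: a single omnibus test for "does any group mean differ?"
f_stat, p_value = stats.f_oneway(basic, pro, enterprise)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant result says only that at least one tier differs; a post-hoc test such as Tukey's HSD is the standard follow-up for pinpointing which pairs drive the effect.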

How will this workshop help my career?

Validate product decisions with authority: Prove when one feature or product direction significantly outperforms another, identify which customer segments show the strongest signals for specific capabilities, and confidently recommend holding steady when differences aren't statistically meaningful (saving your team from building features that won't move the needle).

Increase research impact: Move from "users seemed to prefer this" to "users significantly preferred this at 95% confidence," making your insights harder to dismiss and easier to act on in roadmap planning and investment decisions.

Drive product strategy: Support feature prioritization with evidence of what significantly impacts satisfaction and retention, validate market segmentation strategies (enterprise vs. SMB, power users vs. casual), and contribute data-driven input to pricing, packaging, and go-to-market decisions.

Build credibility with stakeholders: Back your recommendations with statistical rigor instead of subjective interpretation, making it easier to secure buy-in and resources for product investments, new initiatives, and strategic pivots.

Who is this workshop for?

  • UX researchers who run or analyze surveys and need to verify findings statistically.
  • Product managers who make roadmap, segmentation, and pricing decisions informed by data.
  • Qualitative researchers expanding into mixed methods and seeking quantitative confidence. 


Prerequisites: None. Familiarity with surveys is helpful; a statistics background is not required.

Cheryl Abellanoza, PhD

Bio
Cheryl is the Associate Director of User Experience Research at Verizon Connect, where her team integrates quantitative methods into its mixed-methods approach. She is passionate about applying quantitative methods drawn from behavioral research, particularly through a cognitive psychology lens.

Learning Outcomes

By course completion, you will confidently:
  • Identify when to apply t-tests vs. ANOVA for statistical significance testing on survey data.
  • Prepare and structure survey datasets correctly for valid analysis.
  • Run both tests in Google Sheets or Excel using provided templates.
  • Interpret p-values, confidence levels, and group differences to decide if effects are real.
  • Communicate results clearly so product teams can act (prioritize, iterate, or hold steady).

Workshop Outline

Part 1: Running T-Tests on Survey Data

Learn to determine whether differences between two groups are real or just noise. You'll compare satisfaction or preference scores for two product options, or contrast how premium vs. free users rate a key feature. Through hands-on practice, you'll prepare the dataset, run the t-test in Sheets/Excel, interpret p-values and confidence levels, and craft a one-slide summary that makes a clear recommendation—like whether to prioritize Concept A or stick with the current approach if the difference isn't significant.

Part 2: Applying ANOVA to Multiple Product Variations

Discover how to evaluate differences across three or more groups without inflating error rates. You'll test satisfaction across pricing tiers, customer segments (enterprise, SMB, startup), or feature bundles to identify which groups drive the effect. In this session, you'll run the ANOVA in Sheets/Excel, visualize group means with error bars, and translate the results into a clear recommendation.