Qualitative UX Research Course

User Interviews for Strategic Insight

Design and Moderate Interviews that Drive Real Decisions

Zero Risk Enrollment: Receive a full refund through the end of the first class day.
Course features
Next Cohort Dates
June 8-22, 2026
Meeting Time
Mondays, 12-2pm ET US
Course Type
Live Online
Duration
3 weeks
Price
$295
Credential
Certificate of Completion

Build the User Interview Skills That Influence Product Decisions

You already run good user interviews. You've built rapport with strangers, recovered from technical disasters mid-session, and pulled helpful insights out of hours of transcripts.

This course is about a specific upgrade: designing and conducting interviews that directly drive strategic decisions.

What's the difference? For one thing, strategic decisions require insights that tell stakeholders something new and non-obvious. These come from rich, detailed conversations.

To get there, your moderation guides need to move from being lists of topics to being instruments for testing the assumptions a decision actually rests on. Your interviews push past users' surface beliefs and opinions into the mental models underneath, where strategic insight actually lives.

Across three sessions, you will develop and refine a moderator's guide using a shared product scenario. The course is hands-on throughout, combining structured critique, probing practice, live interview simulations in groups of three, and a real pilot interview between sessions.

The course deliverable is a moderator's guide template that will make sure your future interviews are tightly connected to the strategic decisions they inform.

Why Learn to Run Strategic User Interviews

As they advance in their careers, UX researchers are increasingly asked to inform decisions that go beyond design optimization. Whether to build, who to build for, what bet to place next quarter: these are the questions stakeholders bring to research now, and user interviews are still the most flexible tool for answering them.

The interview techniques that work for usability and concept feedback don't automatically scale up to strategic questions. Three things have to shift:

  • The design has to tighten: every question needs to trace back to an assumption the team is actually weighing, not just a topic the team is curious about.
  • The moderation has to be more skillful: it's no longer just about writing good questions; it's about guiding the whole conversation in a way that avoids bias and creates conditions for participants to give rich, detailed answers.
  • The interviews have to go deeper: strategic insight rarely lives at the level of users' opinions and reactions. It lives in their mental models, in the structures of belief and reasoning underneath the surface.

When you complete all three of these shifts, interviews become reliable and precise enough to settle product questions. Stakeholders bring you the decision they're stuck on. You bring back evidence about the assumption that's actually load-bearing. The conversation in the room shifts from "what did users say?" to "what should we do?", and you're the one with the answer.

Course Format: Intensive Live Practice With Expert Coaching 

This course is structured to reflect how user interviews actually get sharper: through doing, getting feedback, and revising in real time.

You will work with a shared product scenario, Atlas (a fictional travel concierge app), that gives every cohort the same context to design interviews around. This shared grounding lets the cohort compare guides, pressure-test questions across different framings, and learn from how other researchers approached the same product decisions.

Week 1: Build the interview (hands-on design)
Week 2: Practice and improve (hands-on moderation)
Week 3: Diagnose and strengthen (applied insight work)

Each session combines short instruction, hands-on workshops, peer critique rounds, and live moderation practice. Between Weeks 2 and 3, you will conduct a real pilot interview using your refined guide.

By the end, you will have a refined moderator's guide template built upon a practical framework for connecting interview work to the decisions your stakeholders are weighing.

Skills You'll Learn in This User Interview Training Course

Design interviews tied to product decisions

  • Translate product decisions and the assumptions behind them into testable interview constructs
  • Draft interview guides that target trust, perceived value, barriers, and decision-making, rather than topical Q&A
  • Anticipate threats to validity including leading questions, hypothetical bias, and ambiguity before they reach a participant

Moderate interviews with rigor and adaptability

  • Probe to surface mental models, tradeoffs, and decision drivers under live conditions
  • Adjust direction in real time without sacrificing neutrality or rigor
  • Recognize signals (hesitation, contradiction, drift) and follow them productively

Evaluate, refine, and translate findings

  • Critique your own and others' interview questions using structured appraisal techniques
  • Diagnose what worked and what didn't in a real interview and revise your guide accordingly
  • Translate interview findings into decision-relevant insight your team can act on

How This Course Will Help Your UX Research Career

Lead research that drives product decisions

You'll be able to design and conduct user interviews that connect directly to the decisions your team is weighing, not just produce findings about what users said. That shift, from reporting back to making decisions easier, is what separates strategic researchers from procedural ones in the eyes of senior stakeholders.

Demonstrate applied interview expertise

You'll leave with hands-on experience designing a guide, pressure-testing it with peers, and conducting a real pilot interview. That's work you can speak to concretely in interviews and on a portfolio. Interview skill is notoriously hard to evidence; this course gives you something specific to point to.

Strengthen credibility across product and design

You'll be able to communicate interview findings as decision-relevant insight rather than verbatim summaries, and to defend the design choices behind your guide when a PM or designer pushes back. That kind of clarity changes how your work is treated, and how often you're invited into earlier-stage product conversations.

Who is this user interviews course for?

  • An early- to mid-career researcher who has run interviews and wants to move beyond following a script
  • A researcher whose interviews tend to produce summaries when they should produce arguments
  • A practitioner whose stakeholders are asking for sharper recommendations and better-defended interpretations

Angela Orlando, PhD

Bio
Dr. Angela Orlando is a cultural anthropologist and senior user experience researcher specializing in ethnographic methods and in-depth interviewing. She brings more than 15 years of industry experience conducting high-stakes qualitative research for leading technology organizations, including recent work with Google and ServiceNow.

Angela began her career as a journalist and spent years as a professor of anthropology, where she trained students in qualitative research design, field methods, and analytical writing. She is known for translating rigorous research practice into clear, teachable frameworks and for coaching researchers to develop confident, real-world interviewing skills.

In this course, Angela draws on her experience running hundreds of in-depth interviews across industries and cultures to help researchers move from “good questions” to truly strategic, decision-shaping conversations.

Learning Outcomes

By course completion, you will confidently:
  • Design interview guides that target product assumptions and decision needs, with constructs like trust, value, and risk (rather than topics) as the unit of question design.
  • Moderate live interviews with rigor and adaptability, probing for mental models and decision drivers while staying neutral and following meaningful signals.
  • Evaluate interview quality using structured critique techniques, identifying threats to validity such as leading questions, hypothetical bias, and ambiguity before they affect data.
  • Synthesize early themes and insight patterns from live sessions while the details are still fresh.

Course Syllabus

Week 1: Designing the Interview

This session moves directly into building. You'll work with the shared Atlas scenario to identify the product decisions in play, surface the assumptions behind them, and translate those assumptions into interview constructs that can be tested in conversation.

In class:


  • Icebreaker: Pair discussion on a recent moment you didn't trust a new app or tool, and what shaped that decision
  • Scenario introduction: Atlas product context, business assumptions, and the core research objective
  • Workshop (small groups): Identify the key product decisions, the assumptions behind them, and what must be learned to reduce risk
  • Hands-on exercise: Draft interview questions aligned to constructs (trust, usefulness, risk) rather than topics
  • Peer critique using "Keep / Kill / Fix": Structured rounds where participants critique each other's questions


Students leave with:

1. A rough moderator's guide for the Atlas scenario

2. A clear translation from product decisions, to assumptions, to testable interview constructs


Homework: Draft a complete moderator's guide for Atlas, ready to test and refine in Week 2.

Week 2: Hands-On Moderation

This session is where the guide gets sharper through pressure-testing and live practice. You'll critique each other's guides, revise on the spot, and run mock interviews in triads, rotating through moderator, participant, and observer roles.

In class:


  • Rapid critique rounds: Small groups review guides for bias, clarity, flow, and alignment to research goals
  • Live revision: Participants revise their guides in response to feedback
  • Triad mock interviews: One person moderates, one plays participant, one observes and notes. Rotate through all three roles.
  • Targeted observer feedback: Did the moderator probe effectively? Stay neutral? Follow signals?


Students leave with:


1. A revised moderator's guide that's been pressure-tested by peers

2. Direct moderation practice in a structured, feedback-rich setting

3. Specific feedback on probing, neutrality, and signal-following


Homework: Conduct a real pilot interview using your refined guide. Recruit anyone who fits a broad version of the target user, such as a frequent traveler or someone who uses apps to plan or decide. Note what surprised you and what didn't work.

Week 3: Diagnose and Strengthen

This final session connects interview practice to insight. You'll share what happened in your pilot, diagnose what worked and what didn't, and translate the experience into something a stakeholder could act on.


In class:


  • Pilot debrief: Share learnings about where the guide worked, where participants struggled, and where opportunities were missed
  • Hands-on synthesis: Each participant identifies one real insight, one change they'd make to their guide, and one risk or assumption the interview revealed
  • Peer feedback on insight: Group critique focused on clarity and actionability. Does the insight enable a decision, or just summarize what was said?


Students leave with:


1. A refined, real-world-tested moderator's guide

2. Real moderation experience and the learning that comes with it

3. A clear, decision-relevant insight from a real interview

More UX Research Courses

What do you want to learn next? Browse our complete catalog.