Nov 28 • Leo Hoar, PhD

AI for UX Research: How to Do Faster Qualitative Analysis Without Sacrificing Rigor

TL;DR: Learn how to leverage AI for UX research so that you gain efficiency without sacrificing quality. This post lays out an overall approach to using AI for qualitative analysis, one that stays mindful of the strengths and weaknesses of LLM tools.
If you're a UX researcher, you've probably been there: stakeholders want insights yesterday, your interview recordings are piling up, and the thought of re-watching each session feels like a fate worse than death.

AI tools seem to promise liberation from this misery. They claim to offer lightning-fast analysis that can free you from the transcription-and-coding treadmill.

But AI isn't a magic wand you wave at your data to conjure insights. And if you're thinking about taking an AI for UX research course hoping to hand off your analysis to an AI tool, this post will demonstrate how that may not be the best idea. 

Here's our thesis on using AI for qualitative analysis: an iterative approach that prompts AI tools to perform specific analytical tasks, especially ones like inductive coding that AI tools are adept at, will not only speed up analysis, but could even improve its depth and quality.

The principle we will preach in our courses and other content is to avoid letting AI tools make "the leap" from data to finding. That's where the problems typically start.

The Point of Using AI in UX Research

Replacing human researchers with AI tools is both impractical and undesirable.

Let's start with undesirable, 'cause that's easy: if we succeeded in this, we'd be 100% replaceable.

Luckily, the impracticality makes this unlikely. Using AI for the crucial analytic phase of a project defeats the entire point of doing qualitative research, which is to represent the depth and nuance of lived human experience.

When contemplating using AI for UX research, if we aim to replace the human researcher, we're removing access to the "lived" component. This encompasses many things that are essential to finding insight:

  • the holistic understanding of context
  • the ability to empathize and exercise embodied understanding
  • the creative, abductive interpretation of qualitative data


(Nerdy digression: philosophers use the term "qualia" to describe the essence of "what it's like" to experience something. So I like to think of "qual" as investigating "what it's like" to be something, to do something, etc.)

The point of using AI is to free the human researcher to spend more time on the distinctly human work by offloading the analytical tasks that the AI can do as well, if not better.

When Using AI in UX Research, Put Methodology First

The guiding principle we'll follow in this post, and in our course on AI for qualitative analysis, is simple: make choices on when and how to deploy AI tools guided by good methodology.

This means being conscious of every task or step you'd take in the ideal analysis so that you can decide, given your goals and constraints, what AI should do and what the human researcher needs to do.

When you don't do this, your work can fall victim to some of the greatest weaknesses of AI tools, which we'll discuss below, and you risk producing insights with little or no foundation in your data.

AI UX Research Tools and "The Leap"

If you ask an AI tool to analyze some transcripts you've uploaded, alongside some context about your project, it will most likely produce a set of themes that looks extremely convincing.

"Wow!" you'll think, this just saved me hours of work.

But when you look closely at the output, you'll start to notice problems. The biggest one is that the tool will draw conclusions that are unsupported by the data.

As we'll explore below, LLMs are trained to (1) perform "abstractive summarization" on enormous datasets, in order to (2) be immediately helpful to the user. So it is a deep tendency of the tool to move from raw textual data to a summary.

Unfortunately, many AI-supported research tools have "the leap" baked into their process. Many of them produce summaries unsolicited.

Real Thematic Analysis is a Multi-Step Process

The qualitative analysis method we practice most often in UX research is thematic analysis, and it has its origins in academic research. Of course, we often practice a much-abbreviated version. But the methods are very similar. The process has the following steps:

  1. Transcription
  2. Data cleaning
  3. Coding
  4. Memo writing
  5. Theme generation
  6. Report writing


Here's the weird paradox: even if you haven't been this rigorous about your qualitative analysis to date, understanding these steps now will actually enable you to harness AI tools more effectively. You will slow down in order to be able to speed up.

The Strategic Partner Model: How to Harness AI Strengths and Mitigate Weaknesses

What I call the "Strategic Partner Model" is really nothing more than applying to your AI partner the same standards of collaboration you'd apply to your human partners.

It boils down to this: when anyone else participates in your analysis, whether a peer or a more experienced researcher, you would never give them free rein with your data and let them formulate conclusions based on their first exposure.

You'd create a sequence of steps and provide some kind of guidelines.

Moreover, you'd assign tasks with attention to the strengths and weaknesses of each helper. A junior researcher with very little coding experience might practice doing in vivo codes, but they're probably not the best candidate to be drawing conclusive themes from the data.

Strategic Partner Principle #1: Plan Iterative Steps

All UX professionals are exposed to the power of iteration in design. Simply by doing one draft, stepping back, and returning to revise, we improve the quality of that design. If we build in pauses for feedback, even better.

Breaking qualitative analysis into discrete steps is beneficial for the same reason. But it's even more essential when building in AI tools because it enables you to match the AI's tasks to its strengths, match the human researcher's tasks to their strengths, and take steps to remedy the weaknesses of both.

At each task, you can decide:

  • Who leads this, me or an AI tool?
  • If it's an AI tool, which one do I use?
  • If it's an AI tool, what role do I play?
  • If I'm leading this task, how can the AI tool enhance or speed up my work?
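To make this concrete, here's a minimal sketch of what such a workflow map might look like in code, built on the six thematic-analysis steps above. The owner assignments and support notes are illustrative, not prescriptions:

```python
# Illustrative sketch of a workflow map: each analysis step gets an explicit
# owner plus a note on how the other party supports it. The assignments and
# support notes below are examples, not prescriptions.

WORKFLOW = [
    {"step": "Transcription",    "lead": "AI",    "support": "human spot-checks names and domain jargon"},
    {"step": "Data cleaning",    "lead": "human", "support": "AI flags likely transcription errors"},
    {"step": "Coding",           "lead": "AI",    "support": "human writes the codebook and audits samples"},
    {"step": "Memo writing",     "lead": "human", "support": "AI retrieves supporting quotes on request"},
    {"step": "Theme generation", "lead": "human", "support": "AI proposes candidates to interrogate"},
    {"step": "Report writing",   "lead": "human", "support": "AI drafts summaries of already-coded data"},
]

for task in WORKFLOW:
    print(f"{task['step']:<16} lead: {task['lead']:<6} support: {task['support']}")
```

Writing the map down, even this crudely, forces the "who leads this?" decision at every step instead of letting the AI's defaults decide for you.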

Strategic Partner Principle #2: Assign AI Tasks Based on Understanding of Strengths and Weaknesses

Just as you would intuitively do with an all-human team, assign tasks with full consciousness of who excels at what, and who is likely to struggle or produce unfounded insights.

Remember that this is not only about evaluating the weaknesses of the AI tool(s). If you've done this for a while, you know that human researchers can introduce even worse biases into the process of analysis.

So this process of building a workflow where tasks are assigned to the best candidates can also help remedy the weaknesses of human researchers. For example, AI will never get fatigued from coding and start to miss important insights, nor will it see the whole project through the lens of the most recent interview (or the one that stood out the most).

Strategic Partner Principle #3: Build in Overlap, Redundancy, and Loops

This final principle both provides a safeguard against the AI's tendencies and also enriches your qualitative analysis.

Qualitative analysis relies on numerous acts of interpretation. Each researcher (including the AI tools) brings a lens through which they see the data in front of them. For the AI, its training process and training data constitute this lens.

Rather than take anyone's output at face value, build in moments to interrogate it, challenge it, and revise it.

AI chatbots are particularly good for this precisely because of the chatbot format. For example, you can use the AI to check or challenge its own work with a prompt such as "Take this theme that you surfaced '[theme]' and find conflicting examples within the transcripts."
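Here's a rough sketch of that self-challenge loop as a script. The call_llm helper is a placeholder for whichever model or API you use; pasting the same prompt into a chat window works just as well:

```python
# A minimal sketch of the self-challenge loop. call_llm is a placeholder:
# swap in your LLM API of choice, or paste the prompt into a chat tool.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM of choice")

def challenge_theme(theme: str, transcripts: str) -> str:
    """Ask the model to argue against a theme it previously surfaced."""
    prompt = (
        f"Take this theme that you surfaced: '{theme}'.\n"
        "Find conflicting examples within the transcripts below: quotes that "
        "contradict, complicate, or limit the theme. Quote participants "
        "verbatim and name the transcript each quote comes from. If you find "
        "no conflicting evidence, say so explicitly rather than inventing any.\n\n"
        f"TRANSCRIPTS:\n{transcripts}"
    )
    return call_llm(prompt)
```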

Understanding LLM Strengths and Weaknesses

Grounding ourselves in what LLMs do well, and understanding their weak points, enables us to assign analytical tasks that hook into strengths, and either avoid their weaknesses or build in redundancies so that we don't unthinkingly rely on them.

Where AI Excels in Qualitative Analysis

Large language models are pattern-recognition machines trained on massive datasets. This gives them some genuine superpowers for UX research:

  • Identifying themes inductively: LLMs excel at pattern detection across large datasets, making them effective at uncovering themes from qualitative data. Their architecture allows them to process relationships between words and concepts, which aligns well with the work of identifying codes or themes.
  • Uncovering nuanced subthemes: Research shows that LLMs can successfully identify niche topics and specific nuances that human coders sometimes miss, particularly when dealing with large volumes of text.
  • Deductive coding support: Multiple studies demonstrate that LLMs perform consistently—often more consistently than human coders—when applying predefined coding schemes, achieving substantial agreement levels (around 80%) with human researchers. To do this, they do need extensive, detailed instruction.
  • Processing volume: They can handle massive amounts of text without fatigue, enabling analysis of datasets that would be impractical for manual coding alone.

Where AI Can Compromise Qualitative Analysis

Perhaps the biggest weakness of LLMs for UX research: they are also generalizing machines. And analysis is not the same as summarization. The same training that makes LLMs effective at certain tasks creates serious problems for qualitative research:

  • Overgeneralization bias: A 2025 Royal Society study testing 10 prominent LLMs found they overgeneralized in 26-73% of cases when summarizing scientific texts. In direct comparison with human-authored summaries, LLM summaries were nearly five times more likely to contain overgeneralizations.
  • Loss of participant voice: LLMs perform abstractive summarization, rephrasing content in their own words, which can erase the participant's authentic voice and the subtle language choices that often contain crucial insights.
  • Difficulty with divergent themes: Research indicates LLMs tend toward convergence, smoothing over tensions and conflicts within themes. They struggle to capture the complexity and ambiguity that characterizes rich qualitative data.
  • Missing real-world context: LLMs lack understanding of organizational context, domain-specific knowledge not documented online, and the situated expertise that researchers bring. Studies show they rely on memorization and pattern matching rather than genuine conceptual understanding.
  • Hallucination and overconfidence: LLMs are trained to provide helpful responses, which usually means erring on the side of certainty. They will give confident answers even when information is absent or uncertain, and they tend to defer to the user's framing (a related tendency called "sycophancy"). Both can spell disaster in a research context.

How to Use AI for UX Research: Capitalize on Strengths, Anticipate Weaknesses

Understanding how LLMs work helps us identify precisely where we can feel safe deploying them, and where we need to be more cautious. Based on our understanding of both qualitative analysis and how LLMs work, here are some general guidelines for using AI for qualitative analysis:

1. Use AI liberally to assist with inductive coding

LLMs are built to recognize patterns in massive amounts of textual data. This enables them to excel at identifying codes that human researchers miss. Use the LLM as a partner whenever you decide to code inductively (develop codes organically out of the interview data).

With one crucial caveat:

LLMs will never have anything close to your understanding of the business context. Never rely on them to produce all of your codes.
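As an illustration, here's one way an inductive-coding prompt might be structured. The wording is ours, not a canonical template; the point is the constraints that keep the model at the coding step rather than leaping ahead:

```python
# Sketch of an inductive-coding prompt. The constraints matter more than the
# exact wording: codes must be grounded in verbatim quotes, and the model is
# told not to leap ahead to themes or conclusions.

def inductive_coding_prompt(transcript: str) -> str:
    return (
        "You are assisting with the CODING step of a thematic analysis. "
        "Do not summarize, do not propose themes, do not draw conclusions.\n"
        "For the transcript below, propose candidate codes. For each code give:\n"
        "1. A short label (2-5 words)\n"
        "2. A one-sentence definition\n"
        "3. Every verbatim quote that supports it\n"
        "Flag any code supported by only one quote as 'thin'.\n\n"
        f"TRANSCRIPT:\n{transcript}"
    )
```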

2. AI can apply deductive codes across a huge dataset, but requires detailed instruction and oversight

The prospect of applying a set of pre-given codes to hours of interview footage can be daunting, if not downright discouraging.

LLMs can help alleviate this work, but not eliminate it. This kind of task does not play to their strengths. They are built to detect patterns and to summarize in their own words. Deductive coding is essentially the opposite of this: taking a given concept and finding instances of it.

To apply deductive codes effectively, LLMs need to be given a codebook with detailed positive examples (what fits the code) and negative examples (what does not).

Any AI tool you rely on for this analytic task should be tested extensively and should supplement, not replace, the work of the human researcher.
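To make that concrete, here's one possible shape for a codebook entry and the prompt built from it. The field names and the sample code are invented for illustration:

```python
# One possible shape for a codebook entry with the positive and negative
# examples described above. The fields and the sample code are illustrative.

from dataclasses import dataclass, field

@dataclass
class CodebookEntry:
    label: str
    definition: str
    positive_examples: list[str] = field(default_factory=list)  # what fits
    negative_examples: list[str] = field(default_factory=list)  # near-misses

onboarding_friction = CodebookEntry(
    label="onboarding friction",
    definition="Participant describes difficulty completing first-run setup.",
    positive_examples=[
        "I couldn't figure out where to add my team after signing up.",
    ],
    negative_examples=[
        "The dashboard felt cluttered once I'd been using it a while.",
    ],
)

def deductive_coding_prompt(entry: CodebookEntry, transcript: str) -> str:
    fits = "\n".join(f"- {ex}" for ex in entry.positive_examples)
    misses = "\n".join(f"- {ex}" for ex in entry.negative_examples)
    return (
        f"Apply the code '{entry.label}' to the transcript below.\n"
        f"Definition: {entry.definition}\n"
        f"Examples that FIT the code:\n{fits}\n"
        f"Near-misses that do NOT fit:\n{misses}\n"
        "Return every matching passage verbatim. If nothing matches, return "
        "'NO MATCHES' rather than stretching the code.\n\n"
        f"TRANSCRIPT:\n{transcript}"
    )
```

Note the explicit permission to return "NO MATCHES": without it, the model's helpfulness training pushes it to stretch codes over weak evidence.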

3. Human Researchers Own the "So What"

LLMs will willingly make sweeping judgments about the overall significance of the data you have asked them to analyze. But it will be impossible to tell how much of any such judgment comes from your data versus the data the LLM was trained on.

In addition, LLMs only have access to what is written. The context of your business entails so much that could never be written down.

For these reasons, researchers should always own the final "so what" statement. They may use AI tools to suggest possibilities.

Example Workflows of UX Researcher + AI Partnership

1. Human-first induction → AI deduction

You immerse yourself in a subset of interviews and generate initial codes. You then create a codebook with detailed definitions and examples, feed it to the AI, and ask it to apply them more broadly. This preserves your interpretive authority while letting AI scale execution.

2. AI-first induction → Human deduction

AI scans a large corpus and proposes candidate themes. You then return to the data to validate, refine, split, or discard them.

This works well under tight timelines—as long as you treat AI’s output as a hypothesis, not a conclusion.

3. Parallel induction → Structured comparison

You and AI independently code the same data. You then compare overlap, divergence, and blind spots.

This is one of the fastest ways to learn where AI is trustworthy and why it fails.
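A minimal sketch of that comparison step, assuming you've reduced both passes to sets of code labels:

```python
# Sketch of the structured-comparison step. Exact label matching is naive
# (synonymous codes will register as divergence), so treat the two "only"
# lists as starting points for manual review, not verdicts.

def compare_code_sets(human_codes: set[str], ai_codes: set[str]) -> dict:
    overlap = human_codes & ai_codes
    union = human_codes | ai_codes
    return {
        "overlap": sorted(overlap),
        "human_only": sorted(human_codes - ai_codes),  # candidate AI blind spots
        "ai_only": sorted(ai_codes - human_codes),     # candidate noise, or codes you missed
        "jaccard": len(overlap) / len(union) if union else 0.0,
    }

print(compare_code_sets(
    {"pricing confusion", "trust in support", "onboarding friction"},
    {"pricing confusion", "onboarding friction", "feature discoverability"},
))
```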

4. Parallel deduction → Boundary-testing

You both apply a predefined codebook, then look closely at disagreements.

Disagreements aren’t errors—they’re signals that your constructs need tightening or your assumptions need revisiting.

How to Begin Integrating AI into Qualitative Analysis

If you're ready to incorporate AI for UX research effectively:

  1. Ground yourself in qualitative method first: Understand coding, thematic analysis, and the difference between summarization and insight generation.
  2. Map your workflow: Decide which steps will be done by whom, and in what order.
  3. Evaluate intercoder reliability: Conduct the same testing you would when bringing on human helpers to do analysis. Have both you and the AI code the same interview, then compare (see the sketch after this list). Where do you agree? Where do you differ? Why?
  4. Design for analysis: Structure your discussion guides, notes, and timestamps with efficient analysis in mind.
  5. Maintain control of strategy: Your brain contains the approach; AI carries it out.
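For the reliability check in step 3, here's a minimal sketch using scikit-learn's cohen_kappa_score; the segment-level labels are invented for illustration:

```python
# Minimal intercoder-reliability check for step 3, using scikit-learn's
# cohen_kappa_score. Each position is one transcript segment; the labels
# below are invented for illustration.

from sklearn.metrics import cohen_kappa_score

human = ["pricing", "onboarding", "pricing", "trust", "onboarding", "trust"]
ai    = ["pricing", "onboarding", "trust",   "trust", "onboarding", "pricing"]

kappa = cohen_kappa_score(human, ai)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.61-0.80 is often read as "substantial"

# Inspect the disagreements, not just the score:
for i, (h, a) in enumerate(zip(human, ai)):
    if h != a:
        print(f"Segment {i}: you coded '{h}', AI coded '{a}' -- why?")
```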

Our AI for UX Research Course

We designed our AI for qualitative analysis course with exactly these principles in mind. It's not a course about replacing your judgment with AI. It's a course about building a workflow where you strategically deploy AI where it will increase quality and speed. Here's what makes this approach different:

Method comes first, AI comes second. The course starts with the foundations of rapid qualitative analysis—the rigorous, defensible approaches developed by researchers like Maxwell, Patton, and Miles & Huberman. Only after establishing this foundation do we introduce AI tools. Because here's what I've learned from years of qualitative research: you can't evaluate an AI output if you don't know what good analysis looks like in the first place.

You'll work with real data. Each week includes hands-on practice with actual interview datasets. You'll code data both manually and with AI assistance, then compare the results. This isn't just about learning prompts; it's about developing the critical eye to know when AI nails it and when it misses crucial nuance.

The course builds to something you can use immediately. Your final deliverable isn't a paper or a theoretical framework; it's your own customized rapid analysis workflow. You'll walk away with:

  • An end-to-end workflow you can use right away
  • A clear understanding of the abilities and limitations of the tools you've chosen
  • Practical prompt libraries tested on real qualitative data


It respects the complexity of your work. This isn't "10 ChatGPT prompts for instant insights." Over three weeks of live online sessions, we dig into the real tensions you face: balancing speed with stakeholder credibility, maintaining transparency about AI's role, managing team dynamics in collaborative analysis, and designing research that sets you up for efficient analysis from the start. 

You'll learn from someone who's been in the trenches. I built this course after years of conducting qualitative research in fast-paced environments where "we needed it yesterday" was the standard timeline. I've evaluated what AI does well and not so well, and I've experienced the power of using it strategically. This course distills those lessons into a practical, immediately applicable framework.