AI for UX Research Course Syllabus
Overview
Day One: Building an AI-Supported Data Transformation Strategy
Day One Goals
-Establish what makes great qualitative analysis
-Introduce AI fundamentals for analysis
Content Overview
Two Pillars of Qualitative UX Research: Good research must be both trustworthy (credible, confirmable) and useful (telling stakeholders something genuinely new that informs decisions).
Data Transformation Strategy: Every methodological "move" is an interpretation that deepens insight and builds trust; the more transformations (steps, iterations, perspectives) you layer in, the stronger the research.
Types of Transformational Moves: Three categories drive quality: Method Steps (intentional tasks), Iterations (adapting based on learning), and Perspectives (additional viewpoints, including AI as a virtual stakeholder).
Thematic Analysis Fundamentals: Thematic analysis works by reducing data, grouping similar chunks, and articulating the connecting thread — moving from raw data through coding and theming to final insight.
Why LLMs Are Well-Suited for Thematic Analysis: LLMs are sophisticated pattern-matching machines that work through tokenization, vector embeddings, and attention layers, making them naturally aligned with the categorization work at the heart of thematic analysis.
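To make the tokenization-and-embeddings idea concrete, here is a deliberately toy sketch (not how a production LLM actually works — real models use learned subword tokenizers and dense neural embeddings) showing why vector similarity naturally surfaces the kind of groupings thematic analysis depends on. All names and quotes are invented for illustration.

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Naive whitespace tokenizer -- real LLMs use subword schemes like BPE."""
    return text.lower().split()

def embed(tokens: list[str], vocab: list[str]) -> list[float]:
    """Bag-of-words count vector: a crude stand-in for learned embeddings."""
    counts = Counter(tokens)
    return [float(counts[w]) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: higher means the two texts share more vocabulary."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Invented participant quotes, loosely themed on AV airport pickup.
quotes = [
    "the pickup zone at the airport was confusing",
    "i could not find the pickup zone at the airport",
    "the app crashed when i tried to pay",
]
vocab = sorted({t for q in quotes for t in tokenize(q)})
vectors = [embed(tokenize(q), vocab) for q in quotes]

# Quotes 0 and 1 share airport/pickup vocabulary, so they land closer
# together than either does to the payment quote -- the same clustering
# instinct that drives grouping "similar chunks" in thematic analysis.
print(cosine(vectors[0], vectors[1]))  # higher
print(cosine(vectors[0], vectors[2]))  # lower
```

The point is not the arithmetic but the intuition: "these two quotes belong together" is, at bottom, a similarity judgment, which is exactly what embedding-based models are built to compute.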
Activities
Your Current Analysis Workflow: Participants mapped their existing process from data collection to finished insights in FigJam, as granularly as possible, including any tools (AI or otherwise).
Early Opportunities: Participants reflected on which types of transformations were well- or under-represented in their current workflow, and identified places to add more.
Experiment: Role Prompting: Participants tested two slightly different prompts ("UX researcher" vs. "qualitative researcher") on the same transcript to observe how small role changes affect AI output.
Day Two: Designing a Strategy to Get the Most out of AI
Day Two Goals
-Build practical skill in prompting LLMs for qualitative analysis
-Develop a strategic framework for deciding how and when to use AI in thematic analysis
Content Overview
LLM Features That Impact Qual Analysis: Four key characteristics shape how LLMs perform in research contexts: their probabilistic (non-human) reasoning, limited context windows, the unconscious influence of training data, and linguistic sensitivity to word choice — each with specific prompting implications.
Context Window Management: LLMs can only process a finite amount of data per session, and exceeding that window causes the model to skim or forget earlier framing; researchers must strategically reduce data (via RAP sheets, timestamping, matrices, etc.) and manage memory settings to maintain analytical integrity.
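As a minimal sketch of the budgeting logic behind context-window management (the 4-characters-per-token heuristic and the function names are assumptions for illustration, not part of the course materials or any specific model's tokenizer):

```python
def approx_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token. Real counts vary by tokenizer."""
    return max(1, len(text) // 4)

def chunk_transcript(turns: list[str], budget: int) -> list[list[str]]:
    """Group speaker turns into chunks that each fit a token budget,
    so no single session overflows the model's context window and
    forces it to skim or forget earlier framing."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for turn in turns:
        cost = approx_tokens(turn)
        if current and used + cost > budget:
            chunks.append(current)   # flush the full chunk
            current, used = [], 0
        current.append(turn)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

In practice a researcher would pair each chunk with the same framing preamble (role, project context, prior codes) so every session starts from the same analytical footing.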
Prompting Best Practices: Effective prompts front-load role assignment and project context, break tasks into discrete analytic steps, use cognitive prompting to counteract the "Golden Retriever" tendency to jump to conclusions, and require chain-of-thought rationale and data-grounded evidence.
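The structure described above — role and context front-loaded, discrete steps, a rationale requirement — can be sketched as a small prompt builder. This is an illustrative helper, not a template from the course; the function name and wording are assumptions.

```python
def build_analysis_prompt(
    role: str,
    project_context: str,
    steps: list[str],
    require_rationale: bool = True,
) -> str:
    """Assemble a structured prompt: role assignment and project context
    up front, the task broken into discrete analytic steps, and
    (optionally) a chain-of-thought instruction grounded in the data."""
    lines = [
        f"You are a {role}.",
        f"Project context: {project_context}",
        "",
        "Work through the following steps one at a time:",
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    if require_rationale:
        lines += [
            "",
            "Before answering each step, explain your reasoning",
            "and quote the specific data that supports it.",
        ]
    return "\n".join(lines)

prompt = build_analysis_prompt(
    role="qualitative researcher",
    project_context="AV rideshare airport pickup study",
    steps=[
        "Summarize each transcript in three sentences",
        "Propose three candidate codes with supporting quotes",
    ],
)
print(prompt)
```

Forcing the steps into an explicit numbered sequence is the programmatic version of the "counteract the Golden Retriever" advice: the model has to show its work before it fetches a conclusion.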
Thematic Analysis Methodology: The session covered the full pipeline from raw data to insight: data reduction, inductive vs. deductive coding, codebook construction, theme development, and translating themes into business insights — with clear articulation of where AI is strong (pattern recognition/deductive coding) versus weak (contextual significance/insight generation).
AI Strategy Models: Three collaboration frameworks are introduced — the Apprentice Model (researcher leads, AI follows), the AI Associate Model (AI does heavy lifting with researcher checkpoints), and the CHALET-style model (human and AI code in parallel, disagreements are treated as generative analytical moments rather than errors).
Activities
Context Prompt Comparison: Participants ran two prompts on the same transcript using different project context (a broad AV project vs. a specific AV rideshare business goal) to observe how framing shifts the AI's summary output.
Inductive Coding: Human vs. AI: Participants first read 1-2 YouGo transcripts and developed 3 codes themselves to answer the research question "What are barriers to AV airport pickup?", then ran a structured prompt asking the AI to do the same, with role assignment, rationale, and example quotes. Participants then graded the AI's output as if reviewing a fellow researcher's work.
Day Three: Building Master Prompts and AI Agents that Execute Your Analysis Strategy
Day Three Goals
-Design a master prompt that reflects your analysis strategy
-Learn to build reusable AI agents that add speed and efficiency to your workflow
Content Overview
AI<>Human Interaction Patterns: We'll review the two interaction patterns that provide a starting point for building an AI analysis strategy:
-The Apprentice Model: preserves the initial shaping power of the researcher, and provides review points throughout the process where the researcher can correct and push back against the AI's output.
-The Associate Model: harnesses the LLM's strength at pattern finding, using the LLM up front for discovery, while still preserving a central role for the researcher.
Review Master Prompt Templates: Using a master prompt enables you to introduce strong framing over your session with the LLM before it digs into your data. We'll discuss how to use master prompts to lay down the interaction pattern you prefer (Apprentice, Associate, or hybrid).
AI Agents: We'll explore the wild world of AI agents, which present a big advantage over the chatbot format: they separate instructions and context into a configuration stage, which frees up context-window tokens for the data itself. Plus, they are reusable across multiple projects, which creates a ton of efficiency.
AI Agents as Solution to the Context Problem: Once we understand agents from a technical standpoint, we'll explore the ways they can help solve the problem that plagues chatbots: context awareness. Agents can do extensive research to help build their understanding of context, which means we can use them more effectively to generate insights that will land with stakeholders.
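The instructions-vs.-data separation agents provide can be modeled as a small configuration object — set once, reused across projects, with only the raw data changing per session. This is a hypothetical sketch of the pattern, not the configuration format of any particular agent platform; hosted platforms typically store the configuration server-side rather than resending it.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisAgent:
    """Configuration-stage fields: written once, reused across projects."""
    name: str
    role: str                    # persistent role assignment
    instructions: list[str]      # the analysis strategy, step by step
    output_rules: list[str] = field(default_factory=list)

    def session_prompt(self, data: str) -> str:
        """Per-session call: only the data payload changes between runs;
        the strategy lives in the stored configuration above."""
        header = [f"Role: {self.role}", "Instructions:"]
        header += [f"- {step}" for step in self.instructions]
        header += [f"- {rule}" for rule in self.output_rules]
        return "\n".join(header) + "\n\nData:\n" + data

# Configure once...
coder = AnalysisAgent(
    name="inductive-coder",
    role="qualitative researcher",
    instructions=["Reduce the data", "Code inductively", "Flag ambiguous chunks"],
    output_rules=["Cite a supporting quote for every code"],
)
# ...then reuse across sessions, swapping only the data.
print(coder.session_prompt("P1: I waited 20 minutes at arrivals."))
```

The design choice the sketch illustrates: because the strategy is frozen in configuration, every project gets the same analytical framing, which is also what makes agent output easier to audit.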
Activities
Draft your own master prompt: Participants will work on drafting the steps of their analysis strategy in the prompt format given, then practice at least one run-through with our sample dataset.
Build an AI Agent: Using their tool of choice, participants will configure an AI agent and test on our sample dataset.