User Research Study Guide
📖 Core Concepts
User Research – Systematic study of users’ behaviors, needs, and motivations using interviews, surveys, usability tests, etc.
Iterative Nature – Research → insights → design proposals → prototype → test → repeat; continues even after launch.
Human‑Centered Perspective – Empathy‑driven; data are “humanized” to keep real people at the center of decisions.
Research Types
Generative (Exploratory) – Discover what problems exist; early‑stage, open‑ended methods (interviews, observations).
Descriptive – Define who has the problem and what it looks like; characterizes known issues.
Evaluative – Test whether a solution works; usability testing, task performance, subjective ratings.
Causal – Explain why a behavior occurs after launch; A/B testing, usage analytics.
Nielsen Norman Three‑Dimensional Framework
Attitudinal vs. Behavioral – What users say they think vs. what they actually do.
Qualitative vs. Quantitative – Open‑ended, rich data vs. numerically coded data for statistics.
Context of Use – Natural (real‑world) setting vs. scripted laboratory setting.
Common Deliverables – Research reports, personas, journey maps, mental‑model diagrams, low‑fidelity wireframes, storyboards.
Business Benefits – Saves redesign costs, provides evidence for stakeholder decisions, creates desirable experiences.
---
📌 Must Remember
User research is core to user‑centered design and can occur at any development stage.
Generative → Descriptive → Evaluative → Causal is the typical progression.
Attitudinal data → beliefs/opinions; Behavioral data → actual actions.
Qualitative = depth; Quantitative = breadth & statistical power.
Natural context = high ecological validity, limited clarification; Lab context = high control, can ask follow‑ups.
Personas are archetypes derived from data, not real individuals.
A/B testing is a causal method, not an evaluative usability test.
---
🔄 Key Processes
Define Problem – Use generative methods (interviews, observations) to uncover pain points.
Synthesize Insights – Identify target users, rank problem importance.
Create Prototypes – Low‑fi sketches or wireframes based on insights.
Evaluative Testing – Conduct usability tests with representative users; iterate on design.
Launch & Observe – Deploy solution; collect behavioral data (analytics, click‑rates).
Causal Analysis – Run A/B tests or usage studies to explain adoption/rejection patterns.
---
🔍 Key Comparisons
Generative vs. Descriptive
Goal: discover problems vs. define characteristics of known problems.
Methods: open interviews/observations vs. surveys/secondary data.
Evaluative vs. Causal
Goal: test solution usability vs. explain why users behave a certain way post‑launch.
Methods: usability testing vs. A/B testing, analytics.
Attitudinal vs. Behavioral
Attitudinal: “I find this confusing.”
Behavioral: Click‑through logs show users avoid the feature.
Qualitative vs. Quantitative
Qualitative: rich narratives, themes.
Quantitative: numeric ratings, statistical significance.
Natural Context vs. Lab Context
Natural: users in their own environment, minimal researcher interference.
Lab: controlled tasks, ability to probe in real time.
---
⚠️ Common Misunderstandings
“Research only happens before design.” – It’s iterative; research continues after launch (causal).
“Surveys = quantitative.” – Open‑ended survey responses are qualitative.
“A/B testing is just UI tweaking.” – It’s a causal method to uncover why a change works or not.
“Personas are actual users.” – Personas are fictional composites built from real data.
“Qualitative data isn’t trustworthy.” – It provides essential context that numbers alone can’t capture.
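To make the A/B point concrete: a causal claim from an A/B test rests on a significance check, commonly a two‑proportion z‑test on the two variants' conversion rates. The sketch below uses made‑up numbers purely for illustration; real analyses also need planned sample sizes and protection against peeking.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: do variants A and B have different conversion rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf, folded into a two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant B converted 120/1000 users vs. A's 100/1000.
z, p = two_proportion_z_test(100, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")   # p > 0.05 here: the difference may be chance
```

If the p‑value stays above the chosen threshold (e.g. 0.05), the observed difference could plausibly be noise, and the causal claim about the change is not supported.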
---
🧠 Mental Models / Intuition
Compass Model – Think of research as a compass: generative points the direction, descriptive maps the terrain, evaluative checks the route, causal explains detours.
3‑Axis Graph – Visualize the Nielsen Norman framework as a 3‑dimensional cube (Attitudinal ↔ Behavioral, Qualitative ↔ Quantitative, Natural ↔ Lab). Each research method lands in a specific region of the cube.
Funnel Analogy – Start wide with generative (many questions), narrow through descriptive, focus on specific solutions in evaluative, and finally pinpoint cause‑effect in causal.
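One way to internalize the cube is to treat each method as a point on the three axes. A minimal sketch follows; the pole placements below are illustrative assumptions of this guide, not fixed facts (e.g., a remote usability test shifts toward the natural end of the context axis).

```python
# Illustrative placements of common methods at the poles of the NN/g three axes;
# real methods can sit anywhere along each axis, not only at the extremes.
METHODS = {
    # method             (attitudinal/behavioral, qual/quant,     natural/lab)
    "interview":         ("attitudinal", "qualitative",  "lab"),
    "survey":            ("attitudinal", "quantitative", "natural"),
    "usability test":    ("behavioral",  "qualitative",  "lab"),
    "a/b test":          ("behavioral",  "quantitative", "natural"),
    "field observation": ("behavioral",  "qualitative",  "natural"),
}

AXES = ("data", "kind", "context")

def find_methods(**wanted):
    """Return methods whose axis values all match, e.g. data='behavioral'."""
    return [name for name, values in METHODS.items()
            if all(values[AXES.index(axis)] == v for axis, v in wanted.items())]

print(find_methods(data="behavioral", context="natural"))
# → ['a/b test', 'field observation']
```

Querying the table this way mirrors exam reasoning: given the kind of evidence needed, which methods occupy that corner of the cube.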
---
🚩 Exceptions & Edge Cases
Time‑boxed projects may skip full generative research and rely on secondary data, but still need at least a quick descriptive scan.
Small user populations → mixed‑methods (qualitative depth + limited quantitative) to compensate for low statistical power.
Remote usability → still evaluative, but context of use shifts toward “natural” despite being virtual.
Highly regulated domains (e.g., medical devices) may require stricter ethical consent even for informal interviews.
---
📍 When to Use Which
| Situation | Best Research Type | Reason |
|-----------|-------------------|--------|
| You don’t know what problems exist | Generative | Open‑ended methods uncover hidden needs. |
| You have a known problem & need to profile users | Descriptive | Quantifies who, how many, and context. |
| You have a prototype to validate | Evaluative | Directly measures task success & usability. |
| Product is live & you need to explain adoption trends | Causal (A/B, analytics) | Isolates variables that cause behavior. |
| Need rich insights (why users feel a way) | Qualitative (interviews, diary) | Captures attitudes, motivations. |
| Need statistical confidence (how many) | Quantitative (surveys, click‑rates) | Allows hypothesis testing, generalization. |
| Testing in real environment is critical | Natural context methods | High ecological validity. |
| Testing specific interactions under control | Lab context methods | Precise measurement, ability to probe. |
---
👀 Patterns to Recognize
Iterative language (“repeat”, “refine”, “loop”) signals the need for another research cycle.
“Attitudinal” paired with surveys → expect Likert‑scale or open comments.
“Behavioral” paired with analytics → look for click‑rates, time‑on‑task metrics.
“Context of Use” often precedes a decision about remote vs. in‑lab testing.
“Evidence‑based” in business cases signals a quantitative deliverable (report, dashboard).
---
🗂️ Exam Traps
Choosing A/B testing for early‑stage concept validation – A/B is causal; early ideas need generative or evaluative methods.
Assuming personas equal user interviews – Personas are synthesized artifacts, not raw data.
Selecting “lab context” because it’s easier – May lose ecological validity; exam may ask which context is appropriate for “real‑world adoption”.
Mixing up attitudinal vs. behavioral – A statement like “Users say they love the feature” is attitudinal; actual usage statistics are behavioral.
Thinking qualitative data can’t be reported – Deliverables like journey maps and mental models are qualitative outputs.