When you conduct a user interview, you believe you’re gathering objective insights. You’re not. Every interview is a filtered construction—a conversation where both you and the participant bring preconceptions that shape what gets said, what gets heard, and what gets remembered. The best researchers in the world aren’t immune to this. The difference is they know the specific ways their data gets distorted, and they have systems to counteract it.
Cognitive biases in user interviews aren’t abstract psychology concepts. They’re concrete, identifiable distortions that will cause you to build the wrong product, prioritize the wrong features, and draw the wrong conclusions from the exact same data that another researcher would interpret correctly. If you’re not actively fighting these biases, your interview data is compromised before you’ve even transcribed it.
This article covers the five cognitive biases most likely to corrupt your user interview data. For each, I’ll explain the mechanism, show you a real interview scenario where it distorts findings, and give you practical mitigation strategies you can implement immediately.
Confirmation bias is the tendency to seek, interpret, and recall information in a way that validates pre-existing beliefs while ignoring contradictory evidence. In user interviews, it operates on two levels: the interviewer’s interpretation and the participant’s responses.
As an interviewer, you arrive with hypotheses. Maybe you’ve already decided the onboarding flow is the problem. Every answer from the participant gets filtered through this lens. When they describe their workflow, you unconsciously direct follow-up questions toward confirming your theory. “Tell me more about the onboarding” becomes “Walk me through the specific moment you got confused during onboarding”—the framing already assumes confusion exists.
The research is stark. A study published in Psychological Science found that researchers who expected certain results were significantly more likely to interpret ambiguous data as supporting their hypothesis. Another study in the Journal of Consumer Research demonstrated that interviewers generated more hypothesis-confirming questions than disconfirming ones by a factor of nearly three to one.
Here’s how this plays out in practice. You’re interviewing a user about a checkout flow. Your hypothesis is that the form fields are confusing. The participant says, “I mostly just scan the page and click through.” A confirmation-bias-free response would be: “What do you mean by scan? Walk me through what you’re looking at.” But if you’re under the influence of confirmation bias, you hear “scan” and immediately follow with “Is it because the labels aren’t clear?” You’ve now put words in their mouth and extracted false confirmation.
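One lightweight guard is to audit your interview script for leading phrasings before the session. A minimal sketch in Python; the phrase list is a small illustrative sample, not a validated instrument:

```python
# Flag scripted questions whose phrasing presupposes an answer.
# This phrase list is illustrative only; extend it with the
# leading patterns your own team tends to produce.
LEADING_PHRASES = [
    "is it because",
    "don't you think",
    "wouldn't you agree",
    "how confusing was",
]

def audit_questions(questions):
    """Return the questions that contain a known leading phrase."""
    flagged = []
    for q in questions:
        lowered = q.lower()
        if any(phrase in lowered for phrase in LEADING_PHRASES):
            flagged.append(q)
    return flagged

script = [
    "Walk me through your last checkout.",
    "Is it because the labels aren't clear?",
    "What do you mean by scan?",
]
print(audit_questions(script))  # flags only the leading question
```

A human review of the script is still the real check; this kind of pass just catches the phrasings you already know are leading before they reach a participant.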
Mitigation strategies:
- Write your hypotheses down before the interview, and script at least one question designed to disprove each of them.
- Keep follow-ups open ("What do you mean by scan?") rather than using framing that presupposes a problem ("Is it because the labels aren't clear?").
- Have a colleague who doesn't know your hypotheses review the transcript and tag evidence independently.
The halo effect occurs when an initial positive impression of someone (or their circumstances) influences how you interpret their subsequent behavior. In user interviews, it works like this: if a participant is articulate, friendly, and seems knowledgeable, their feedback gets weighted more heavily. Conversely, a nervous or dismissive participant’s valuable insights get discounted.
This bias is particularly dangerous because it masquerades as good judgment. You’re not being biased—you’re just correctly identifying that this person seems more credible, right? Wrong. Articulateness doesn’t correlate with the accuracy of their feedback about your product. A nervous participant who struggles to articulate might actually be giving you a more realistic view of how average users experience your interface.
Nielsen Norman Group has written extensively about this, noting that researchers frequently rate participants as “high-value” or “low-value” within the first five minutes of an interview—then weight all subsequent feedback accordingly. A participant who describes their thought process eloquently might receive 40% more follow-up time and more charitable interpretation of ambiguous statements.
Consider this scenario: You interview two users about the same feature. User A is a software engineer who uses precise terminology, makes eye contact, and seems confident. When they say, “This is fine, I guess,” you interpret it as mild approval—surely someone this sophisticated would flag real problems. User B is less articulate, glances at the screen nervously, and hesitates before answering. When they say, “This is fine, I guess,” you interpret it as genuine satisfaction. You’ve applied two completely different standards to identical feedback.
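One concrete defense is blind coding: strip participant identities from transcripts before analysts read them, so "This is fine, I guess" gets one interpretation, not two. A minimal sketch, with hypothetical placeholder names:

```python
import re

def blind_transcript(text, participants):
    """Replace each participant name with a neutral code (P1, P2, ...)
    so feedback is weighed on content rather than identity cues."""
    for i, name in enumerate(participants, start=1):
        text = re.sub(re.escape(name), f"P{i}", text)
    return text

# "Alice" and "Bob" are hypothetical stand-ins for real participants.
raw = "Alice: This is fine, I guess.\nBob: This is fine, I guess."
print(blind_transcript(raw, ["Alice", "Bob"]))
# P1: This is fine, I guess.
# P2: This is fine, I guess.
```

Blinding doesn't remove tone or vocabulary from a transcript, but it removes the easiest halo cue: knowing which statement came from the confident engineer.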
Mitigation strategies:
- Use a standardized interview guide so every participant gets the same questions and comparable follow-up time.
- Analyze transcripts with participant identities hidden, so feedback is weighed on its content rather than its delivery.
- When two participants give similar feedback, check whether your notes interpret the statements the same way.
Social desirability bias is the tendency of participants to answer questions in a way they believe will be viewed favorably by the interviewer—regardless of whether the answer is true. This is perhaps the most pervasive bias in user interviews, and it’s nearly impossible to eliminate completely.
People want to appear competent, helpful, and rational. They don’t want to admit they can’t figure something out, that they gave up on your product, or that they don’t understand basic functionality. Studies consistently show that participants over-report positive experiences and under-report problems, particularly when interviewed by someone associated with the product (you).
Research from the Journal of Applied Psychology found that participants misreported their behavior by as much as 30% when they believed the interviewer expected positive responses. The effect is stronger in face-to-face interviews than online surveys, and it’s strongest when participants perceive a power differential—which is exactly the dynamic in a typical user interview.
Here’s where it gets insidious. You’re testing a new dashboard interface. A participant spends 45 seconds staring at the screen, clicking randomly, and muttering under their breath. You ask, “How was that experience?” They pause, then say, “It was fine. I think I just need to get used to it.” That’s social desirability bias in action. They’ve just told you they’re willing to put in effort to learn your confusing interface rather than admit they couldn’t figure it out. If you take this at face value, you’ll optimize for “people willing to learn” rather than “people who can immediately understand.”
Another common manifestation: participants tell you what they think you want to hear about their own behavior. “I’d definitely use this every day” feels like positive feedback, but it’s often aspirational rather than predictive. They want to seem like the kind of person who would use sophisticated tools.
Mitigation strategies:
- Normalize failure up front: "We're testing the design, not you; anything that confuses you is a problem we need to fix."
- Ask about past behavior ("Tell me about the last time you used this") rather than intentions ("Would you use this every day?").
- Weigh observed behavior over self-reports: 45 seconds of random clicking tells you more than "It was fine."
Recency bias is the tendency to give disproportionate weight to the most recent information encountered. In user interviews, this manifests in two ways: interviewers over-weighting the last interview they conducted, and participants over-weighting their most recent experience with your product.
If you interview five users on Tuesday and one on Wednesday morning, your analysis will almost certainly be skewed toward the Wednesday participant’s feedback—not because it represents a trend, but because it’s fresh in your memory and feels more salient.
The cognitive mechanism is well-documented. Information that is most recently encoded in memory is easier to retrieve and therefore appears more salient during decision-making. In Thinking, Fast and Slow, Daniel Kahneman describes how this affects even expert judgments, making recent data appear more representative of patterns than it actually is.
In practice, this means your synthesis sessions become hostage to whoever you interviewed last. The participant from Thursday afternoon dominates your thinking because their specific frustrations with error messages are still vivid. The first four interviews from earlier in the week have blurred together into general impressions—which may or may not accurately represent what they said.
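A simple structural fix is to synthesize from a tally of coded observations rather than from memory, counting each code at most once per interview so a single vivid session can't inflate a theme. A sketch with hypothetical interview IDs and codes:

```python
from collections import Counter

def tally_by_interview(interviews):
    """Count each observation code once per interview, so one vocal
    session can't inflate a theme's apparent frequency."""
    totals = Counter()
    for codes in interviews.values():
        totals.update(set(codes))  # dedupe within a single interview
    return totals

# Hypothetical coded observations from a week of interviews.
interviews = {
    "mon_1": ["confusing-labels", "slow-load"],
    "mon_2": ["confusing-labels"],
    "thu_pm": ["error-messages", "error-messages"],  # vivid, but one voice
}
print(tally_by_interview(interviews))
# "confusing-labels" appears in two interviews; "error-messages" in one
```

Whatever vivid detail Thursday's participant left you with, the tally shows which themes actually recurred across sessions.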
Participants exhibit recency bias too. When you ask “Walk me through your experience with this product,” they’re likely describing their most recent session, not their typical experience. If they used it successfully yesterday, they’ll under-report problems they encountered last week. If they had a bad experience this morning, their overall assessment will be more negative than warranted.
Mitigation strategies:
- Take structured notes during every session and synthesize from notes and transcripts, not from memory.
- Hold synthesis only after all interviews are complete, giving each session equal weight in your tallies.
- Ask participants about specific past sessions ("Tell me about the last three times you used this"), not just their most recent impression.
Anchoring bias occurs when people rely too heavily on the first piece of information they encounter (the “anchor”) when making subsequent judgments. In user interviews, it affects both how you interpret answers and how participants form their responses.
As an interviewer, the first few minutes of the conversation establish a frame that everything else gets evaluated against. If you open with “We know the current checkout has problems,” you’ve anchored the participant’s subsequent answers around the assumption that problems exist. Even if they didn’t experience any, they may start searching for something to report because you’ve implied it’s expected.
Within interviews, the order of questions creates anchors. If you ask “What did you think of the new design?” before asking about their actual usage patterns, you’ve anchored their evaluation on visual appearance rather than functional utility. If you ask about satisfaction before asking about problems, you’ve anchored their responses toward positivity.
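Counterbalancing question order across participants keeps any single sequence from anchoring every session. A minimal sketch: seeding the shuffle with the participant ID makes each ordering reproducible for later audit. (Questions that must stay in a fixed sequence, like usage before evaluation, should be shuffled only within their own block.)

```python
import random

# Hypothetical interview guide; in practice this comes from your script.
QUESTIONS = [
    "Walk me through how you used the product this week.",
    "What problems did you run into?",
    "What did you think of the new design?",
    "Overall, how satisfied are you?",
]

def question_order(participant_id, questions=QUESTIONS):
    """Return a per-participant question order, deterministic for a
    given participant ID so the ordering can be reconstructed later."""
    rng = random.Random(participant_id)  # seed from the participant ID
    order = list(questions)
    rng.shuffle(order)
    return order

for q in question_order("p-007"):
    print(q)
```

The same participant ID always yields the same order, so you can tell during analysis exactly which sequence each participant saw.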
Research on anchoring is particularly sobering. In one famous study, participants were asked to estimate the percentage of African countries in the United Nations. Before answering, they were shown a random number generated by a spinner. Participants who saw “65” guessed an average of 45%, while those who saw “10” guessed 25%. The spinner had no relevance to the question, yet it profoundly influenced estimates. This is the power of anchoring—even arbitrary numbers shape subsequent judgment.
In your interviews, the product demo or task instructions you provide become anchors. If you say “This feature uses machine learning to personalize recommendations,” you’ve anchored user expectations in a way that will color their evaluation. If you say nothing and let them discover the feature, you’ll get more honest feedback about actual value.
Mitigation strategies:
- Open with neutral framing; never tell participants what you "know" is broken.
- Ask about actual usage before asking for evaluations, and about problems before satisfaction.
- Counterbalance question order across participants so no single sequence anchors every session.
- Let participants discover features themselves instead of pre-framing them in the demo.
Understanding these five biases is only half the battle. You need systemic practices that reduce their impact on your research.
Pre-interview preparation:
- Record your hypotheses and how confident you are in each, so you can check later whether any finding actually surprised you.
- Script neutral, open-ended questions and have someone outside the project review them for leading framing.
During the interview:
- Follow the guide's order and time allocation for every participant, regardless of how articulate they seem.
- Record the session so analysis can rely on transcripts rather than memory.
- Note what participants do separately from what they say.
Post-interview analysis:
- Code transcripts blind to participant identity wherever possible.
- Synthesize only after the full round of interviews, weighting each session equally.
- Actively search each transcript for evidence against your hypotheses.
The most effective technique is building a culture of bias awareness. When everyone on your research team can identify when they’re being affected by these distortions, correction becomes natural rather than forced.
How do I know if my interview data is biased?
The most reliable indicator is surprise. If your findings never contradict your initial hypotheses, you’re likely experiencing confirmation bias. If one participant’s feedback dominates your memory, you’re experiencing recency bias. Regular audits of your own process are essential.
Can I eliminate cognitive bias completely?
No—and attempting to is counterproductive. These biases are cognitive shortcuts that help us function. The goal is mitigation: reducing bias to a level where it doesn’t systematically distort your findings.
Should I tell participants about these biases?
Not directly. However, creating an environment where participants feel safe giving negative feedback implicitly addresses social desirability bias without clinical explanations.
Do these biases affect experienced researchers differently?
Experienced researchers are subject to the same biases but may be better at recognizing them in hindsight. Studies suggest expertise reduces some biases but creates new ones, like overconfidence in pattern recognition.
The uncomfortable truth is that your user interview data is never perfectly objective. It can’t be. Human cognition doesn’t work that way, and the sooner you accept this, the sooner you can build systems that account for it.
What you can do is stop pretending these biases don’t affect you. The researcher who acknowledges confirmation bias and actively seeks disconfirming evidence will outperform the one who believes they’re naturally unbiased. The team that scripts neutral questions and uses structured observation will make better product decisions than the one that trusts their intuition.
Your user’s voice is already faint enough—filtered through what they think you want to hear, what they can articulate, what they remember, and what they believe is normal. Don’t let your own cognitive architecture add another layer of distortion on top of that.
The next time you conduct an interview, pick one bias from this list and specifically design your protocol to counteract it. Track the difference in your findings. That’s how you move from believing you’re unbiased to actually reducing the noise in your data.