The Most Misunderstood Divide in Research: Quant vs Qual
Correcting cross-functional confusion by clarifying the unique value of different research methods
With a beer or two in hand, I was once chatting with an engineering partner at a product team social. We were discussing how various roles contribute to a cross-functional team, and he asked a question that stuck with me: “qualitative research is pretty light work, isn’t it?” As a research manager, I knew how much work went into designing and analyzing user interviews and diary studies (the two most common research methods on my team at the time), and “light” would be the last adjective to come to mind if I were asked to describe the process.
In any case, I’m glad he spoke his mind, because it revealed that I wasn’t doing enough to communicate the unique purpose, practice, and value of different research methods. In the spirit of moving quickly and keeping meetings short, it’s common for researchers to talk about outputs and recommendations when sharing research with product teams while saying little about process. Methodological and procedural details are often relegated to “if there’s time” slides, but news flash: there’s rarely time.
Most researchers and their partners understand the foundational facts about why we use qualitative and why we use quantitative research methods. But I generally see this as one of those spaces where misinterpretation is rampant despite widespread surface knowledge. It’s similar to the “correlation is not causation” issue: practically everyone is aware of this principle, and yet, correlation is frequently interpreted as causation in friendly chats, mainstream media articles, and even academic publications! In the same way, tech teams generally know that qualitative research isn’t designed to provide insights that reliably generalize to a population, and yet you’ll still hear complaints about “small sample sizes in qual research”.
So here’s my attempt to highlight where I think the biggest and most consequential confusions are in defining the purpose of quant vs qual and where researchers often go wrong in communicating this information with cross-functional partners. I’ll end with a few suggestions on how we can all help to remedy this problem in our own work environments.
Aligning on the foundational functions
What is quantitative research for?
Quantitative research usually falls under one of two high-level missions:
Understand whether a group of interest statistically differs from a control group in some core outcome. For example:
Do people with the latest version of our app have better engagement than people with the previous version of our app?
Are our new users significantly less tech-savvy than our existing users?
Does user segment A get less value from our product than user segment B does?
Understand the scale of a particular problem, reaction, or perception in the population. For example:
What’s the most common reason that teens don’t use our product?
What proportion of users adopt our product for the purpose of fixing a relationship issue?
How are our product engagement metrics evolving over time?
Good quant research starts by deciding on an appropriate sample for the questions of interest, based on the desired or expected effect sizes and on how well the sample’s characteristics match those of the population. The point is to collect signals and draw conclusions that generalize well to the population as a whole, such as: “group A in our sample had significantly lower churn than group B, therefore we can assume this difference also exists across our user population”. Or alternatively: “64% of users in our sample reported experiencing X, therefore we can assume that roughly 64% of users across our user population experience X”.
These are definitive statistics-based conclusions that can come out of A/B testing and/or surveys, and they can provide reliable indicators of how a particular product launch will turn out or how prevalent a problem/need is for a user population.
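To make the effect-size point concrete, here’s a minimal sketch of the standard normal-approximation sample-size calculation for comparing two proportions. The churn rates are purely illustrative assumptions, not figures from any real study:

```python
from math import ceil

# Sample size per group for a two-sided test of two proportions,
# using the standard normal approximation. Illustrative values only.
Z_ALPHA = 1.96  # two-sided significance level of 0.05
Z_BETA = 0.84   # 80% power

def sample_size_per_group(p1: float, p2: float) -> int:
    """Smallest n per group needed to detect the difference p2 - p1."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)  # round up: partial participants don't exist

# A small effect (10% vs 12% churn) needs thousands of users per group,
# while a large effect (10% vs 20%) needs only a couple hundred.
small_effect = sample_size_per_group(0.10, 0.12)
large_effect = sample_size_per_group(0.10, 0.20)
```

This is why the sampling step matters: halving the effect you hope to detect roughly quadruples the sample you need, which is a planning decision, not an afterthought.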
Surveys have a mixed reputation among product team partners. Some find the outputs too simplistic to interpret, and as I’ll explain later, this is often true. Survey insights are great when there are clear, specific, and concrete questions to ask and the mechanisms underlying user experiences are well understood. However, they’re far less useful when you have a giant problem space and aren’t sure how to narrow it down into a small number of concrete questions.
Another criticism is that surveys suffer from some of the same pitfalls as qual research, such as relying on self-report data from respondents. Well, we currently have no good option for understanding people’s thoughts and feelings other than asking them. Perhaps in the future, brain imaging will be good enough to create a readout of a person’s consciousness, but for now, that’s impossible. A/B tests and behavioral experiments can only tell you what people do, not what they think or feel, and sometimes a product team needs the latter to move forward.
It’s not easy to develop good surveys, so a large amount of work and pre-testing often goes into deciding which questions to ask and how to ask them. For example, to ensure that survey respondents understand questions in a similar way and provide data that can be meaningfully averaged, quant researchers will often run their questions through multiple rounds of qualitative cognitive testing, looking for the language that yields consistent and meaningful data.
Surveys are an essential part of the research and product toolkit, but like any tool, they need to be used well. They’re not great in a poorly defined problem space and they’re fairly useless when you need to reactively adjust questioning based on user responses. To effectively drill into a problem space without pre-defined lists of options, you can turn to qual research.
What is qualitative research for?
Qualitative research has a broader mission. Instead of looking for population-level statistics and conclusions, it’s designed to understand a user’s thinking process and get a better foundational grasp of how users perceive a high-level problem area.
Rather than asking questions such as “how many times a week do you experience X?” followed by “how many times a week do you experience Y?”, good qual research instead uses a conversational script along the lines of: “tell me about the last time you experienced X”, “where were you, who were you with?”, “how did it make you feel?”, “what might help you better deal with X?”, etc.
Although there are a number of qual approaches to choose from, user research in tech will often follow a semi-structured interview format with specific priority areas to cover but also a flexible question space to allow users to freely express what’s on their mind. Surveys are all about probing and funneling - they want a very specific answer within a strict set of pre-defined criteria. Interviews are instead about discovery within the unknown.
Interviews allow you to explore, identify, and build on open-ended themes identified by the user. In surveys you can’t build on anything surfaced by a user, because respondents simply have to move on to the next pre-defined question after answering the previous one. Survey questions that allow open-text responses aren’t much better - respondents tend to limit what they say there and often struggle to fully articulate in text what they would find easier to articulate through speech.
So surveys restrict your ability to follow up on talking points that you didn’t know existed before a user brought them up. But interviews restrict your ability to draw reliable conclusions about the scale of a particular problem, perception, or outcome among your user base. Qual research is for flexible exploration, while quant research is for statistics-based conclusions.
Uniting qual and quant for powerful recommendations
Given their relative strengths and weaknesses, product teams that utilize both qual and quant research instead of relying on one over the other tend to make the best product decisions.
The best qual insights my team or I have ever produced could not have been surfaced through quant methods. For example, a qual researcher on my team once discovered during user interviews that some users were hesitant to try a new feature because they feared it would affect their visibility among friends on the platform, a completely unsubstantiated fear that designers and developers had no inkling would be relevant. Given that lack of inkling, it’s unlikely that a quant survey would have identified the same concern through pre-defined questioning. So without qual research, that user fear would have lingered on the platform suppressing feature engagement rather than being solved through a relatively straightforward content design adjustment.
Similarly, the best quant insights I’ve seen would have been impossible to detect through qual research. In past work, quant researchers on my team identified that close to 60% of a prospective user population experienced a specific type of stress relevant for our product, in contrast to our prior expectations of that number being around 25%. You simply couldn’t get a reliable indication of that kind of user need prevalence from interviewing twelve users.
Using qual and quant research in combination at the right times for the right types of question is incredibly powerful. Here’s a non-comprehensive list of useful combinations I’ve seen:
How will people respond to this new product feature?
Step 1: Qualitative interviews to understand user reactions to an early product concept and revise designs accordingly.
Step 2: Quantitative A/B tests on user samples to see how the revised concept performs at scale.
How should we react to this user need?
Step 1: A quantitative survey to understand the scale of a specific user need before beginning sketches on a possible product concept to address that need.
Step 2: Qualitative interviews to explore how closely product sketch variants align with the specific need based on early user reactions.
Why are some people quitting the app so early?
Step 1: A diary study to track the first 7-day product experience for a group of new users and identify an emerging theme linked to early churn.
Step 2: A behavioral data analysis based on app usage to confirm a correlation between how often new users experience that theme and how likely they are to drop out of the product in their first week.
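As a sketch of what that second step might look like in code, here’s a point-biserial (Pearson) correlation between theme-exposure counts and a first-week churn flag. All data below is invented for illustration; in practice it would come from behavioral logs:

```python
from math import sqrt

# Each pair: (times a new user hit the friction theme in week 1,
# whether they churned: 1 = dropped out, 0 = retained).
# Hypothetical data, for illustration only.
usage_log = [
    (0, 0), (1, 0), (0, 0), (2, 0), (5, 1), (4, 1),
    (1, 0), (6, 1), (3, 0), (7, 1), (2, 0), (5, 1),
]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

exposures = [x for x, _ in usage_log]
churned = [y for _, y in usage_log]
r = pearson_r(exposures, churned)  # point-biserial correlation
```

A strong positive `r` here would corroborate the diary-study theme at scale, while keeping in mind that correlation alone doesn’t establish that the theme causes the churn.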
The most common mistakes
There are plenty of misunderstandings around qual vs quant research, but here are the four most common misconceptions or errors I’ve come across during my time doing research at tech companies:
Drawing population conclusions based on qual data:
It is common for researchers to conclude that “most app users experience this particular perception” based on qualitative interviews with a small number of people. Even if 10 out of 12 participants express having a particular perception during interviews, a conclusion about prevalence isn’t reliable and shouldn’t be the top-line finding of qual research. On top of that, qual research rarely targets a sample that’s representative of the broader population of interest, so even when there is a majority opinion, there’s a high risk that it’s driven primarily by demographic biases. It’s more appropriate for qual insights to stay focused on the themes around how people think through or react to a particular perception, and then brainstorm around those themes with product teams. Qualitative research is the best available method for exploratory insights and dialectical detective work, but it’s not the best available method for making claims about scale.
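To see why prevalence claims from a dozen interviews are shaky, here’s a quick Wilson score interval calculation (a standard confidence interval for proportions). Ten out of twelve is statistically consistent with anywhere from roughly 55% to 95% of the population; the survey-scale comparison is an assumed example:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - margin, center + margin

# 10 of 12 interviewees: the interval spans roughly (0.55, 0.95).
low, high = wilson_interval(10, 12)

# A hypothetical survey-scale sample: 640 of 1000 respondents gives
# an interval only about +/- 3 percentage points wide.
low_big, high_big = wilson_interval(640, 1000)
```

The small-n interval is so wide that “most users” and “barely half of users” are both plausible readings of the same interviews, which is exactly why scale claims belong to quant.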
Using surveys for research questions that require qual exploration:
I’ve seen many data partners and quant researchers use surveys for research questions that are better answered by an interview format. Here’s what can end up happening:
A quant-heavy team runs a survey looking at how a group of people feel about a new app screen but then finds a contradictory pattern within the responses based on what people say vs what people do.
They run another survey including a generic clarification question to better understand the contradiction, but realize “these responses don’t make any sense, why do people think this?”
They run a third survey with additional open-text responses only to conclude “wait, this still hasn’t clarified why people answered our initial question the way they did”.
They run another survey, and so on, ad infinitum. Instead of running increasingly complicated surveys, they could have run a small set of qual interviews and asked people “why do you say that?”, followed by “tell me more about that”, and then identified important themes in thinking processes that led specific users from perception A to reasoning B to outcome C.
There’s often an underlying distrust of qual data that leads to this problem of using quant research where it’s not the best available tool. But interestingly, many people who distrust the value of qual will spend hours in meetings with small samples of colleagues, trusting their own conversational intuitions in making hugely consequential business decisions! The truth is we all use qual data constantly, even when we don’t consider it to be research. In the same way that employees have expert perspectives in building products, users have expert perspectives in using products. Both types of qualitative input are essential for understanding a particular problem space and deciding on actionable next steps.
Assuming the “why” from survey data that only answers the “what”:
People will often be so confident in their understanding of the “why” underlying a “what” identified in a survey that they’ll launch a product feature only to discover it’s not solving the user problem they had in mind. You can only solve a problem that you identify in the population if you actually understand why that problem exists and how people think about it. And for that, qual research is a great tool. That’s not to say that quantitative surveys can never ask “why”—they most certainly can and they’ll give you a great view of how broadly that specific “why” applies across the population too (something that qual can’t do). However, they rely on you operating in a bright room where the dimensions of the problem space are clear, and that’s not always the case.
Failing to include cross-functional partners in user interviews:
This might be both the most common and easiest-to-solve mistake for researchers. The value of qualitative interviews can be difficult to comprehend if you’re detached from the interviews themselves, and product team partners are often excluded from attending research interviews. When they’re detached from the process, some partners will over-anchor on rules and intuitions like “small sample size” and “qual data doesn’t generalize” rather than noticing and focusing on the value-adding part of the research. The more people understand how quant and qual are executed and how they deliver insights, the more likely they are to embrace the strengths of each approach while remaining cognizant of the weaknesses.
What steps can we take as researchers?
Here are some activities that I’ve found useful for correcting many of the confusions in the qual vs quant tradeoff:
Educational workshop for cross-functional partners:
Workshops are a great way to bring people together around a common mission and align on important areas of foundational knowledge that aren’t obvious to all team members. This isn’t just true of spreading research knowledge to non-researchers: I’ve hugely benefited from learning how engineers implement new features and how design partners brainstorm and give feedback on each other’s ideas. A strong cross-functional understanding of each partner’s unique role benefits everyone and helps the team work better as a unit. Any research workshop should reinforce the importance of uniting qual and quant methods to generate comprehensive and reliable insights for product development.
Extreme clarity in comms and reports:
There’s nothing wrong with including disclaimers, caveats, and limitations throughout presentations to make sure cross-functional partners don’t misunderstand what the data is saying and what is or isn’t safe to conclude from the insights. It might sometimes feel like you’re stating the obvious, but the obvious is very much needed here. I find that the risks of over-sharing are far smaller than the risks of under-sharing when it comes to research reports, so when the two are in tension, I lean toward extreme clarity and worry less about repetition.
Invite people to research interviews:
I’ve never heard an engineering, data, design, or product management partner complain about wasted time after joining an interview session as an observer. They always leave with a refreshed sense of what research is doing, why it’s so helpful, and even completely new perspectives or ideas about their own products. When you schedule interviews, don’t be afraid to ping invites to other people on your product team. It’s up to them whether they want to attend, but at least give them the option.
“We must all hang together or assuredly we shall all hang separately.”
~ Benjamin Franklin
If you enjoyed this newsletter, please spread the word by sharing it with others who might find value in it. If you’re new here, you can subscribe below.