Methodology
Synthetic consumer research vs synthetic surveys: where it actually replaces panels
In a 30-day window, four of the loudest names in research all shipped the same idea under different labels. Harvard Business Review ran a cover piece on scaling qualitative research with AI (HBR, Apr 6). MIT Sloan published a consumer-insight feature on generative AI. Qualtrics launched synthetic panels at X4 Summit. quantilope shipped "Category Twins." The category finally has a name: synthetic consumer research.
There's one problem. The name covers two very different products. One replaces the first research sprint. The other patches the data from the last one. If you don't know which one a vendor is selling you, the pitch sounds identical. The price and the use case aren't.
Key Takeaways
- HBR, MIT Sloan, Qualtrics, and quantilope all shipped synthetic consumer products or cover stories in a 30-day window, and the category now has a name.
- Synthetic consumer research works upstream of a panel (idea triage, concept screening). Synthetic surveys try to sit on the end of a panel (weighting, missing-data fill).
- Bain reports synthetic tests at roughly one third the cost and half the time of traditional methods (Bain & Company, 2026).
- The wedge for small teams is the pre-panel sprint: testing 10 concept variants before committing budget to one human study.
For the methodology underneath all of this, see the science behind synthetic consumer research.
What is synthetic consumer research?
Synthetic consumer research generates responses to product concepts from LLM-simulated personas, then scores the responses for purchase intent. Bain & Company reports its internal synthetic-customer program delivers comparable insight quality at roughly one third the cost and half the time of traditional methods (Bain, 2026). It isn't a survey. It's a pre-survey idea-triage layer.
The plain definition is short: AI-generated respondents react to a concept, and their reactions are scored. The inputs are a product description, a target audience definition, and sometimes brand tracking data. The outputs are a purchase intent read, qualitative feedback, and a concept ranking you can sort.
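To make the shape of those inputs and outputs concrete, here is a minimal sketch in Python. Every name in it (the dataclasses, the fields, the `build_persona_prompt` helper) is hypothetical illustration, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class ConceptTest:
    """Inputs to a synthetic consumer run: what you hand the tool."""
    product_description: str         # the concept being tested
    audience_definition: str         # who the simulated personas should represent
    brand_tracking_notes: str = ""   # optional: existing tracker data, if any

@dataclass
class SyntheticRead:
    """Outputs for one concept once the simulated personas have responded."""
    purchase_intent: float           # e.g. mean intent on a 1-5 scale
    qualitative_feedback: list[str]  # free-text reactions from the personas
    rank: int | None = None          # filled in after all concepts are scored

def build_persona_prompt(test: ConceptTest) -> str:
    """Turn the inputs into a prompt for one LLM-simulated respondent."""
    return (
        f"You are a consumer matching this profile: {test.audience_definition}.\n"
        "React to the product concept below and rate your purchase intent "
        "from 1 (would never buy) to 5 (would definitely buy).\n\n"
        f"Concept: {test.product_description}"
    )
```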
Vendors are now plentiful. Qualtrics synthetic panels. quantilope Category Twins. PyMC Labs. Evidenza. Synthetic Users. PwC has an internal tool. will.it.sell covers the small-team end of the market. What nobody agrees on is whether "synthetic consumer" means the persona itself, the response it produces, or the scoring layer that reads the response. That definitional gap is how two different products ended up sharing a name.

What is a synthetic survey (and why it isn't the same thing)?
A synthetic survey isn't a standalone research product. It's a technique for generating additional survey data to patch a thin or biased sample, boost the prevalence of rare segments, or weight a post-panel dataset. Kantar and Ipsos both position their synthetic tools this way. The work starts with a real panel. The synthetic part fills in around it.
Here's how the two products actually differ once you strip out the marketing:
| Dimension | Synthetic consumer research | Synthetic survey |
|---|---|---|
| Stage in funnel | Pre-panel (idea triage) | Post-panel (weighting, fill) |
| Starting data | Product concept, audience definition | Real human panel results |
| Output | Purchase intent read, concept ranking | Augmented dataset, weighted estimates |
| Replaces | The "let's just ship it" guess | Nothing you weren't already doing |
| Main risk | Directional scores taken as literal | Hidden bias in the augmentation |
Critics inside the industry call the second thing "synthetic augmentation," not synthetic research. The distinction matters because the business case isn't the same. One produces an answer from scratch. The other patches an answer you already paid a panel to produce. Vendors conflate the two because both use LLMs, both save time, and bundling the language makes enterprise sales easier.
What did the incumbents ship in the last 30 days?
Between March 17 and April 8, 2026, four incumbents shipped synthetic consumer products or cover stories. Qualtrics launched synthetic panels at X4 Summit (SiliconANGLE, Mar 18). quantilope launched Category Twins (PR Newswire, Mar 17). HBR ran "How AI Helps Scale Qualitative Customer Research" (Apr 6). MIT Sloan published "Gain Consumer Insight With Generative AI" (Sloan Review, Apr 8).
Look at where each product actually starts its workflow.
Qualtrics says its synthetic panels are "much less expensive" than traditional research and deliver "insights in hours instead of weeks" (SiliconANGLE, Mar 18, 2026). The training set is decades of Qualtrics study data. U.S. audiences are live now. UK, Ireland, Canada, Australia, and New Zealand follow in H1 2026.
quantilope Category Twins starts somewhere different. It's built on the client's own brand tracking data and updates automatically with each new tracking wave. The tagline calls it "early-stage." The starting condition is an existing tracker.
HBR's piece, by Jeremy Korst, Stefano Puntoni, and Olivier Toubia, is the only one of the four that doesn't start from an existing panel or tracking dataset. It describes AI qualitative interviewers holding rich conversations with large numbers of respondents, compressing qualitative research cycles. The authors frame AI as a scaling tool for researchers, not a replacement.
Three of four workflows start with either a panel or the client's own tracking data. That's no accident. Survey incumbents productize synthetic as a bolt-on to their panel business because their revenue depends on the panel. Their synthetic tools are scoped to defend the panel franchise, not to replace the first research sprint. If you're shopping for a way to triage 10 product concepts on a Wednesday, you aren't in the market these products are built for.
Where synthetic consumer research actually replaces a panel
Synthetic consumer research replaces the panel in exactly one place: the first sprint. When you have 10 concepts, a few hundred dollars, and a Wednesday afternoon, synthetic gets you a read on all 10 before you commit to a human study on one. A standard focus group project runs $10,000 to $30,000 on a 3-6 week timeline (Drive Research, 2026). Bain's internal pilot showed comparable insight quality at one third the cost and half the time (Bain, 2026). Downstream validation still needs humans.
That "first sprint" bucket has a recognizable shape:
- Concept triage. Rank 10 product ideas before building anything.
- Copy and claims testing. Compare five ad variants to see which earns the strongest purchase intent read.
- Positioning stress-tests. Try price anchoring, category framing, and feature-order variants.
- Kill decisions. "This concept scored 2.1 out of 5 across every segment. We're not shipping it."
- Niche audiences you can't affordably recruit. Panel-grade studies for small segments run $15k+ just to field.
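Here is the triage loop the list above points at, as a minimal sketch. The `score_concept` callable stands in for whatever LLM-persona call your tool of choice makes, and the 2.5 kill threshold is an illustrative cut-off, not an industry standard.

```python
from typing import Callable

def triage_concepts(
    concepts: list[str],
    audience: str,
    score_concept: Callable[[str, str], float],
    kill_below: float = 2.5,
) -> list[tuple[str, float]]:
    """Score each concept against one audience, rank them, and flag obvious kills.

    Assumes score_concept(concept, audience) returns mean purchase intent
    on a 1-5 scale from a batch of simulated personas.
    """
    scored = [(c, score_concept(c, audience)) for c in concepts]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # strongest read first

    for concept, score in scored:
        verdict = "kill" if score < kill_below else "advance"
        print(f"{score:.1f}  {verdict:<8}  {concept}")

    return scored  # take the top one or two to a human panel, drop the rest
```

The order of operations is the point: all ten concepts get a cheap read first, and only the survivor earns a human study.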
Running a B2C synthetic consumer research tool for two years has made one pattern obvious. The gain people pitch ("replace your panel") isn't the gain users actually get. The real gain is testing 10 ideas in the time a panel tests one. The panel is still the panel. What changed is the step before the panel, which used to be a gut call.
Where does synthetic not replace a panel? Regulator-facing studies. Novel categories where training data is thin. Emotional or ethnographic depth work. N=400 statistical claims. Final pricing elasticity. Synthetic gives you a directional read on whether an idea survives contact with an audience. It doesn't give you a number a CFO can bet the launch on. If you need the second thing, the panel isn't optional.

Where it does not replace a panel (and where synthetic surveys legitimately help)
The places synthetic consumer research fails are the places synthetic surveys sometimes quietly help. Augmenting a real panel with simulated responses for rare demographic cells. Weighting post-collection data. Running "what-if" scenarios on an already-collected dataset. Kantar and Escalent both draw this line in their 2025 and 2026 coverage. The pattern is consistent: synthetic survey work starts from a real panel and patches around it.
What that means in practice:
- Regulator-ready sample. Needs humans. End of story.
- Novel categories. LLM training data doesn't contain, say, cricket-cookie buyers. Synthetic struggles on emotional or sensory uniqueness.
- Final pricing elasticity studies. Still panel work.
- Sensory research, ethnographic depth, observation-based methods. Not a synthetic use case at all.
- Thin-segment boost or post-hoc weighting. This is where synthetic surveys are legitimately useful. Different product. Different job.
The rule is simple. If the decision is reversible and cheap, synthetic first. If it's irreversible or expensive, panel confirmation. Most teams don't get burned on the methodology. They get burned on using the wrong method for the decision they're actually making.
A decision matrix: which method for which moment?
Most errors in this space are vendor-first errors. Someone buys a tool and then looks for a decision to use it on. Flip the order. Pick the method before the vendor.
| Your moment | Synthetic consumer research | Real panel | Synthetic survey |
|---|---|---|---|
| 10 concepts, need to pick 1 | Yes | No | No |
| 1 concept, need to confirm | No | Yes | No |
| Regulator or investor claim | No | Yes | No |
| Real panel ran thin on a segment | No | No | Yes |
| Niche audience hard to recruit | Yes (directional) | Yes (if budget) | No |
| Final price elasticity | No | Yes | No |
Three rules sit on top of that table. Start with which decision you're making, not which tool you're buying. Treat synthetic consumer research as a discovery tool, not a confirmation tool. And if a vendor pitches synthetic as an end-to-end panel replacement, ask which studies they won't do this way. The honest vendors have an answer. The rest don't.
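If it helps to keep the matrix next to your planning doc, here is the same logic collapsed into one lookup. The flag names are ours, purely illustrative, and the function is a rule of thumb rather than anyone's product.

```python
def pick_method(
    concepts_to_compare: int,
    regulator_facing: bool,
    thin_segment_in_real_panel: bool,
    final_pricing: bool,
) -> str:
    """Collapse the decision matrix above into one rule-of-thumb lookup."""
    if regulator_facing or final_pricing:
        return "real panel"                   # irreversible or high-stakes: humans
    if thin_segment_in_real_panel:
        return "synthetic survey"             # patch the panel you already ran
    if concepts_to_compare > 1:
        return "synthetic consumer research"  # first-sprint triage across variants
    return "real panel"                       # one concept, needs confirmation
```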
What this means for small teams without a panel budget
The March 2026 enterprise launches are pointed at companies that already buy panels. For solo founders, bootstrappers, and small product teams, the real opportunity is different. Synthetic consumer research is the first research method priced for teams that have never run a panel study in their lives.
Here's the cost stack. A single focus group project runs $10,000 to $30,000 (Drive Research, 2026). Qualtrics' synthetic panel lands "much less expensive" than that (SiliconANGLE, 2026). A lightweight synthetic consumer research report (for example will.it.sell) runs under $50. The decision for a small team isn't "replace my panel." It's "can I do any research at all?"
For a team testing 10 concepts a month, a panel was never the reference point. The reference point was guessing. Moving from guessing to a structured synthetic read isn't a marginal upgrade. It's a different category of decision-making. You don't become a researcher doing this. You get a structured way to be wrong earlier and cheaper, which is how every product team starts anyway.
One watch-out before you treat the scores as gospel. Synthetic absolute scores are directional, not literal. A concept that scores 4.1 isn't literally 4.1 out of 5 in the population. It's probably ahead of a concept that scores 3.2 in the same test. Use rank-order signals, not population estimates.
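In practice that means throwing away the absolute values and keeping only the ordering. A minimal sketch, assuming you already have per-concept scores from a synthetic run (the concept names and numbers here are illustrative):

```python
def to_rank_order(scores: dict[str, float]) -> list[str]:
    """Reduce synthetic intent scores to the only signal worth trusting: the order."""
    return sorted(scores, key=scores.get, reverse=True)

synthetic_read = {"concept_a": 4.1, "concept_b": 3.2, "concept_c": 2.1}

# Safe claim: concept_a beat concept_b and concept_c in this test.
print(to_rank_order(synthetic_read))  # ['concept_a', 'concept_b', 'concept_c']

# Unsafe claim: treating 4.1 as a literal population-level intent estimate.
```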
See what a synthetic consumer read looks like on a real product.
Frequently asked questions
Can synthetic respondents replace human panels?
For first-sprint idea triage, yes. For regulator-facing or high-stakes launches, no. Bain reports synthetic tests at one third the cost and half the time of traditional research (Bain, 2026). That gain shows up in discovery work, not final validation. The practical model is sequential: synthetic first for the 10, humans for the winner.
What is the difference between synthetic consumer research and a synthetic survey?
Synthetic consumer research produces a whole answer from LLM-simulated personas scoring a product or concept. A synthetic survey uses generative AI to fill gaps in a real human panel: rare segments, missing cells, post-hoc weighting. Kantar frames synthetic surveys as panel augmentation. The two products aren't substitutes for each other. They sit at opposite ends of the funnel.
How accurate is synthetic consumer research?
Bain reports roughly 85% accuracy against human responses for its internal program (Bain, 2026). Accuracy varies by methodology, training data quality, and category familiarity. Novel or sensory categories drop the number. Mainstream consumer categories with dense training data hold up better. Use rank-order signals, not absolute percentages, and you'll be right more often than you're wrong.
Is synthetic consumer research worth it for a solo founder?
If your alternative is a panel study you can't afford, yes. If your alternative is guessing, yes. If you already have $30,000 and three months for a full human panel, synthetic is still useful as a pre-screen, but you've got the budget to do the full job. The value scales inversely with your existing research budget.
For the underlying scoring mechanics, see how to measure purchase intent.
The bottom line
The category finally has a name. That's good news. But the name covers two different products, and the March 2026 enterprise launches mostly aim at the downstream one, because that's where the panel-vendor business model lives. The open wedge is the upstream one: pre-panel concept triage priced for teams that never bought a panel study.
Before you buy a tool, name the decision. If the decision is reversible and cheap, synthetic first. If it's irreversible or expensive, humans always. The vendor pitch will try to blur that line. Your job is to keep it sharp.
Stop guessing. Start knowing.
Your first product validation is free. Get your report in minutes.
Test Your Product Idea Free