Product validation procedure: 7-step checklist for founders


By Gregor The Builder Apr 17, 2026 14 min read

CB Insights analyzed 385 shut-down VC-backed startups and found that 43% cited poor product-market fit as a top reason they failed. Most of those founders built first and asked later. That's what this procedure is designed to prevent.

Every article on product validation gives you 8 methods or 5 frameworks. You don't need another list. You need the order to run them in. This is the seven-step procedure I'm using to validate ideas for will.it.sell. Each step fits a working day. The whole thing runs in a week if focused, two weeks part-time. Built for B2C; B2B still works but interview logistics differ.

Key takeaways

  • Seven steps in order; don't skip out of sequence
  • Each step has a binary exit criterion, so you know when to stop
  • Steps 3 and 4 are where most ideas die; that's the point
  • CB Insights found 43% of VC-backed shutdowns were product-market fit failures (CB Insights, 2024)
  • If the data says no, the procedure tells you which axis was wrong and what to change

For a wider comparison of individual methods, see 8 product validation methods ranked by cost and speed.

Who this product validation procedure is for

CB Insights found that 43% of 385 shut-down startups failed on poor product-market fit (CB Insights, 2024). This procedure catches that failure mode before you spend the expensive weeks building. It's for a solo founder or two-person team with a pre-MVP B2C idea and no paying customers yet.

If you already have revenue, you're iterating, not validating. B2B with five-figure ACVs still maps to the seven steps, but interview logistics stretch the timeline.

Prerequisites are small: a one-sentence product description, a draft target audience, and about an hour a day for a week. No MVP, landing page, brand, or legal entity required. You get a go/no-go call with evidence behind it.

Citation capsule: This procedure targets B2C founders with a pre-MVP idea. Most runs take five to seven working days end to end, faster than the typical four-to-eight-week "validate first" benchmark because Steps 4 and 5 use surveys and panels, not coded prototypes. CB Insights (2024) found 43% of startup shutdowns failed on product-market fit.

How long does the product validation process take?

A focused solo founder can finish the whole procedure in five to seven working days. Each step is scoped to one day or less. The bottleneck is always Step 3, scheduling real customer interviews. Drive Research (2023) puts standard B2C participant incentives at $75 to $150 per person, which is what you'll spend to keep recruitment tight.

Day-by-day breakdown

| Day | Step | Activity | Output |
| --- | --- | --- | --- |
| Day 1 | Steps 1 and 2 | Write the buying decision and narrow the audience | Decision card + audience spec |
| Days 2-4 | Step 3 | Five to ten customer interviews in parallel | Interview scorecard |
| Day 5 | Step 4 | Purchase intent at scale (survey, landing page, or AI panel) | Corrected top-box score |
| Day 6 | Step 5 | Van Westendorp price-sensitivity check | Optimal price point |
| Day 7 | Steps 6 and 7 | Go/no-go review and, if needed, failure-axis diagnosis | Dated decision paragraph |

[Figure: Week-long timeline for the product validation procedure showing seven sequential days]

What slows it down

Two things. Interview recruitment: line up contacts before Day 2 or expect a slip, and budget the incentive up front. Indecision on audience: Step 2 exists to force that choice before it costs a week.

Part-time founders typically run the procedure across two to three calendar weeks. That's fine. Sequence matters more than pace.

Citation capsule: A product validation run takes five to seven working days when the seven steps are run back to back. The longest single block is customer interview recruitment; Drive Research (2023) puts B2C participant incentives at $75 to $150 per person at minimum. Budget it up front or Step 3 slips into Day 5.

Step 1: define the buying decision

Write down exactly what a "yes" looks like. A decision without a criterion is a wish. Most founders run the procedure on vibes. CB Insights (2024) found 43% of 385 shutdowns failed on product-market fit; most of those founders never defined what fit meant up front.

What to write down

  • Product idea in one sentence. Plain English. No pitch deck language.
  • The buying decision. Fill in: "A person in [audience] decides to pay [price] for [product] instead of [current alternative]."
  • The go threshold. The evidence bar that would make you build it.
  • The no-go threshold. The evidence bar that would make you kill it.

The last one is the hard part. Write it before Step 3 starts, ideally with a witness or a date-stamped note. Wait, and you'll move the goalposts once the data arrives.

Common mistake

Setting the bar after the data arrives. Feels reasonable at the time. It's motivated reasoning dressed up as analysis.

Exit criterion

Step 1 is done when you have a written decision card with the four fields above, dated, saved somewhere you can't edit silently. If you can't write "I will build if X and kill if Y" in plain sentences, go back and tighten.

Citation capsule: Step 1 writes the exact buying decision and evidence thresholds before any data is collected. CB Insights (2024) data on 385 startup shutdowns shows 43% cited poor product-market fit; most of those founders never defined what fit would look like before building.

Step 2: define the target customer

The narrower your audience, the faster Steps 3, 4, and 5 run. "People aged 18-65" isn't an audience; it's a shrug. Step 2 is done when you can describe one person by name, job, income band, and the exact moment the problem hits them. A sharp audience makes recruitment tractable; a vague one burns the week.

Audience definition checklist

  • Demographic anchor. Age range, income band, location, household role.
  • Behavioural anchor. What they currently do when the problem shows up.
  • Channel anchor. Where you can reach them this week (subreddit, newsletter, LinkedIn group, a real friend-of-a-friend chain).

How to tighten when stuck

Pick the smallest viable segment and commit. You can always widen after validation. Narrow doesn't mean "small market forever"; it means "I can reach five of them by Thursday."

For a deeper walk-through, see the full target audience guide.

Exit criterion

Step 2 is done when you can name three specific places you'll recruit interview participants from within 48 hours. If you can't, the audience isn't sharp enough yet.

Citation capsule: Step 2 narrows the target customer until a real interview is reachable within 48 hours. A sharp definition names demographic anchors, the behavioural moment the problem shows up, and a specific channel (subreddit, newsletter, LinkedIn group). Without it, Steps 3 to 5 scale poorly and the timeline slips past one week.

Step 3: confirm the problem is real with customer interviews

Talk to real people before you measure intent at scale. Don't pitch. Confirm the problem exists, costs something, and isn't already solved. Aim for five to ten interviews matching the Step 2 audience. Nielsen Norman Group's research (2000) found that five users surface about 85% of qualitative issues, and the same pattern holds in discovery work.

How many interviews do you actually need?

  • Minimum: five. Jakob Nielsen (2000) found five participants surface 85% of qualitative issues in usability research, and the same pattern holds for problem discovery.
  • Safer: ten. Returns diminish past ten; better interviews beat more interviews.
  • Higher bar: twenty. Jason Lemkin's SaaStr (2014) "20-interview rule" is the standard for serious commitment before writing code.

Steve Blank's "get out of the building" principle (2009) is the underlying idea. Don't theorise; go find the people.

Questions that work

Rob Fitzpatrick's The Mom Test has one durable rule: ask about past behaviour, not hypothetical future intent. The good questions are the ones nobody can flatter you on.

  • "Walk me through the last time [problem situation] happened."
  • "What did you do about it?"
  • "How much did that cost you in time or money?"
  • "What would have made that easier?"

Don't describe your product until the end. Pitch, and you poison the data.

Exit criterion

At least 60% of interviewees describe the problem unprompted and can point to a time it cost them something real. Below that, the problem isn't validated and Steps 4 onward will mislead you.

Working rule: The 60% binary interview gate is non-standard. Most lean-startup writing treats interviews as open-ended "learning". I treat it as pass/fail: if six out of ten don't describe the problem unprompted, I don't progress. It saves weeks on ideas that sound plausible in my head but don't land with real people.
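The binary gate above is simple enough to sketch as code. A minimal illustration, assuming each interview is scored as a single boolean (problem described unprompted and a real cost named); the function name and scoring scheme are this article's working rule, not a standard tool:

```python
# Minimal sketch of the pass/fail interview gate. Each entry is True if
# the interviewee described the problem unprompted AND named a real cost.
def problem_validated(scorecard: list[bool], threshold: float = 0.60) -> bool:
    """Pass only if the share of confirming interviews meets the gate."""
    return sum(scorecard) / len(scorecard) >= threshold

interviews = [True, True, False, True, True, False, True, True]  # 6 of 8 confirm
print(problem_validated(interviews))  # True: 75% clears the 60% gate
```

The point of encoding it at all is that the threshold is fixed before the interviews happen, so there's nothing to negotiate with yourself afterwards.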

Citation capsule: Step 3 uses five to ten customer interviews to confirm the problem is real. Nielsen (2000) shows five participants surface 85% of qualitative signal; Jason Lemkin's SaaStr (2014) "20-interview rule" sets the higher bar before any code gets written.

Step 4: measure purchase intent at scale

Interviews tell you the problem is real. Step 4 tells you whether people would buy your solution. Most founders stop too early: five more conversations instead of a sample large enough to trust. Chandon, Morwitz & Reinartz (2005) found only about 10% of stated purchase intentions convert to actual purchases. Raw survey scores lie.

Three ways to run Step 4

  • Landing page smoke test. Target a 10%+ email signup rate on cold traffic. Common indie-hacker benchmark.
  • Purchase intent survey. Traditional Likert scale, corrected for overstatement.
  • AI synthetic consumer panel. 100+ simulated respondents in minutes, useful when you can't recruit 40 real people fast.

For a full comparison, see five ways to measure purchase intent.

Why stated intent overstates

Stated purchase intent consistently overstates actual behaviour. Chandon et al. (2005) documented the ~10% conversion rate across industries. Ramanujam and Tacke's Monetizing Innovation (2016) adds that respondents who rate 5/5 on "would buy" still purchase only about 50% of the time.

| Likert rating | Stated intent | Typical actual conversion |
| --- | --- | --- |
| 5 (definitely will buy) | 100% | ~50% (Ramanujam & Tacke, 2016) |
| 4 (probably will buy) | 100% | ~20% |
| 3 (might or might not) | 100% | ~5-10% (Chandon et al., 2005) |
| 1-2 (will not buy) | 0% | ~1-3% |

Apply a 0.3 to 0.5 correction factor on top-box scores before you trust a number. A raw "40% would definitely buy" becomes a planning-grade 12-20% after correction.
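The correction arithmetic is trivial, but worth making explicit. A minimal sketch; `corrected_top_box` is a hypothetical helper, and the 0.3-0.5 band comes from the Chandon and Ramanujam & Tacke figures cited in this section:

```python
# Hypothetical helper applying the 0.3-0.5 correction band to a raw
# top-box score before comparing it against the 20% go threshold.
def corrected_top_box(raw: float, low: float = 0.3, high: float = 0.5) -> tuple[float, float]:
    """Return the (pessimistic, optimistic) planning-grade intent range."""
    return raw * low, raw * high

lo, hi = corrected_top_box(0.40)  # survey says 40% "definitely will buy"
print(f"Planning-grade intent: {lo:.0%} to {hi:.0%}")  # 12% to 20%
```

Plan against the pessimistic end of the range; if even the optimistic end misses the 20% gate, the answer is already no.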

Exit criterion

Top-box intent above 20% after correction, or landing page conversion above 10%, or an AI panel verdict in the buy zone. Anything lower means no intent signal yet. Go back to Step 2 or Step 3.

Citation capsule: Step 4 measures purchase intent at scale through landing pages, surveys, or AI consumer panels. Because only about 10% of stated intentions convert to actual purchases (Chandon, Morwitz & Reinartz, 2005) and top-box 5/5 ratings still only convert about 50% of the time (Ramanujam & Tacke, 2016), apply a 0.3 to 0.5 correction factor before you call it a go.

Step 5: run a price-sensitivity check

Validated problem and validated interest mean nothing if nobody will pay your number. Step 5 answers "at what price does this still work?" Use the Van Westendorp Price Sensitivity Meter, introduced by Peter van Westendorp in 1976. It's one of the oldest B2C pricing methods still in active use, and it takes a day to run.

The four Van Westendorp questions

Ask each respondent all four, in this order:

  1. At what price is this a bargain?
  2. At what price does it start to feel expensive but still worth it?
  3. At what price is it too expensive to consider?
  4. At what price is it so cheap you'd question the quality?

[Figure: Van Westendorp price sensitivity map with four curves and optimal price point marked]

Minimum sample and output

  • Sample: 40 to 100 respondents in your Step 2 audience. Below 40, the curves are too noisy to read.
  • Output: plot the four cumulative curves. The intersection of "too cheap" and "too expensive" is your optimal price point (OPP).
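The curve crossing that defines the OPP can be sketched in a few lines. This is an illustrative brute-force version with made-up responses, not a statistics-library implementation: it finds the candidate price where the share still calling the product too cheap most nearly equals the share calling it too expensive.

```python
# Illustrative brute-force OPP finder.
# too_cheap[i]: price at or below which respondent i questions the quality.
# too_expensive[i]: price at or above which respondent i won't consider it.
def opp(too_cheap: list[float], too_expensive: list[float]) -> float:
    n = len(too_cheap)
    candidates = sorted(set(too_cheap) | set(too_expensive))
    best_price, best_gap = candidates[0], float("inf")
    for p in candidates:
        pct_cheap = sum(1 for x in too_cheap if x >= p) / n          # falls as p rises
        pct_expensive = sum(1 for x in too_expensive if x <= p) / n  # rises as p rises
        gap = abs(pct_cheap - pct_expensive)
        if gap < best_gap:
            best_price, best_gap = p, gap
    return best_price

cheap = [5, 8, 10, 10, 12]        # made-up "too cheap" answers
expensive = [20, 25, 15, 30, 18]  # made-up "too expensive" answers
print(opp(cheap, expensive))  # 12
```

With a real 40-to-100-person sample you'd plot all four cumulative curves; this sketch only locates the too-cheap/too-expensive crossing that defines the OPP.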

Exit criterion

The OPP sits at or above the price you assumed in Step 1. If it sits below your unit-economics break-even, the product isn't commercially viable for this audience. That's either a pricing problem (refine the price) or an audience problem (move upmarket and re-run Step 2).

Citation capsule: Step 5 applies Peter van Westendorp's Price Sensitivity Meter, introduced in 1976, using four questions to find the price band where buyers accept the product as neither too cheap nor too expensive. Forty to a hundred in-audience respondents produce a usable price map; the OPP must clear your unit economics for the procedure to continue.

Step 6: apply go/no-go decision criteria

Step 6 is a formal review, not a gut call. Lay your Step 1 thresholds next to the outputs from Steps 3, 4, and 5. Count the passes. This is when your decision card earns its keep. CB Insights (2024) found 43% of startup failures traced back to product-market fit; a formal gate here is the cheapest insurance.

The four-pass gate

| Gate | Pass criterion | Source |
| --- | --- | --- |
| Problem validated (Step 3) | 60%+ interviewees describe unprompted | Binary field note gate |
| Intent validated (Step 4) | 20%+ corrected top-box OR 10%+ landing page | Chandon-corrected |
| Price validated (Step 5) | OPP at or above planned price | Van Westendorp |
| Audience reachable (Step 2) | Still reachable within 2 days | Channel audit |

Working rule: My four-pass go/no-go gate uses "all four pass = build, one miss = redesign the failing step, two or more miss = kill." This isn't standard across lean-startup literature; it's a rule I'm using on will.it.sell's own idea backlog. The point is to force a decision instead of a feedback loop that never closes. If you've been "iterating" for three months, you've been avoiding Step 6.
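The rule in that working note reduces to a few lines of logic. A minimal sketch using this article's thresholds; for simplicity the intent gate checks only the corrected top-box number (the 10% landing-page alternative is omitted), and all names are illustrative:

```python
# Sketch of the four-pass gate: all pass = build, one miss = redesign
# the failing step, two or more misses = kill.
def gate_verdict(problem_rate: float, corrected_intent: float,
                 opp: float, planned_price: float, reachable: bool) -> str:
    passes = {
        "problem (Step 3)": problem_rate >= 0.60,
        "intent (Step 4)": corrected_intent >= 0.20,
        "price (Step 5)": opp >= planned_price,
        "audience (Step 2)": reachable,
    }
    misses = [gate for gate, ok in passes.items() if not ok]
    if not misses:
        return "build"
    if len(misses) == 1:
        return f"redesign: {misses[0]}"
    return "kill"

print(gate_verdict(0.70, 0.22, 29, 25, True))  # build
print(gate_verdict(0.70, 0.15, 29, 25, True))  # redesign: intent (Step 4)
```

Notice there's no path that returns "keep iterating indefinitely"; every run ends in one of three verdicts.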

Document the decision

  • Write one paragraph with the four numbers and the decision.
  • Date it. Share it with a co-founder or advisor.
  • This is the artefact you revisit if you later question the build.

Exit criterion

You've written and dated a single paragraph with four numbers and one of three verdicts: build, redesign step X, or kill. No paragraph, no decision.

Citation capsule: Step 6 is a formal go/no-go gate mapping Step 1 thresholds onto the outputs of Steps 3, 4, and 5. Four thresholds met produces a build decision; two or more misses produces a kill; a single miss triggers a targeted re-run of the failing step rather than a full restart. CB Insights (2024): 43% of startup shutdowns failed on product-market fit.

Step 7: what to do if the data says no

Most ideas fail here. That's the procedure working. A no-go at Step 6 isn't a dead end; it tells you which assumption was wrong and what to change. Don't treat every failed run as the same failure. They aren't.

Diagnostic questions by failure point

  • Failed Step 3 (problem). The problem is too small or already solved. Look for adjacent problems interviewees mentioned in passing. That's where the next idea lives.
  • Failed Step 4 (intent). The problem is real, but your solution doesn't fit it. The audience may be wrong, or the offer may be wrong. Re-run with a different product shape on the same audience first.
  • Failed Step 5 (price). Unit economics don't work at this audience. Move upmarket or rebuild the cost side.

When to pivot versus abandon

  • Pivot. One axis was wrong: audience, solution form, or price point. You keep the other two.
  • Abandon. The problem itself doesn't have a market large enough to matter. Save the notes, move on.

Archive your failed run

Save the interview notes, the intent data, the price curves. The evidence has value for future ideas in the same adjacent space. More than once, I've pulled a failed run out of the drawer and used the interviews to validate a different product faster.

Exit criterion

You've mapped the no-go to a specific failed step and written a one-sentence "here's what I change next" note. Without that note, Step 7 isn't done, and you're about to re-run the same idea with more optimism.

Citation capsule: Step 7 turns a no-go into useful information by mapping each failed threshold to a specific redesign. A Step 3 fail points to the wrong problem; a Step 4 fail points to the wrong offer; a Step 5 fail points to the wrong audience for the unit economics. Documented failed runs pay forward into future ideas.

Common mistakes in the product validation process

Four mistakes account for most failed runs, and they compound. Hit two in the same week and the data is worse than nothing: it's actively misleading.

  • Mistake 1: Pitching in Step 3 interviews. The Mom Test's core warning. Once you describe your product, you can't trust any signal that follows.
  • Mistake 2: Sample size below 40 in Step 4. Stated-intent variance swamps any signal below that. Chandon et al. (2005) is the canonical reference.
  • Mistake 3: Using friends and family as respondents. Bias invalidates every downstream step. They want you to succeed; that's not a market signal.
  • Mistake 4: Changing the Step 1 thresholds after the data arrives. Motivated reasoning dressed up as analysis. Your Step 1 document should be dated and untouched.

The one I see most often in my own notebooks: I set a 25% intent threshold, land on 19%, and find myself writing "19% is basically 25% in a small sample." It isn't. That's why Step 1 is first.

Product validation procedure checklist

One-screen summary. Print this. Tape it to your wall. Check boxes as you go.

  • [ ] Step 1: buying decision and go/no-go thresholds written down, dated
  • [ ] Step 2: audience narrow enough to reach five people this week
  • [ ] Step 3: 5-10 interviews completed; 60%+ describe the problem unprompted
  • [ ] Step 4: 40+ responses; corrected top-box 20%+ OR landing page 10%+
  • [ ] Step 5: Van Westendorp OPP at or above your planned price
  • [ ] Step 6: four-pass gate reviewed, decision paragraph written and dated
  • [ ] Step 7: if no-go, failure axis identified and run archived

For the methodology behind AI-based intent measurement, see the science behind synthetic consumer research.

Frequently asked questions

What are the steps of product validation?

Seven steps in order: define the buying decision, define the customer, confirm the problem with five to ten interviews, measure purchase intent at scale, run a Van Westendorp price check, apply go/no-go criteria, and act on the result. CB Insights (2024) found 43% of startup failures were product-market fit failures.

How long does product validation take?

Five to seven working days for a focused solo founder. The bottleneck is Step 3 interview scheduling. Budget $75 to $150 per B2C participant in incentives (Drive Research, 2023) to compress recruitment. Part-time founders typically take two to three calendar weeks. Sequence matters more than speed.

How many customer interviews do you need?

Minimum five, based on Nielsen's (2000) finding that five qualitative participants surface 85% of signal. Ten is safer. Jason Lemkin's SaaStr (2014) "20-interview rule" is the higher bar for pre-build commitment. Respondent quality beats quantity. Five sharp interviews beat twenty vague ones.

What is the difference between product validation and market research?

Market research sizes and describes markets; product validation tests a specific buying decision for a specific product. A single focus group alone costs $10,000 to $30,000 (Drive Research, 2023). Validation is narrower, cheaper, and gives you a go/no-go, not a market map.

Can you validate a product without building an MVP?

Yes. Steps 1 to 6 don't require a working product. Landing pages, surveys, and AI synthetic consumer panels all measure intent before you write code. Validating before you spend the months building is the point of the procedure, not an optional extra. If Step 6 says go, then you build.

How do you know when a product is validated?

When all four thresholds in Step 6 are met: problem confirmed by 60%+ of interviewees, corrected purchase intent above 20% (or 10% landing-page conversion), price-sensitivity band at or above your planned price, and audience still reachable. Three of four is not validated; it's a partial signal. Re-run the failed step.

What if my product is too niche for interviews?

Narrow audiences are a feature, not a bug. They make Steps 3 to 5 cheaper. If you can't find five people to interview in two days, the problem is reachability, which is a Step 2 fail, not a validation fail. Rewrite your audience. See the target audience guide for sharpening techniques.

Conclusion

Seven steps, one working week, a four-pass gate. It doesn't guarantee your idea works. It guarantees that if you run it honestly, you'll know whether to build, redesign, or kill before you've spent the expensive months. Most ideas don't pass. That's not a waste; that's the procedure working on the cheap end.

This is the procedure I'm using to validate will.it.sell's own product ideas right now. I'm building a tool to compress Step 4 from weeks to minutes using AI consumer panels instead of traditional surveys. If you want to try it for free on your own idea while I'm in beta, register an account and send me a message. I'll drop free credits in your account so you can run Step 4 against a simulated panel in about 10 minutes.

For the wider method comparison, see 8 product validation methods ranked by cost and speed. For the Step 4 deep dive, how to measure purchase intent.

Stop guessing. Start knowing.

