
Lean Validation

Eric Ries · theleanstartup.com

Every business idea is a bundle of assumptions. You assume the problem exists. You assume people will pay for a solution. You assume you can build it. You assume you can reach the right customers. Most of these assumptions are wrong, and the wrong ones will kill your business. Lean Validation, drawn from Eric Ries's Lean Startup methodology, says: find the assumption most likely to be fatal, and test it before you invest in anything else.

The Build-Measure-Learn loop is the engine of Lean Validation. You build the smallest possible experiment to test one specific assumption. You measure the results against pre-defined success criteria. You learn whether to continue, change direction, or stop. The goal is speed through this loop. Weeks, not months. Days, not weeks. The faster you learn whether an assumption is true, the less time and money you waste building something nobody wants.
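The loop can be sketched as a simple decision function. This is an illustrative sketch only, not part of the methodology's formal definition; the experiment runner and threshold are hypothetical placeholders.

```python
# Hypothetical sketch of one pass through the Build-Measure-Learn loop.
# The experiment runner and success threshold are illustrative, not prescribed.

def build_measure_learn(run_experiment, success_threshold: float) -> str:
    """Run one loop iteration and decide what to do next."""
    result = run_experiment()             # Build: the smallest experiment that tests the assumption
    passed = result >= success_threshold  # Measure: compare against pre-defined criteria
    # Learn: continue if the assumption held, change direction if it didn't
    return "persevere" if passed else "pivot"

# e.g. a landing-page test that converted 8% against a 5% bar
print(build_measure_learn(lambda: 0.08, 0.05))  # -> persevere
```

The point of writing it this way is that the threshold is an input fixed before the experiment runs, not something chosen after seeing the result.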

The Minimum Viable Product (MVP) is the most misunderstood concept in the framework. An MVP is not a stripped-down version of your product. It's an experiment designed to generate learning. A landing page with a signup button tests demand. A concierge service done manually tests willingness to pay. A video demo tests interest. None of these require writing a single line of product code. The point isn't to launch something small. The point is to learn something real.

How Distil uses it

Lean Validation is most active during the exploration phase, where Distil shifts from understanding the problem to evaluating the solution. The key question becomes: what's your riskiest assumption, and what evidence do you have that it's true? Distil pushes you to name that assumption explicitly, not hide behind vague optimism.

The framework directly shapes Distil's Idea Maturity score. Distil applies an evidence hierarchy when evaluating your answers. Opinions ("people told me they'd use it") rank lowest. Actions (signups, email captures) rank higher. Money (pre-orders, paid pilots) ranks highest. Repeated use and referrals are the strongest possible signal. If your evidence is all opinions, your maturity score will reflect that.
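The evidence hierarchy above can be pictured as a simple ranking. The tiers and weights below are illustrative only; they are not Distil's actual scoring model.

```python
# Illustrative sketch of the evidence hierarchy described above.
# Tier names and weights are hypothetical, not Distil's real scoring.

EVIDENCE_TIERS = {
    "opinion": 1,      # "people told me they'd use it"
    "action": 2,       # signups, email captures
    "money": 3,        # pre-orders, paid pilots
    "repeat_use": 4,   # repeated use and referrals: the strongest signal
}

def strongest_evidence(evidence: list[str]) -> str:
    """Return the highest-ranking kind of evidence collected so far."""
    if not evidence:
        return "none"
    return max(evidence, key=lambda kind: EVIDENCE_TIERS.get(kind, 0))

print(strongest_evidence(["opinion", "action"]))  # -> action
```

If all you can pass in is "opinion", the strongest evidence you hold is still an opinion, which is exactly what a low maturity score reflects.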

During the deep dive, Lean Validation continues to influence the conversation. If you claim traction, Distil asks what your success metrics were and whether you defined them before the experiment or after. Retroactive goal-setting is a common trap. If everything about your idea seems to be "working," Distil will challenge whether you've been testing hard enough.

Key principles

  • Identify your riskiest assumption before building anything. If you can't name it, you haven't thought hard enough about what could go wrong.
  • An experiment without a success metric is just a demo. Define what "pass" and "fail" look like before you run the test. Not after.
  • Opinions are not evidence; behaviour is evidence. What people do matters infinitely more than what they say they'll do.
  • If everything is working, you're not testing hard enough. The questions you're afraid to ask are usually the most important ones.
  • Pivot or persevere, but never ignore the data. When evidence contradicts your belief, update the belief, not the evidence.
  • Speed of learning matters more than quality of product. A rough experiment that teaches you something real in a week beats a polished prototype that teaches you nothing in three months.
  • Every feature is a bet. Treat it that way. What assumption does this feature test? What happens if the assumption is wrong?

Common mistakes

  • Building an MVP that's actually a full product. If your "minimum" viable product took six months to build, it's not minimum. Strip it down further. What's the one thing you need to learn?
  • Testing the easy assumptions first. Founders often validate the things they're already confident about ("people use phones") while ignoring the scary unknowns ("people will pay £20/month for this").
  • Moving goalposts after the experiment. If your landing page got 2% signups and you originally said 5% was the threshold, that's a fail. Don't retroactively decide 2% is "actually pretty good."
  • Confusing activity with validated learning. Writing code, attending conferences, and posting on social media all feel productive. But unless they test a specific assumption, they're just busywork.
  • Pivoting too quickly or not quickly enough. One bad data point isn't a reason to pivot. Three consistent signals that your assumption is wrong are. Learn the difference.
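The goalpost-moving trap above has a simple structural fix: record the threshold in a form that cannot be edited after the result comes in. A minimal sketch, with numbers mirroring the landing-page example; the class and fields are hypothetical.

```python
# Hypothetical sketch of locking in a success metric before an experiment runs.
# frozen=True means the threshold cannot be changed after seeing the data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Experiment:
    assumption: str
    metric: str
    pass_threshold: float  # defined BEFORE the test runs

    def verdict(self, observed: float) -> str:
        return "pass" if observed >= self.pass_threshold else "fail"

landing_page = Experiment(
    assumption="people will sign up for this",
    metric="signup conversion rate",
    pass_threshold=0.05,  # 5%, set up front
)

print(landing_page.verdict(0.02))  # 2% signups -> "fail", however good it feels
```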

Recommended reading

  • 📖 The Lean Startup by Eric Ries. The foundational text on validated learning, MVPs, and the Build-Measure-Learn loop.
  • 🌐 theleanstartup.com. Official resources, case studies, and community.

Ready to test your riskiest assumption?

Distil will help you identify what to test and evaluate the evidence you have.

Test My Idea