A large Finnish company once spent over a million euros and the better part of two years building a digital service that had never been validated with a real customer. By the time the launch failed, three things were obvious: the problem they thought they were solving was much smaller than they'd assumed, the people they'd built it for didn't behave the way the personas predicted, and the pricing model couldn't survive contact with the market.
What's striking isn't that the project failed. Plenty of projects fail. What's striking is that every assumption that turned out to be wrong could have been tested in a few focused weeks, before the build started, for a fraction of one sprint's cost.
Nobody tested them. The build started anyway.
This is the most expensive mistake in product development, and it's the most repeatable one.
Every product decision is a bet
Every strategy, every roadmap, every feature decision rests on a stack of assumptions. Assumptions about the market. Assumptions about the customer. Assumptions about what they'll pay for. Assumptions about the competition, the channel, the technology, the team.
Most organisations treat these assumptions as facts. They're not facts. They're bets.
Roger Martin has written about this for years. You cannot prove a strategy correct in advance. You can only commit to it and find out. Which means the rational response to any big product decision is the same response you'd give to any other bet worth real money: reduce the downside before you place it.
That is what concept validation does. It reduces the downside.
Not by guaranteeing success. Nothing guarantees success. But by making sure that when you commit the budget, the headcount, and the time, you're committing to something that has a fighting chance.
The real cost of skipping validation
When a product fails in the market, the financial loss is the visible part. It's rarely the biggest part.
The bigger cost is momentum. When a launched product underperforms, teams don't calmly gather evidence and adjust. They defend what they built. They spin. They blame the market, the timing, the sales team, the pricing. The original assumptions never get revisited because admitting they were wrong feels like admitting the whole effort was wasted.
The even bigger cost is organisational confidence. After one bad launch, leadership becomes cautious about the next innovation. Investment in new products slows down. Risk aversion creeps into every review meeting. Two years later the company wonders why they can't ship anything new.
All of this flows from one decision: skipping the cheap step that would have caught the problem early.
What is concept validation, really?
Concept validation is the practice of testing your riskiest assumptions before you commit to building the final thing.
It lives at the intersection of design, product, and business leadership. It's a design practice, because the tools (prototypes, interviews, experiments, mockups) come from design disciplines. It's a product practice, because it shapes what gets built and what doesn't. And it is, most importantly, a business leadership practice, because it determines where the next million euros of investment actually goes.
Anyone who calls it just a design method undersells it. Anyone who calls it just business strategy misses where the tools come from. It's all three, and the companies that do it well treat it as a joint responsibility between product, design, and leadership.
The three things you validate before you invest:
Desirability. Do people actually want this? Not "would they use it if it were free," but "is there a real problem here that they're willing to pay to solve?"
Viability. Does the business model work? Can you reach these customers profitably? Does the pricing cover the cost of serving them? Do the unit economics survive at scale?
Feasibility. Can you actually build and operate it? With the team you have, the budget you have, the timeline you need?
Service design is often the strongest validation tool Nordic companies already have in the house. It handles desirability well and, done properly, takes a serious run at feasibility through service blueprints and operational mapping. The leg that usually stays weakest is viability: whether the business model actually captures value, whether the pricing survives contact with a customer's willingness to pay, whether the unit economics hold at scale. Most service design projects touch viability in a slide or a workshop exercise. Few pressure-test it. That's why a well-executed service design project can still lead to an investment decision that turns out to be wrong: the stool had three legs on paper and one short one in practice.
Real concept validation covers all three.
Why AI makes validation faster, not optional
Something changed in the last two years, and most leadership teams haven't updated their mental models yet.
The shift isn't in clickable prototypes — those have always been a matter of days, mostly spent drawing screens. The shift is in working prototypes: interactive software with real logic and data, the kind that used to require a developer and two sprints. That now takes one person one day. A landing page with five variations and a working ad campaign takes a few hours. Ten customer interview transcripts can be synthesised into themes in the time it takes to make coffee. Interactive mockups that look and feel like real software can be generated from a brief.
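To make one of those claims concrete, here is a minimal sketch of the transcript-synthesis step, assuming the transcripts sit as plain-text files in an `interviews/` folder and an OpenAI-compatible API key is configured. The folder layout, prompt, and model name are illustrative, not a recommendation.

```python
# A minimal sketch of "ten transcripts into themes", under the assumptions above.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical layout: one plain-text file per interview.
transcripts = [p.read_text() for p in sorted(Path("interviews").glob("*.txt"))]

prompt = (
    "You are a product researcher. Across the interview transcripts below, "
    "identify the recurring themes about the customer's problem. For each theme, "
    "give a short name, a one-line summary, and a supporting quote.\n\n"
    + "\n\n---\n\n".join(transcripts)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice; any capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a first-pass theme list to review by hand
```

None of this replaces reading the transcripts; it compresses the first pass from days to minutes.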
The shift has one obvious implication and one that is less obvious.
The obvious one: validation is now so cheap that skipping it is no longer a budget decision. It's a discipline decision.
The less obvious one: the excuses for skipping it have evaporated. "We don't have time to prototype" used to be a credible objection. It isn't any more. "We don't have the designers" doesn't hold either. The tools are available to anyone who takes the time to learn them.
I keep a small AI portfolio of dummy prototypes that show what this looks like in practice. Most of them are stripped-down versions of real client work I can't publish as case studies for NDA reasons, rebuilt with fake data so I can show the mechanics without breaking confidentiality. You can look through them at /ai-portfolio. The point is not the prototypes themselves. The point is how fast something concrete can exist for a team to react to.
AI doesn't remove the need to validate. It removes the excuse.
"But can't we just ship to production and test with real users?"
This is the argument I hear most often now, especially from leaders who've read about tech companies shipping dozens of changes a day. The logic goes: in the AI era, just code fast, get it into production, and learn from real user behaviour on real data.
It's a reasonable-sounding argument. It's also wrong for most organisations most of the time. Here's why.
Alignment before code, not after. In a large organisation, "ship fast" is not a technical problem. It's a political problem. When you ship to production without internal alignment, every stakeholder (sales, legal, compliance, marketing, data, customer service, finance) finds their issue only after the product is live. A prototype is the cheapest way to get all of those issues on the table before they cost millions to fix. The prototype is an alignment tool at least as much as a validation tool, and in enterprise contexts the alignment value often matters more than the customer feedback.
You're validating business value capture, not just customer desire. "Users love it" is not a business case. The real job of a prototype inside a large organisation is to test whether the business can actually capture value from this idea: whether the channel exists, whether the pricing model is workable, whether support can scale, whether operations can sustain it, whether the legal structure permits it. You don't see any of this from a production version, because by the time you're in production you are already committed. The prototype surfaces the business-side objections at a point where they can still change the direction.
Production data answers the wrong question. Production data tells you what users did on the path you assumed. It doesn't tell you why, and it doesn't tell you what they would have wanted instead. Concept validation questions are qualitative at their core: what problem, for whom, why does it matter enough to pay for. Those questions don't get answered by an A/B test in production. A/B tests are precision instruments. They give you better answers to the question you already know how to ask. Concept validation helps you find the right question in the first place. Confusing the two is the single most common mistake I see in product leadership.
A prototype is a roadmap tool. One well-run prototype gives you months of directional clarity on the roadmap. It tells you what to build, what to skip, what to revisit later, and what order to do it in. A poorly validated feature that ends up in production forces you to tear it out and rebuild, and the rebuild always costs more than the prototype would have, in both calendar time and team confidence. The prototype is a fraction of that cost and gives you the same learning earlier.
"Ship and test" works, but only when you already know what you're testing. This is the nuance that gets lost in the debate. If you have a strong hypothesis and you need precision (is the conversion rate 2% or 3%, does the green button outperform the blue one), then yes, an A/B test in production is the right tool. If you're uncertain about direction (should we build this at all, for this segment, at this price), then a prototype is the right tool. Leaders regularly reach for the precision tool when they need the direction tool, and they end up with high-confidence answers to the wrong question.
None of this is an argument against speed. I am obsessed with speed. It's an argument for using the right tool at the right level of uncertainty.
What concept validation is NOT
It is not PowerPoint personas painted in pastel colours and then filed away.
It is not a prototype that looks beautiful and answers no question.
It is not "let's build an MVP and see." An MVP without a hypothesis is just a smaller version of the same gamble, and smaller gambles are still gambles.
If a validation exercise cannot be summarised by "here was the question, here is what we learned, here is what we'll do differently," it wasn't concept validation. It was expensive theatre.
Five lessons from doing this for two decades
1. The question comes before the prototype. If you don't know what question you're trying to answer, no prototype will help you. The question is the work. The prototype is the cheap part. Start by writing down the one thing that, if it turns out to be false, means the whole idea collapses. Then design something that tests exactly that.
2. Five interviews beat a hundred-page spec. I once worked with a company that had spent months and a significant budget writing a detailed specification for a new product. Five customer conversations revealed that the core problem definition was wrong. The spec was technically excellent and commercially irrelevant. The speed of learning from conversations with real humans is still the most underused superpower in corporate product development.
3. AI doesn't remove the need to validate. It removes the excuse. Every objection about cost, time, and resourcing that used to make validation hard has been removed by current tools. Leaders who still skip validation now are making an active choice, not a budget-forced one. Own the choice.
4. Validate your riskiest assumption first, not your easiest. Most teams test what is convenient to test. This is backwards. Start with the assumption whose failure would be most expensive, and design the cheapest possible test for it. If that test kills the idea, you've saved the rest of the budget. If it survives, you've earned the right to keep going.
5. Test before you commit, and apply it to your own business too. The same discipline that works for client product launches works for your own service offering, your own pricing model, your own new hire, your own market entry. "Test before you invest" is not just a product methodology. It's a way to run a business.
Many ways to validate. Pick the right one for the question.
There is no single correct method. Different risks demand different tests.
- A landing page and a small ad budget tell you whether demand even exists (a sketch of reading out such a test follows this list).
- Five to ten customer interviews tell you whether the problem is real and painful enough.
- A paper prototype or mockup tells you whether the concept is understandable at all.
- A Wizard of Oz experiment tells you whether the workflow works before you build it.
- A pre-sale or pre-order tells you whether people will actually pay, not just say they would.
- An A/B test on existing traffic tells you which of two versions performs better.
- An AI-built interactive prototype (see the examples in the AI portfolio) doubles as an alignment tool that works in days, not weeks, and lets both internal stakeholders and real customers react to something concrete.
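As an example of what reading out the landing-page test might look like, here is a minimal sketch that checks an observed signup rate against a threshold committed to in advance. The visitor counts and the 5% bar are hypothetical.

```python
# Did the signup rate clear a threshold you committed to *before* launching?
# All numbers are hypothetical.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a conversion rate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    spread = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - spread, centre + spread

THRESHOLD = 0.05           # pre-registered: "at least 5% of visitors leave an email"
visitors, signups = 400, 31

low, high = wilson_interval(signups, visitors)
print(f"rate={signups / visitors:.1%}, 95% CI=({low:.1%}, {high:.1%})")
if low >= THRESHOLD:
    print("Demand signal clears the bar: keep going.")
elif high < THRESHOLD:
    print("Demand signal misses the bar: revisit the concept.")
else:
    print("Inconclusive: buy a little more traffic before deciding.")
```

The point is the structure, not the statistics: the question and the bar existed before the data did.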
The cheapest mistake in product development is the one you catch before the budget clears. The most expensive is the one you discover six months after launch. The method follows the question. The question comes first.

