Steven Henty · 12 min read

The discipline most founders skip

We had everything in place. The channel partners were lined up and briefed. The systems were ready. We'd spent months building this Wizard of Oz service where we'd do everything behind the scenes, white glove, one customer at a time, while the app pretended to handle it all. We'd rehearsed the handoffs. We'd prepared for volume.

The day we went live, I remember checking my phone. Then checking it again. Refreshing the dashboard. Waiting for the first call. It didn't come. Neither did the next day's. By the end of the first week, nothing.

Crickets.

We were expecting to be inundated. Our channel partners had been adamant: the demand is there, the customers are ready, you need to move now. We felt real pressure to deliver. The reality was nothing like that. Not a trickle that needed time to build. Not a slow start with promising signals underneath. Just silence.

But instead of asking what that silence meant, we kept banging away at the channel. Maybe we need different messaging. Maybe we need more partners. Maybe if we just tweak this one thing. There was always one more thing to tweak.

Six months later we were still tweaking. Still pushing. Still explaining to ourselves why the customers were just around the corner. The whole thing should have died in week three. The writing was on the wall. We just didn't want to read it.


Why this keeps happening

This isn't an unusual story. If you've built anything, you'll recognise it.

The problem is that building is addictive. It's easy to get swept away by the conviction that you've got something everybody's going to want, and easy to be seduced by the pull of adding just one more thing. Before you know it, the lost days turn into weeks, then into months.

And when the results don't come, that same conviction turns into explanation. We've invested too much to stop now. If we just tweak the approach, the customers will come. There's always one more thing to try. It only takes one person who hasn't committed to what "enough" looks like, and every week the goalposts move a little further. Nobody has to admit they've moved.

The pattern is always the same. Intuition over evidence. Conviction over curiosity. Not that intuition and conviction are wrong. They're essential. But they need to be balanced with evidence and curiosity, and most of the time they aren't.

Teresa Torres, who wrote Continuous Discovery Habits, puts her finger on one version of this: teams interview customers to confirm what they already believe, rather than to discover what they don't know. The research happens. The conversations happen. But the question going in is "am I right?" not "what don't I know?" And so the evidence gets shaped to fit the conviction, rather than the other way around.


What Disciplined Entrepreneurship gets right

Bill Aulet's Disciplined Entrepreneurship is the best antidote I've found to this pattern. The book lays out 24 steps for building a startup, but the principle underneath all of them is simple: the undisciplined founder wastes time being optimistic about the wrong thing. The disciplined founder earns the right to be optimistic because they've verified something real. That's not pessimism. It's just not wasting time.

The concept that sticks with me most is the beachhead market. Your market is not "everyone." You cannot validate against "everyone." A beachhead market is a deliberately small, winnable segment. The one place you plant your flag first and take completely before you expand.

Choosing a beachhead forces you to be specific about who the customer actually is, what they actually need, and what a real win looks like. Without that specificity, no amount of customer research will give you a clear signal. You'll just be collecting noise and calling it data.

Aulet is also explicit about something most founders would rather skip: there's no substitute for talking to actual customers. Reports, desk research, analyst opinions: none of it counts. Direct contact. Interviews. Observation. Prototype testing. The customer is the most important element of the entire framework, and all 24 steps begin and end with understanding a specific customer in a specific context.


Kill criteria: the discipline that changes everything

Here's the thing that would have saved us six months on that experiment: kill criteria written before we started.

Kill criteria are the conditions under which you stop. Not "we'll see how it goes." Not "if we don't see traction." Specific, measurable, pre-committed conditions. If X doesn't happen by date Y, we stop. No renegotiation.

The reason they have to be written before the experiment starts is the same reason Ulysses had his men tie him to the mast before the ship reached the Sirens. He knew that once he could hear the song, he would want to change course. He would beg, plead, rationalise. So he pre-committed. He bound himself while his judgment was still clear, precisely because he knew his future self would try to wriggle free.

That's the Ulysses contract. And it's exactly what kill criteria are: a pre-commitment made when you have clear judgment, designed to bind your future self when your judgment is impaired by sunk cost and emotional investment.

Part of what makes this hard is that you can never truly prove nobody wants what you're building. There's always another segment to try, another message to test, another door to knock on. That ambiguity is the trap. Because if you can never prove a negative, you can always keep going. Kill criteria don't solve that philosophical problem. What they do is decide in advance how much evidence of absence is enough. You're not trying to prove nobody wants it. You're agreeing on what "enough signal to stop" looks like, before you're too invested to see it clearly.

Without kill criteria, what happens is predictable. The experiment underperforms. The team discusses it. Someone says "we should give it more time." Someone else says "maybe we should tweak the approach." Nobody says "we should stop," because stopping feels like failure and nobody defined what failure actually looks like. So the goalposts move, the timeline extends, and six months later you're still banging on doors that nobody is answering.

With kill criteria, the conversation changes completely. "We said we'd need 50 sign-ups by March. We have 3. We stop." There's nothing to renegotiate. The decision was made when the thinking was clear.

This matters just as much if you're building alone as it does if you have co-founders. Solo, kill criteria are a commitment to yourself, a way of holding your future self accountable when conviction starts to cloud the evidence. With co-founders, they do something equally valuable: they create alignment before the pressure arrives. Everyone agreed on the conditions when nobody was emotionally invested. When the moment comes, there's no ambiguity about what you all said you'd do.

If you're on a product team, you'll notice that kill criteria look a lot like key results. They should. The best OKRs measure outcomes, not outputs, and kill criteria are the same thing applied to experiments: not "did we ship it?" but "did anyone care?"

| Vague criteria | Kill criteria |
| --- | --- |
| "If the numbers look good we'll continue" | "50 sign-ups by March 15 or we stop" |
| "We'll see if there's traction" | "3 paying customers in 60 days or we pivot" |
| "If customers seem interested" | "Fewer than 10% of pilot users return in week 2: kill the feature" |

The left column gives you permission to keep going forever. The right column forces a decision.
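One way to make the pre-commitment concrete is to write the criterion down as data rather than as a discussion. A minimal sketch in Python (the metric names, targets, and dates here are illustrative, not from the original experiment):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: the criterion can't be quietly edited later
class KillCriterion:
    metric: str
    target: int
    deadline: date

    def verdict(self, actual: int, today: date) -> str:
        if actual >= self.target:
            return "continue"      # target met: the bet earned more time
        if today >= self.deadline:
            return "stop"          # deadline passed, target missed: no renegotiation
        return "keep running"      # still inside the window

# "50 sign-ups by March 15 or we stop"
criterion = KillCriterion(metric="sign-ups", target=50, deadline=date(2025, 3, 15))
print(criterion.verdict(actual=3, today=date(2025, 3, 16)))  # stop
```

The point isn't the code; it's that the three possible answers are enumerated in advance, and "we should give it more time" isn't one of them.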


The evidence hierarchy

Not all signal is equal. This is something I wish I'd understood more clearly earlier.

The weakest signal is explicit demand. Someone telling you "I would use this." People say that all the time. It costs them nothing to say it. Surveys, focus groups, interviews where someone says "yes, I'd definitely buy that": this is the least reliable data you can collect. It feels good. It feels like validation. It isn't.

Go back to that opening story. Our channel partners were adamant. The demand is there, the customers are ready. But that was their read of their customers, secondhand and filtered through their own interests. They weren't lying. They believed it. But belief isn't the same as evidence, and their customers' stated intentions weren't the same as actual demand. When we opened the doors, the customers didn't show up. The channel partners' conviction was no substitute for going and finding out directly.

Stronger signal comes from behaviour. Clicks. Sign-ups. Payments. Someone actually doing the thing, not just saying they would. The gap between what people say they want and what they actually do is enormous, and most founders are gathering the wrong type of signal.

The strongest signal, and the hardest to see, is latent demand. Adjacent frustration. Compensating behaviours. Workarounds that people have built because the thing they actually need doesn't exist yet.

The iPhone is the most cited example of this, and with good reason. Before 2007, nobody was asking for a device that combined a phone, an iPod, and an internet browser. The explicit demand wasn't there. But the latent signal was everywhere: people carrying three devices. People hacking their phones to do things phones weren't designed to do. The frustration was real. It was just expressed through behaviour, not words. Apple read that signal. Behavioural rather than stated preference. They looked at what people were doing, not what people said they wanted.

Study after study finds that the majority of shipped features are rarely or never used. The exact figure is debatable, the pattern is not. Most of what gets built didn't need to be built. The signal was there; it just wasn't the signal anyone was looking for.


The loop, not the gate

The most common mental model mistake is treating validation as a phase. Something you do before you build, a gate you pass through once and then you're free.

I used to think that way too. The conventional approach was to delay building for as long as possible until you could really validate. That made sense when building was slow and expensive.

Here's the counterintuitive part: now, with vibe coding and rapid prototyping, the gap between discovery and build is collapsing. You can get something into the hands of users in days, not months. And it's tempting to think: what have I got to lose? Just build it, see if people want it, move on.

The problem is that "just build it" is still a bet. You build the thing in a few days, discover nobody wants it, build another thing, discover nobody wants that either. Before you know it, six months have gone and maybe one of them worked. You're in exactly the same place as if you'd spent six months building the wrong thing the slow way. The only difference is that you mistook activity for progress the whole time.

The bottleneck is no longer the building. The bottleneck is the validation. Speed doesn't change that. It just makes it easier to avoid facing it.

Both things are true at the same time. Build fast. Validate fast. Get it into the hands of users as quickly as possible. Talk to users as much as possible. But the loop is still there; it's just got tighter. The discipline isn't in delaying the build. The discipline is in staying honest about what the evidence is telling you while you build. Building without that honesty is just moving faster in the wrong direction.

Torres calls this continuous discovery: the idea that discovery is not a phase but a weekly habit, something teams do alongside building, not before it. Her opportunity solution tree forces you to name the assumption you're testing and, if the assumption is wrong, prune the branch. You don't renegotiate; you move on.

The validation loop never ends. Every feature request starts with a problem worth solving. Every pivot goes back to the beginning. Every new market segment means asking the same questions again. The loop doesn't stop when you launch. It doesn't stop when you find product-market fit. It just gets tighter.


What this means for how you work

Before your next experiment, three things.

Write kill criteria before you start. Specific conditions, measurable outcomes, a date. "If X hasn't happened by Y, we stop." Write them down. Share them with the team. And when the results come in, let them inform the next experiment, not rewrite the rules of this one.

Question what type of signal you're collecting. If your validation is based on people telling you they'd use something, that's the weakest signal available. Look for behaviour instead. Look for workarounds. Look for adjacent frustration, the things people are already doing that tell you there's a real problem underneath.

Treat validation as a loop, not a gate. You're never done. The question is never "have we validated?" It's "what are we validating right now?" Every feature, every experiment, every new direction goes back to the same question: is there a real problem worth solving?

Building is more fun than validating. It always will be. And now that building is faster and easier than ever, the temptation to skip the discipline is stronger than ever too. But the skill of defining exactly the core value of what you're offering and testing whether anyone actually wants it: that skill hasn't become less important. If anything, it's become more important, because now you can waste time faster than ever before.

The discipline most founders skip isn't complicated. It's just uncomfortable. It means writing down the conditions under which you'll stop before you've fallen in love with the idea. It means looking for evidence that you're wrong, not just evidence that you're right. It means treating silence as an answer, not as a problem to be solved with more noise.

That experiment we ran? The one that should have died in week three? The evidence was there. We just weren't looking for it. Or maybe we were, but we didn't want to see what it was telling us.

Next time, I'll tie myself to the mast first.

Coming soon: why surveys lie and what to do instead.
