Your MVP is your first impression, both for your startup and for you as an entrepreneur.
I wrote a piece two weeks ago about the increasing number of trash MVPs coming to market and what the “minimum” part of MVP should actually mean. I got a lot of responses back asking: When do we know we can launch our MVP without courting disaster?
We’re always going to be a little nervous at launch. We should be. But we can’t wait forever, so here’s a high-level checklist I use to give my MVP its best chance, with some real-world examples thrown in.
1. Make sure everyone is aware of our MVP's goals.
We’re doing an MVP to test the basic hypothesis of our product. To put it bluntly, our MVP is trying to determine whether or not our product deserves to exist.
We should launch with core features only, those that are going to prove our hypothesis. Core features should run exactly as they will in the actual product, but any supporting features should be manual, faked, or just plain turned off.
We do this for two reasons: 1) to get to market as quickly as possible, and 2) to reduce the noise those additional features would add to our test results.
At my current startup, Spiffy, we launched an OBD-II reader (it plugs into the little port down by your steering wheel) that lets customers of our app read whatever code lit up their check engine light. Neat. Simple.
There are dozens of implications and plans we have for this new product, and some of them are now underway, but the MVP had to do one thing: Read the code. That alone was an incredibly complex task, which we only fully realized during the MVP launch itself.
2. Define success, failure, and what to do when the MVP lands between the two.
Success means X many of our customers use the MVP this often, in these ways, for this long, and are this engaged with it. Failure means the opposite.
We’ll also need a plan for what to do next when our MVP neither succeeds nor fails. What did we miss? What do we tweak? Do we do another MVP? How much time do we spend manually polling and surveying customers?
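Those definitions are easy to fudge after the fact, so it helps to write them down as actual thresholds before launch. Here's a minimal sketch of what that could look like; every metric name and number below is a hypothetical example, not a figure from this article.

```python
# Codifying MVP success criteria up front, so "success," "failure,"
# and "inconclusive" are decided before we see the results.
# All metric names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MvpMetrics:
    weekly_active_users: int
    sessions_per_user: float
    median_session_minutes: float

# Hypothetical thresholds, agreed on by the whole team pre-launch.
SUCCESS = MvpMetrics(weekly_active_users=500, sessions_per_user=3.0,
                     median_session_minutes=5.0)
FAILURE = MvpMetrics(weekly_active_users=50, sessions_per_user=1.0,
                     median_session_minutes=1.0)

def verdict(actual: MvpMetrics) -> str:
    """Classify an MVP run as success, failure, or the in-between case."""
    fields = ("weekly_active_users", "sessions_per_user",
              "median_session_minutes")
    above = all(getattr(actual, f) >= getattr(SUCCESS, f) for f in fields)
    below = all(getattr(actual, f) <= getattr(FAILURE, f) for f in fields)
    if above:
        return "success"
    if below:
        return "failure"
    return "inconclusive"  # the middle ground that needs its own plan

print(verdict(MvpMetrics(600, 3.5, 6.0)))  # success
print(verdict(MvpMetrics(200, 2.0, 3.0)))  # inconclusive
```

The point isn't the specific numbers; it's that the team agrees on them before launch, so nobody gets to move the goalposts once real data comes in.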
Here’s a real-world example of that. At my last startup, Automated Insights, we automatically created written stories from data. For our MVP, we stood up over 800 websites with robust stories on every pro and college team, game, and player in the big three sports, based on nothing but the stats. Man, that was fun.
Our MVP was a huge technical success, but usage wasn’t where we wanted it. Following up, we discovered that we had completely overstated the need for this content at the pro and most of the college level, as almost all those teams already had at least one beat writer covering them. Then we realized that a small number of smaller colleges didn’t have any writers, and their engagement was off the charts.
That told us that we needed to aim where humans couldn't and wouldn't write, not where they were already writing. The size of the audience never mattered; it was the level of engagement that counted. Our first real client was Yahoo Fantasy Football, which is the exact definition of where humans couldn't write but the engagement was stratospheric.
3. Test the shit out of it.
Before the first customer sees our MVP, we should go through every use case, every edge case, and every crazy thing we think our customers will do out in the real world.
We shouldn't just make sure the MVP doesn't break. It will break, and we need to figure out the likelihood of that happening and what we'll do about it.
We also need to create a system that allows us to keep testing during and after launch. That means we should make sure we can track and measure everything from who and where our customer is to how they found us to how long it took them to onboard to how often they use the product and for what.
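One lightweight way to get that kind of tracking in place for an MVP is structured event logging. Here's a minimal sketch; the event names and fields are illustrative assumptions, standing in for whatever analytics pipeline you actually use.

```python
# A minimal sketch of instrumenting an MVP so every interaction is
# measurable: who the customer is, how they found us, how onboarding
# went, and what they use. Event names and fields are assumptions.
import json
import time

def track(event: str, user_id: str, **props) -> str:
    """Record one usage event as a JSON line (a stand-in for a real
    analytics API or log shipper)."""
    record = {"event": event, "user_id": user_id,
              "ts": time.time(), **props}
    return json.dumps(record)

# The questions from the text, expressed as events:
track("signup", "u42", source="referral", region="home_city")
track("onboarding_complete", "u42", minutes=7)
track("feature_used", "u42", feature="read_obd_code")
```

Even something this crude, shipped on day one, beats trying to reconstruct usage from memory when the results come in fuzzy.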
Since our MVP is going to break, we’ll need to have eyes on the usage at all times, so we’ll need a feedback loop and one or more monitors. Then we’ll need the technical architecture to be flexible enough so we can turn some or all of our MVP on and off. We should be able to isolate a user, drop a feature, turn off a subset of users, deactivate certain functionality, etc. without having to wipe out the whole test.
But if for whatever reason we need to stop the whole thing, we’ll need a kill switch too.
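The toggles described above amount to a small feature-flag layer with a global kill switch on top. Here's a minimal sketch of that idea; the class and flag names are hypothetical, not from any particular library.

```python
# A minimal feature-flag sketch: isolate a user, drop a feature, or
# stop the whole test, without wiping everything out. Names are
# illustrative assumptions.
class FlagBoard:
    def __init__(self):
        self.killed = False             # the global kill switch
        self.disabled_features = set()  # features turned off mid-test
        self.blocked_users = set()      # users isolated from the test

    def kill(self):
        """Stop the whole MVP in one move."""
        self.killed = True

    def disable_feature(self, feature: str):
        self.disabled_features.add(feature)

    def block_user(self, user_id: str):
        self.blocked_users.add(user_id)

    def is_enabled(self, feature: str, user_id: str) -> bool:
        """Check at every entry point before serving a feature."""
        return (not self.killed
                and feature not in self.disabled_features
                and user_id not in self.blocked_users)

flags = FlagBoard()
flags.disable_feature("code_history")           # drop one feature
flags.block_user("u7")                          # isolate one user
print(flags.is_enabled("read_code", "u42"))     # True
print(flags.is_enabled("code_history", "u42"))  # False
flags.kill()                                    # the kill switch
print(flags.is_enabled("read_code", "u42"))     # False
```

The design choice that matters here is granularity: because every feature check goes through one place, we can turn off exactly as much of the MVP as the situation demands, and no more.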
Once we’re prepared for the worst, we need to make sure when our MVP does what it’s supposed to do, it does it correctly, efficiently, and beautifully, every time.
4. Bring in friends and family.
Call it a beta. Call it a pilot. We'll find people we already know who resemble our target market. We'll turn them into customers, and by that I mean they should actually be customers.
We’re not choosing them because they’ll be nice to us, we’re choosing them because they’ll talk to us. So make them buy the MVP through the proper channels (reimburse them, of course). Then make sure they get started. Then let them fail or succeed on their own.
We’ll want to localize our MVP market as much as possible. If we can stick to one location, great. If we can choose one industry, awesome. But none of that is necessary, as long as we scrounge up as many MVP customers as we can afford.
Going back to our OBD-II reader at Spiffy: we actually gave the readers away, in our home city, and only with a premium service. This way we knew we were hitting our target market and we knew they wouldn't mind if we reached out.
We didn’t include any special treatment, like a secret support number, but when something did go wrong, we bent over backwards to fix the problem. This gave us a huge opportunity to learn as much as we could.
5. Make the call.
Earlier I alluded to asking ourselves if we should run another MVP cycle when the test results are inconclusive. I added this question because I hear it a lot, but the answer is almost always no.
In rare cases, an MVP is a smashing success out of the gate. In some cases, the MVP is a disaster. But in most cases, we fall somewhere between success and failure.
That said, unless some freak external element has thrown our MVP run way off course, the choice we end up with is usually binary: Throw everything we’ve got at making a run with this product or stop now and never talk about it again.
So even with all that testing and preparation and data, we still may need to go with our gut.
Two years ago, I killed one of my startup ideas even though the MVP was mostly successful, I had funding lined up, and there was a string of interested parties wanting to get involved. But I couldn’t shake the fact that the metrics didn’t hit my numbers and the product, honestly, just didn’t hit the right notes.
I pulled the plug before anyone got hurt.
But I've been on the other side too, with an MVP I would have deemed a mild failure. The CEO, who was the CEO because he had more vision and guts than me, saw the same results but gave it the green light anyway, and it became a huge success.
It’s easy to say that an MVP either works or it doesn’t, but in real life there’s way more nuance than that. In the end, we entrepreneurs have to make the call. And that’s fine, because that early in the game, we’re also the ones who have to live with the results.