
Why You Should Build Every New Product Feature Like an MVP

Joe Procopio
6 min read · Apr 11, 2019


Photo by Matthew Henry

Making a mistake launching a new product feature is costly, and I’ve done it at least a dozen times. Never again.

We’re all familiar with the Minimum Viable Product (MVP) strategy. It mandates that we “fake” components of a new product by making many of its processes manual at first release. We do this for a few reasons:

  1. We want to get our product idea out to customers as quickly as possible, so we can validate its reason for being.
  2. We want to discover where the product is going to break, so we can focus our limited time and resources on de-risking.
  3. We want to determine how our customers are going to accept and use the product, so we can build out those features first.

There are other valid reasons for adopting an MVP strategy, like getting to revenue as quickly as possible, but those are the big three.

MVP isn’t a new concept, per se, but its adoption has exploded with the lowered barriers to entry brought about by the Internet and Software as a Service (SaaS).

What’s gaining traction now is the strategy of repeating the MVP process with every new feature, even down to every new version.

How do we do that?

Soft Launching and A/B Testing

There are already a number of ways to test the viability of a finished feature.

We can soft launch or A/B test by singling out a certain small segment of our customer population, turning on the feature for them, and either following the data or contacting them directly to see how they respond.

We partition that customer segment by usage, engagement, location, demographic, basically any dimension that corresponds to the thesis we’re trying to prove out. If the new feature is there to reduce clicks, we’ll choose the most frequent users. If the new feature is an add-on for revenue, we’ll choose the most engaged users, and so on.
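In practice, that segmentation usually shows up as a feature flag check somewhere in the product. Here’s a minimal sketch of what that might look like, assuming a hypothetical user record and illustrative thresholds (the field names, the 20-session cutoff, and the 5% rollout slice are all placeholders, not anything prescribed by a particular tool):

```python
# A minimal sketch of segment-based feature gating for a soft launch.
# All names and thresholds are illustrative assumptions.
import hashlib
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    sessions_last_30d: int  # stand-in for "most frequent users"


def in_rollout_bucket(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in the first `percent` of 100 buckets."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent


def feature_enabled(user: User, feature: str) -> bool:
    # Thesis: the feature reduces clicks, so target the most frequent users,
    # then turn it on for only a small slice of that segment.
    is_frequent = user.sessions_last_30d >= 20
    return is_frequent and in_rollout_bucket(user.user_id, feature, percent=5)


print(feature_enabled(User("u-123", sessions_last_30d=42), "one-click-reorder"))
```

The deterministic hash keeps each user in the same bucket across sessions, so the test group stays stable while you follow the data or reach out to them directly.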

This testing is done for a couple of reasons. In the case of a soft launch, we want to avoid disaster, and making mistakes in front of a small audience is preferable to making them in front of the entire customer base at full launch. When we A/B test, we’re trying to choose between options…
