When MVPs go wrong

Most startups operate on the practice of delivering a Minimum Viable Product, or MVP, to customers. The MVP has the bare minimum of features needed to achieve a goal or attack a target market. You get code, features, functionality and fixes out to customers fast to demonstrate the value of your product or service. The MVP concept is awesome and I’ve become a real convert to it. You get the product deployed and into the hands of customers quickly, and you maximize the return on your precious starting capital.

Done right, the MVP model is dynamic, iterative, fearless, fast to deploy and quick to gather feedback. The best MVPs come out of a tightly run product management process: each feature emerges from a user scenario/script, good UX/UI design and pragmatic architecture and design decisions.

Sadly, done wrong, it can be a disaster. This post highlights some of the key things that can go wrong with an MVP and offers some suggestions on how to avoid those land mines.

So how does an MVP go wrong? Well, in my humble and recent experience, there are three principal failure modes:

  1. Shipping

  2. “Ostrich Syndrome”

  3. Quality

Shipping

The first failure mode, shipping, is pretty simple. It happens when you don’t get off your arse and ship the damn product. The idea, and the true value, behind a minimum viable product is that you ship fast, gather customer feedback and iterate again. Holding onto the product for “just one more feature” not only keeps you out of the market, it also digs any architectural, technical or UX/UI holes you’ve made for yourself deeper, or lets your whole product fall off a cliff.

Failing to actually get code out and product in front of customers can be recognized when you hear comments like:

“We just need to close these feature tickets out and…”

“I know it’s the third time the ship date has slipped but…”

“At our burn we have three months of money left and we’re going to ship two weeks before that… It’ll be fine.”

The only solution to shipping failure is discipline. Smooth out rough edges by all means, but don’t be tempted by feature creep. Set deadlines and meet them. Not only does this get your product out into the world, it teaches your Engineering team the discipline they’re going to need to ship regular, solid product.

“Ostrich Syndrome”

One of the key points of an MVP is to seek feedback from customers. Get a product out there that customers can use and see if it triggers interest, or prompts the market to poke back and validate all those hard-spent Engineering and Product hours. Failing to seek that feedback constitutes a failure of the MVP, as does failing to seek the right sort of feedback. Seeking feedback but failing to heed it also constitutes a failure of the MVP, and makes you an idiot as well.

Ostrich Syndrome can be detected when you hear comments like:

“I know how our real customers will want to use it.”

“She’s definitely not our target demographic.”

“I think we can assume.”

The solution to Ostrich Syndrome is simple: build feedback systems into your product, actually gather the relevant data and then use it to make decisions. Building feedback systems is pretty easy:

  • Embed feedback mechanisms in the product. The easier, simpler and lower the barrier to using the mechanism, the more likely someone is to actually give you feedback (see the sketch after this list). For example:
      • Like/Dislike buttons
      • Links to feedback forms
      • Tweet/Re-tweet shortcuts
      • Inbuilt diagnostic/anonymized data output, if feasible
  • Use electronic surveys and customer calls/interviews to gather more detailed data.
  • Monitor social networking, media and blog channels for people’s feedback. This is the hardest sort of feedback to deal with, as a lot of it is heavily opinion-orientated. But detect a trend of opinions and you may have tapped a vein of useful feedback.
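
To make the first of those concrete, here’s a rough sketch of what the server side of a Like/Dislike button might look like. It’s illustrative only: Flask, the /feedback route and the field names are assumptions made for this example, not a prescription. The point is that capturing a vote, an optional comment and where in the product it came from should take a single low-friction call.

    # A sketch only: a tiny endpoint a Like/Dislike button could post to.
    # Flask, the /feedback route and the field names are assumptions for the example.
    from datetime import datetime, timezone

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    feedback_log = []  # stand-in for a real store: database, ticketing system, etc.

    @app.route("/feedback", methods=["POST"])
    def capture_feedback():
        payload = request.get_json(force=True) or {}
        entry = {
            "vote": payload.get("vote"),            # e.g. "like" or "dislike"
            "comment": payload.get("comment", ""),  # optional free text
            "feature": payload.get("feature"),      # which part of the product
            "received_at": datetime.now(timezone.utc).isoformat(),
        }
        feedback_log.append(entry)
        return jsonify({"status": "recorded"}), 201

    if __name__ == "__main__":
        app.run(port=8080)

A button wired to something like this is a far lower barrier for the user than being asked to fill out a survey.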

Then actually devote the time to reviewing and interpreting the data. Divide the feedback into types or categories and track it. To do this you can use a ticketing system or a tool like Pivotal Tracker. A spreadsheet or document can be suitable too, or, if it suits you better, a visual representation such as file cards pinned to a wall or stuck to a whiteboard.
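
If the feedback ends up somewhere structured, even a few lines of scripting will surface the trends. The sketch below is purely illustrative: the entries and category names are invented, and a spreadsheet pivot table or your tracker’s reports will do the same job.

    # A sketch only: tally invented feedback entries by category to spot trends.
    from collections import Counter

    feedback = [
        {"category": "usability", "vote": "dislike"},
        {"category": "upload", "vote": "dislike"},
        {"category": "usability", "vote": "like"},
        {"category": "usability", "vote": "dislike"},
    ]

    totals = Counter(item["category"] for item in feedback)
    negatives = Counter(
        item["category"] for item in feedback if item["vote"] == "dislike"
    )

    for category, count in totals.most_common():
        print(f"{category}: {count} items, {negatives[category]} negative")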

Finally, be honest with yourselves about the product and the feedback. And if you do ultimately choose to discard some data, make sure you document why, so you avoid future rabbit holes.

P.S. Of note is the related problem: striking the balance between listening to customers and having one or two vocal customers dictate your product strategy. Obviously, having some highly engaged customers is awesome, but you need to temper their input against a wider pool if you don’t want your product pigeon-holed, or turned into a bespoke solution that suits those customers and not the wider market. Favor feedback mechanisms that generate pattern, trend or themed data as well as point information; patterns, trends and themes help validate point opinions.

Quality

The last failure mode comes from a common misconception about the MVP:

    MVP != Quality

Ensuring quality before you release might seem to contradict the first failure mode, a lack of shipping. It really doesn’t. There is a clear difference between shipping something usable and shipping something crap. If a user can’t consume your MVP because it’s buggy, unusable or fragile, then you’re either going to get zero feedback, or feedback that highlights its flaws rather than its capabilities. And naturally, people who can’t get your product to work at all, or who find it too hard to use, are unlikely to be persuaded to try it again even if you do fix it.

You’ll see the harbingers of a poor quality MVP if you overhear:

“We’ll fix the UI bugs next release.”

“Don’t worry! It only crashes under load.”

“We don’t have time to test the upload capability.”

So how do you fix enough bugs to ensure people can actually test the product, without spending more time fixing bugs than releasing it? By testing the right things, and especially by getting acceptance and scenario-based testing right. You know those product plans you have for your MVP? The user scenarios you used to create each feature? They are your test scripts. You used your product management process to generate user scenarios and develop features from them; use them again to generate and execute test plans for each portion of the scenarios you’ve built.

This isn’t rocket science, and it’s more than just writing unit tests. Take the user scenarios you’ve developed and execute them: if the feature calls for uploading a file, then upload files. Treat the product the way you expect the user to treat it, and test it that way. If there are bugs, fix them. There’s no excuse for writing a feature and then not using it internally exactly as you think the end user is going to use it.
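
As an example of the shape such a test can take, here’s a sketch for a “user uploads a file” scenario. pytest, the requests library and the /upload and /files endpoints are assumptions made for the illustration, not part of any particular product; what matters is the pattern: drive the product the way the scenario says the user will, then assert on what they should see.

    # A sketch of a scenario-based acceptance test: "the user uploads a file".
    # pytest, requests and the /upload and /files endpoints are assumptions
    # made for the illustration, not part of any particular product.
    import io

    import requests

    BASE_URL = "http://localhost:8080"  # wherever the MVP build is running

    def test_user_can_upload_a_file():
        # Drive the feature the way the scenario says a user would: pick a
        # file and upload it.
        fake_file = io.BytesIO(b"example report contents")
        response = requests.post(
            f"{BASE_URL}/upload",
            files={"file": ("report.txt", fake_file, "text/plain")},
        )
        assert response.status_code == 200

        # The scenario says the user should see the file afterwards, so
        # check that too.
        listing = requests.get(f"{BASE_URL}/files")
        assert "report.txt" in listing.text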

Summary

So is this advice worth anything? Well, perhaps. I’m not a developer by trade, but I’ve played product manager, release manager and hack-n-slash developer over the last few years. This advice is based on my experiences over that period and what I’ve learnt, mostly from stuffing up. Hopefully you’ll avoid some of the mistakes I’ve made.