MVP isn't just a buzzword

N.B. This post was migrated from oli-hall.github.io to oli-hall.com on 18/04/2019

If you've spent time around startups, or browsed HackerNews, you've probably come across the term MVP. Standing for Minimum Viable Product, it conveys a host of ideas around building as little as possible and getting it in front of customers quickly. The core concept is that you need to gather feedback on your product idea ASAP, ideally before you've written a line of code. That hypothesis about people having problem X? Put up a landing page promising a solution to X. With modern tools you can throw something decent together in a few hours, complete with a quick email sign-up box. If someone has gone to the trouble of clicking through to your page and entering their email, chances are there's a real need for your solution. That way, you get immediate feedback on whether your problem is a genuine issue worth solving, or a waste of your time.

This approach can be extended to the whole product, building it out only as you need it. Got a customer? Cool, manually create their account for now. They want to pay you for your product? Sweet, now's the time to think about that payment system you were going to engineer. I've seen a lot of this approach on IndieHackers - a site dedicated to folks building out side projects and small businesses.
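
To give a sense of just how little "as little as possible" can be, here's a minimal sketch of that landing-page MVP in Python, assuming Flask - the product name, the copy, and the flat-file "database" are all placeholders:

```python
# landing.py - a minimal landing-page MVP sketch (Flask assumed; the
# product name, copy, and flat-file "database" are all placeholders).
from flask import Flask, request

app = Flask(__name__)

PAGE = """
<h1>FixForX</h1>
<p>Struggling with problem X? We're building the solution.
Leave your email and we'll let you know when it's ready.</p>
<form method="post" action="/signup">
  <input type="email" name="email" required>
  <button type="submit">Notify me</button>
</form>
"""

@app.route("/")
def landing():
    return PAGE

@app.route("/signup", methods=["POST"])
def signup():
    # A flat file is plenty at this stage - every line in it is a
    # data point for (or against) the idea.
    with open("signups.txt", "a") as f:
        f.write(request.form["email"] + "\n")
    return "<p>Thanks! We'll be in touch.</p>"

if __name__ == "__main__":
    app.run()
```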

This approach extends not only to prototyping new ideas and starting new businesses, but also to building features for existing products. Adrian Howard spoke about applying this mode of thinking to product development recently at the amazing LeadDev conference: Points Don’t Mean Prizes. It's a great way to avoid the temptation to build out an entire product before getting the feedback that will inevitably result in changes.

Pride before a fall

I was fairly aware of these ideas, and thoroughly approved of the concept of the MVP. As such, it came as a surprise when I stumbled into exactly the trap it's meant to prevent at my current job. I was brought on as the first engineering hire at a biotech startup (a topic for another post), and was immediately tasked with architecting a data platform to store the data generated by the company's many experimental protocols. After a lot of discussion with the CSO, CEO and various members of our science team, I came up with what seemed to everyone like a sensible design, backed by Google Cloud Storage (GCS). After a couple of weeks I had it working nicely as a standalone Python service running on Google App Engine.
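
At its heart, that first version didn't need to do much more than serialise experiment records into GCS. As a rough sketch of the kind of code involved (the bucket name, key scheme and record shape here are all invented, and it assumes the google-cloud-storage client library):

```python
# A sketch of the storage core: persist one experiment record as a
# JSON blob in GCS. The bucket name, key scheme and record fields
# are hypothetical; assumes the google-cloud-storage package.
import json

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-experiment-data")  # hypothetical bucket

def store_record(protocol_id: str, run_id: str, record: dict) -> str:
    """Write a single experimental record and return its GCS path."""
    path = f"protocols/{protocol_id}/runs/{run_id}.json"
    blob = bucket.blob(path)
    blob.upload_from_string(json.dumps(record), content_type="application/json")
    return path
```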

After releasing that initial version, the prototype fuelled a lot more discussion as a team, and we realised that now we had a system to capture data from our processes, it would be amazing to be able to optimise those processes as well. We could take advantage of ML and statistics to test multiple possible setups for a given process, searching through the possible ways of running it to determine which was the most efficient, or gave the highest yield. This required an extensive rewrite, adding more components and more state, and increasing the complexity significantly. We completed this after a couple of months (albeit with various revisions from further discussions along the way), and released it to much internal fanfare.
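
The optimisation idea itself is simple at its core: enumerate candidate configurations of a protocol, run (or simulate) each one, and keep the best. A toy sketch of that search, with the parameter names and the yield function entirely made up for illustration:

```python
# Toy illustration of the optimisation idea: grid-search over protocol
# parameters for the highest-yield configuration. The parameters and
# run_protocol() are invented for the example.
from itertools import product

PARAM_GRID = {
    "temperature_c": [25, 30, 37],
    "duration_h": [2, 4, 8],
    "reagent_conc": [0.1, 0.5, 1.0],
}

def run_protocol(config: dict) -> float:
    """Stand-in for running a real experiment; returns a measured yield."""
    # In reality this would dispatch an automated run and read back a result.
    return config["temperature_c"] * config["reagent_conc"] / config["duration_h"]

def best_config():
    """Exhaustively evaluate every parameter combination, keep the best."""
    keys = list(PARAM_GRID)
    candidates = (dict(zip(keys, values)) for values in product(*PARAM_GRID.values()))
    return max(((c, run_protocol(c)) for c in candidates), key=lambda pair: pair[1])

if __name__ == "__main__":
    config, best_yield = best_config()
    print(f"Best yield {best_yield:.2f} with {config}")
```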

However, we had a problem. No one was actually using it! Indeed, we hadn't used it for anything other than testing throughout its development, and there was little need for the system at that time. Whilst we'd designed a system to capture experimental data, we had no automated experiments to run on it, and our science team were focussing on developing completely different areas of our platform, which weren't ready for automation! We'd built what was essentially a proof of concept, with no likelihood of being used in anger any time soon, and it had already swallowed considerable time and effort.

But we were just trying to help...

There's always a desire to solve problems, especially in engineering - we're used to building things, so when faced with a problem, the natural response is to build first and ask questions later! In my case, there was also some imposter syndrome at play - I felt like I needed to prove myself, and build a system to solve the problem as fast as I could. I was also used to working at somewhat later-stage companies, where the problem space is well understood and much of the initial groundwork has been done - the data is there, the use case is understood, and the two just need linking in an appropriate manner.

Our issue here was that none of us really understood the problem we needed to solve. In addition, we came from two very different disciplines - (software) engineering and science - and neither side understood much about what the other did when we started. Hence, we didn't even know what the right questions were, let alone the answers. We kept learning new things about one another's work, so the requirements shifted time and again; the baseline we had to hit was constantly moving, which meant it took forever to release the system - and to realise that we didn't need it.

Moving on

It wasn't easy, realising that we'd sunk a ton of time into a system we didn't really need. After some soul-searching, we realised that the MVP lesson was a useful one to follow (and indeed, we probably should've followed it earlier!). Now we look at everything we're doing with a critical eye - if we don't need it now, we don't build it. We also realised that whilst we had high-level goals for the company, the route to those goals was far from clear, which meant we were floundering somewhat. We invested a lot of time in mapping out how we get from here to there, and how we measure our progress. This highlighted that we needed to build the groundwork for our data platform to sit upon before building the platform itself! Fortunately we'll be able to reuse our proof-of-concept code, but not for a few months yet. Right now, we're best off solving the problems the company actually has - standardising and automating the protocols that the science team are using on a daily basis - and we can revisit the data platform once those protocols are running and producing data.

On top of that, we're now better at figuring out requirements, and our engineering and science teams are much better at talking to each other. We're solving the problems that are holding us back right now, and generalising from there. We can guess at what the future holds, but let's try not to build for an imagined future that may never arrive!