Why the original 42 Problems MVP didn’t work out

November 30, 2023

This was originally posted on LinkedIn

The original MVP focused on tracking customer feature requests and blockers to sales. Although I had been working on it for some time, it only became clear that this was the wrong approach when Alex joined the team. Consequently, we shut it down.

There were three core, interconnected issues: it lacked ambition, users took too long to realise its value, and it was too rigid.

Lack of ambition / Wrong beachhead

Tracking feature requests was never the end goal (see Thoughts on Structured Conversational Data). It was a beachhead, intended to establish usage and then expand to other use cases. I decided to build an MVP in this area first, narrowing my ambition to a problem set I thought was solvable.

What I failed to appreciate was that customers buy into ambition, and frankly, tracking feature requests just wasn’t that exciting. Despite my desire to tackle other use cases, it was difficult for customers to visualise the potential and become excited and invested.

Like everyone else, I've been amazed by the speed of development at OpenAI and across the industry as a whole. This has broadened what we can build. Techniques such as using LLMs to write Python code for crunching large data sets represent a massive breakthrough, the implications of which are still not fully grasped. This only became a reality in July.
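
To make that concrete, the pattern is: rather than pasting a large data set into the prompt, you ask the model for code and run that code over the full data set locally. Here is a minimal sketch, assuming the openai Python SDK and a hypothetical feedback.csv; the model choice, schema, and prompts are illustrative, not what we actually built.

```python
# A minimal sketch of the "LLM writes the analysis code" pattern.
# Assumes the openai Python SDK (>= 1.0) and a hypothetical feedback.csv;
# the schema and prompts below are illustrative, not our implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema = "columns: customer_id, segment, feature_request, created_at"
question = "Which feature requests are most common among enterprise customers?"

# Instead of asking the model to read the data directly (it won't fit in
# context), ask it to write a pandas script that runs over the whole file.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Write a self-contained pandas script that answers the "
                f"user's question against feedback.csv ({schema}). "
                "Reply with Python code only."
            ),
        },
        {"role": "user", "content": question},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # review before running; executing LLM output blindly is unsafe
```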

It’s hard to appreciate the progress you make on a daily basis. I learned to code over the last two years, and looking back at those months and years, I realise that we can build incredible things and that we should pursue our original ambitions.

The Feedback Loop Challenge

Ideally, you want to wow your users within seconds. With tracking feature requests, however, it can take months before enough data has been collected to demonstrate the product's value.

This is what made this problem space so elusive: there are genuine inefficiencies in how companies make their most costly decisions, often ignoring the silent majority and building features for the loudest customers. But most companies aware of these dynamics had already implemented hacks or repurposed features from other SaaS tools. For us, as a startup, this created a massive uphill battle from day one.

Too rigid

One of the most disheartening yet important lessons was seeing the original MVP finally get its chance with a customer, prove its value, and then become just another report the following week.

I learned that sales organisations, relative to other functions in a business, are highly incentivised, with simple goals and incredibly fast feedback loops. Their priority is generally to unblock revenue or grow the pipeline, and the questions they need answered to do that are always changing. It’s not a static environment with the same answer each time.

This meant that once people roughly understood the trade-offs between building different features, or the nature of different blockers, it was old news and new questions had arisen.

MVP is a toxic word in some circles, and despite ours being exactly that, feature creep set in as we tried to win “that customer”. This increased the product's surface area: by that point we had a web app with data visualisations. Combined with the endless stream of new questions from Sales, evolving the product to meet these demands became futile.

We hadn't built a product that was loved daily; it was used maybe once a month, and that wasn't good enough. We needed to start fresh.


[Image: art via DALL·E, starting from the prompt "abstract image of an mvp failing at a startup" and then asking for it to be made progressively more extreme]

Hugh Hopkins
CEO