Amit Out of the Office

MVP vs Core Value

Given the startup community’s (at least past, if not present) obsession with validation before committing resources, I wanted to look at a startup case I’m currently involved in where this issue came up, to share our thinking and the difficulty of the decision-making.


In an episode of EconTalk I recently listened to, host Russ Roberts spoke with former professional poker player Annie Duke about her book “Quit”, which is broadly about the need to reframe quitting as a positive strategy in certain circumstances. I’ve been reading the book, and as an aside, I’m a fan of Annie Duke and her intellectual perspective, though I have some areas of disagreement with her.

In the book, she brings up a mental model called the monkeys on pedestals paradigm, used by Astro Teller at Google X, the part of Google that spins off moonshot projects. In essence, the idea is this: if you want to train monkeys to juggle flaming torches while standing on a pedestal, do you build the pedestal first, or train the monkeys? Since you can be fairly confident that you can build a pedestal, but you have real uncertainty about whether you can train the monkeys, you should start by training the monkeys. In effect, you should validate the riskiest part of the project first, before committing further resources (whether the riskiest part is also the most valuable part is a secondary question of interest I’ll get to below).

Duke applies this paradigm to the planning and construction of high-speed rail in California. She discusses the spending disaster the project has become, in which the sunk cost fallacy and the need to save face have caused the State to sink money into the easiest and least risky, but least valuable, parts of the project. This runs precisely counter to the monkeys on pedestals paradigm, which could have been used to manage the project more successfully.

While I like this example, I think it is more of a special case that does not generalize, a limitation her analysis doesn’t address. Perhaps most notably, Google and the State of California are both very large institutions (a serious understatement: Alphabet’s current market cap of $1.66T makes it more valuable than all but the top 15 countries’ entire stock markets, and about a third of the value of Japan’s entire stock market, the third most valuable in the world. Meanwhile, the State of California is the 5th largest economy in the world by GDP, slightly larger than India’s and bigger than those of the UK, France, Italy, Brazil, Canada, and Russia). For large institutions like these, both time and capital are simply assumed resources. That is pretty much never the case outside the large-institution context.

Now that we’ve established the mental model, I want to apply it to a startup case I’ve been working on.

To set the context, I’ve been working over the last year with a company that provides a form of ‘sales as a service.’ They help companies, mostly startups, dramatically improve their sales and marketing motions by analyzing meetings and email campaigns, pulling in lots of data across those companies and their pipelines, and generating actionable insights those companies can use to improve each part of their conversion funnel. They do this mostly with human beings performing a lot of manual labor using a constellation of third-party services. What they do best is understand sales: what makes a good meeting, how to price a product, how to close, how to listen to a prospect, how to do cold outreach, and so on. Applying that special sauce is a laborious process that takes a lot of human power. All that human power has a cost, and as a result, the service is expensive and only viable for a small group of companies.

In walks generative AI. The promise of using AI in this business is to automate much of the process currently performed by humans, reduce the cost dramatically, charge much less, and open the market to many customers the business can’t currently support. Sounds reasonable, right? So we put a plan together to start building out this new product, powered by gen AI. So far so good.

As we started to execute the plan, though, we hit an interesting strategic dilemma. Before describing it, I should mention that this is not a venture business; it’s a bootstrapped company living off its cash flows, without piles of cash just sitting around (so make a mental note to drop the Google and State of California assumption of unlimited resources). Like most startups without piles of cash, it was critical to validate the new product with real customers, to ensure it can drive value for them and revenue for the business, before sinking resources into it. The way to do that, as many startups do, is to build an MVP: enough of the product that it can be sold to users, to see whether there is some traction and whether customers understand and engage with the value they get from it.

The thing is, in this case, that end value is basically the same as what the business already provides: sales insights that help drive conversions.

Since the goal of the gen AI is to automate the product and make it more self-serve, we can start by automating just the first few steps of the onboarding process. We recreate a more SaaS-style setup experience by allowing customers to sign up on the website, invite the product to their sales meetings and emails, and define some of their sales goals. Then we put up a landing page with the new value prop and pricing and see if we can generate conversions.
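For illustration only, here is one way those self-serve onboarding steps might be modeled so that each one can be automated and tracked independently. Every name here is a hypothetical sketch, not the actual product:

```python
# A hypothetical sketch of the self-serve onboarding steps described above.
from enum import Enum, auto


class OnboardingStep(Enum):
    SIGN_UP = auto()            # create an account on the website
    CONNECT_MEETINGS = auto()   # invite the product to sales meetings
    CONNECT_EMAIL = auto()      # give access to email campaigns
    DEFINE_GOALS = auto()       # capture the customer's sales goals


def next_step(completed: set[OnboardingStep]) -> OnboardingStep | None:
    """Return the first step not yet completed, in definition order."""
    for step in OnboardingStep:
        if step not in completed:
            return step
    return None  # onboarding finished; analysis can begin
```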

But what does this actually test? For this business, the quickest way to generate traction is to automate the parts of the product flow that don’t rely on AI: the SaaS onboarding flow. Then just have humans do the work on the backend (i.e. “do things that don’t scale”). They already know they can deliver value on the human analysis side, and they’ve automated at least some of the process and reduced some of the cost. But what they haven’t done is test the core value of the product, which in this case is also the biggest technical risk. If gen AI cannot run an analysis and pull out the kind of insights a human can, the product will never work. It will always have to be done by humans to some degree, and the price will have to come back up (even if some automation has taken place).
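To make that concrete, here is a minimal sketch, with hypothetical names throughout and not the company’s actual code, of how the product could keep a single analysis interface while a human backend quietly does the work, leaving room to swap in a gen AI backend later:

```python
# A sketch of the "do things that don't scale" setup: the product only
# depends on one interface, so the human backend can be replaced by a
# gen AI backend without touching the automated onboarding flow.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Meeting:
    transcript: str
    sales_goals: list[str] = field(default_factory=list)


@dataclass
class Insight:
    summary: str
    recommended_actions: list[str]


class InsightEngine(Protocol):
    """The single interface the rest of the product sees."""

    def analyze(self, meeting: Meeting) -> Insight: ...


class HumanAnalystBackend:
    """MVP backend: a human analyst writes the insight by hand."""

    def analyze(self, meeting: Meeting) -> Insight:
        # Stand-in for filing a task in the analysts' work queue and
        # collecting their write-up.
        return Insight(
            summary="Analyst review pending",
            recommended_actions=["Queued for human analysis"],
        )


class GenAIBackend:
    """Future backend: the part carrying the real technical risk."""

    def analyze(self, meeting: Meeting) -> Insight:
        # Would call an LLM pipeline here, once the core value is proven.
        raise NotImplementedError


def run_analysis(engine: InsightEngine, meeting: Meeting) -> Insight:
    return engine.analyze(meeting)
```

The point of the seam is that validating the MVP in the market and validating the core AI value become separable bets.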

So what should the business do first? Should it follow the monkeys on pedestals model and validate the riskiest part of the product first? Or should it get into business by building an MVP that can actually be validated in the market and generate cash, and use that cash to finance further development? The problem with the first strategy is that there aren’t really resources to spend building a proof of concept while delaying getting into business, particularly when that POC depends on gen AI, which is something of a black box: it’s unclear whether it can be done at all, or how long it will take. The problem with the second strategy is that it involves sinking resources into a plan that may be a dead end, one that never materializes because it runs into the same kind of technical obstacles as the California high-speed rail project.

The monkeys on pedestals model relies on the assumption that the resources and time exist to focus on what makes sense in the long term. That conveniently allows it to ignore resource and time constraints in the short term. In other words, in situations where time or resources are constrained in the short term (which is a lot of situations!), the right strategy may be to make some fast progress to generate momentum, and use that momentum to de-risk the most critical part of the project.

This doesn’t mean the monkeys on pedestals model isn’t helpful. In some situations it is, and recognizing when you are in those situations is key to knowing when to use it. Without that additional analysis, we won’t pick the right model, or use it intelligently. I think this points to something deeper about using mental models in the real world. No single model works in every scenario. More broadly, no dogmatic or ideological stance can withstand all edge cases. It doesn’t see reality and adapt to it; it tries to impose a theoretical, simplified view of the world onto it. And once that simplified view is in place, people start to believe it is reality.

The map is not, and has never been, the territory. For such a seemingly simple idea, it’s hard to overstate how much harm confusing the two has caused humanity.

Thanks for reading.