“What does it mean to be contrarian? It does not mean simply doing the opposite of what the majority does — that’s just consensus thinking by a different guise … The most contrarian thing to do is to think independently. It is not without its risks, because there is no cover from the crowd, and because it frequently leads to conclusions with which no one else agrees.”
-Bruce Gibney, Founders Fund Manifesto.
A few weeks ago, Rob put out his Five Contrarian Theses on AI, and it will be fun to debate them here. Why? Figuring out where the future is going is hard work and requires playing with and challenging ideas. One thing I enjoy in the people I work with is that they have an accurate understanding of the world, but they get there in a very different way than I do. By working with highly intelligent people who think in different ways, a clearer representation of the truth of the world emerges. This has been demonstrated with ensemble models, which combine multiple high-quality models to achieve better performance than any single model could.

A mental model I have for this is as follows. Imagine all of the people you are working with or debating with are sitting around a table. Each person has their own viewpoint on the issue and lenses through which they view the world. The truth, or the best course of action, is typically somewhere in the middle of the table. It might be closer to one person or set of people, but it is rarely completely aligned with just a single individual. Debate and discussion allow the group to explore the issue and find the truth or best action. In that vein, let's step through the five theses and see where we end up.
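The ensemble intuition can be made concrete with a minimal majority-vote sketch. The "models" and answers below are purely hypothetical stand-ins, not real classifiers:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the answer most models agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical models, each wrong on a different question.
model_a = ["yes", "yes", "yes", "yes"]  # wrong on Q3
model_b = ["yes", "no",  "no",  "yes"]  # wrong on Q2
model_c = ["no",  "yes", "no",  "yes"]  # wrong on Q1
truth   = ["yes", "yes", "no",  "yes"]

ensemble = [majority_vote(q) for q in zip(model_a, model_b, model_c)]
print(ensemble == truth)  # True: the vote matches the truth even though no single model does
```

No individual model is right on every question, but because their errors land in different places, the group recovers the full truth; that is exactly the "truth in the middle of the table" dynamic.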
1. Horizontal LLMs will lose
Horizontal LLMs will lose. This thesis comes in two flavors. The first is that LLMs generally will lose when they get displaced by a new and better technology. Many of the people I know who work at the cutting edge of AI believe LLMs will not get us to AGI, and that the next leaps will come from something new, not just more data. The second version of this thesis is that LLMs will verticalize to the extent that it probably makes more sense to use multiple vertical LLMs than one horizontal one for most applications. The latest wave of AI has been all about training data and compute, and if these two trends (new architectures, verticalization) make those less powerful, it has interesting implications for who wins in certain markets.
Disagree. I'll disregard the first part of Rob's thesis, which is that LLMs will eventually be replaced by better technology on the road to AGI. The fact that one technology is replaced by another is consistent throughout history outside of a few rare instances; the question is always on what timeframe it will be replaced. I do agree that LLMs alone will not create AGI. Several more components are needed to get there.
Let's focus on the second part, that verticalized LLMs will be the primary way people interact with systems. While I agree that in practice people will use fine-tuned or "verticalized" models, those models will be built upon horizontal LLMs as foundational layers. What people need to realize is that transformers are a new type of computer. People are building applications on top of these new computers to make the experience more useful. But those applications need an operating system on which to run. Horizontal LLMs are that operating system.
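One way to sketch this "applications on an operating system" pattern: treat the horizontal model as a frozen, general-purpose featurizer and adapt only a small vertical head on top of it. Everything below is a hypothetical toy (a hand-rolled featurizer and made-up invoice data), not a real LLM, but the shape of the work is the same: the shared base stays fixed while each vertical trains its own thin layer.

```python
def frozen_base(text):
    """Stand-in for a horizontal LLM: a fixed, general-purpose featurizer."""
    return [text.count(" "), sum(c.isdigit() for c in text), text.count("invoice")]

def train_vertical_head(examples, lr=0.1, epochs=200):
    """Fit a tiny perceptron-style head on top of the frozen base.

    Only these few weights change per vertical; the base is shared by all of them.
    """
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:  # label: 1 = in-domain, 0 = not
            feats = frozen_base(text)
            pred = 1 if sum(w * f for w, f in zip(weights, feats)) + bias > 0 else 0
            err = label - pred
            weights = [w + lr * err * f for w, f in zip(weights, feats)]
            bias += lr * err
    return weights, bias

# "Verticalizing" for a hypothetical invoice-handling domain: only the head is trained.
data = [("invoice 1042 total 99", 1), ("see you at lunch", 0),
        ("invoice 77 amount 12", 1), ("great meeting today", 0)]
w, b = train_vertical_head(data)

def classify(text):
    return 1 if sum(wi * f for wi, f in zip(w, frozen_base(text))) + b > 0 else 0
```

Swapping in a different vertical means swapping in different training examples and retraining the small head, while the expensive horizontal layer underneath never changes, which is why the foundation behaves like a utility.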
So, when we think about horizontal LLMs winning or losing, I actually think they become a utility. In my mind that's winning since everyone will be using them. However, if you think of losing as not being the final end product that everyone is using, then sure I guess horizontal LLMs will lose. But I don't think anyone would say that electricity lost or Microsoft Windows lost.
2. Specific Impact
AI won’t impact full markets the way previous tech waves have, but instead will only impact specific companies. This thesis is a bit nuanced, but I will try my best to explain what I mean. Most of the time, VCs have ideas like “cloud is going to cause XYZ changes in enterprise software” or “mobile is going to mean ABC for fintech.” They address tech changes in terms of how they impact markets, not individual companies. This thesis says that AI won’t have a consistent impact across the companies in a specific market; the way to think about AI isn’t “here’s what it will mean for this industry.” The reason this could happen is that AI fundamentally applies intelligence and learning to steps in the corporate value chain, and some companies have value chains that are much, much more amenable to taking advantage of that than others. In fact, most companies try to build their workflows so you need as little intelligence and learning as possible; best practice is to systematize everything you can. So one possibility here is that the companies that benefit most from AI may have more in common through similar steps in their value chain than through the product they make or the customer they serve.
Partially disagree. AI is a way to augment the human mind and communication, which means any industry with a heavy emphasis on those two elements is going to be impacted. I'll completely agree with Rob that some companies are built in such a way as to make them more amenable to adopting AI. My disagreement lies in the fact that any company not currently set up to take advantage of AI, in a market that requires it, will be left behind and eventually die off. Since those companies will disappear and the winners will have outsized impact, the overall effect on the market will appear somewhat uniform across the companies within it. Essentially, the specific companies that Rob refers to will become the market. Each market will become bimodal, with adaptable companies benefiting and poorly adapted companies falling behind. Very Darwinian.
3. Competitive Advantage
AI will kill most forms of competitive advantage. Some strategic thinkers have been arguing that as tech moves faster and faster, long term competitive advantages go away, and all you have is a series of short term fleeting competitive advantages. Google’s “we have no moat” memo is a good example of where this thinking currently is on AI. If intelligence and execution both eventually become commoditized from AI, where will competitive advantage accrue? What will it mean for early stage investing?
Partially agree. Given the vagueness of this thesis, I'll modify it to "current forms of competitive advantage." Competitive advantages ebb and flow with time; that's the Red Queen Effect. The types of competitive advantage most readily affected by AI are those based on continuous decision making, design, and mental labor. AI can shrink relative advantages to the point that they only matter in high-frequency situations. If you sell only 10 times a year, it probably won't help; if you sell one billion times a year, you might still get an advantage. However, the forms of competitive advantage will shift. For instance, not everyone will be able to use or afford the same types of AI, and the differences in decision making between these AIs will cause new competitive advantages to emerge.
However, AI doesn't kill ownership and distribution advantages; those are forms of competitive advantage it doesn't touch at all. AI isn't going to change the media rights Warner Brothers has over its content, nor is it going to affect the inventory and eyeballs controlled by YouTube. My hypothesis is that technology cements the power of big players that can harness it.
4. Economic Bifurcation
AI will bifurcate the economy into real and AI worlds. I’ve written about this before, but I’m not sure that many people agree, so I’m including it as a non-popular thesis. When I speak on this topic, I always point out that at some level you run up against the laws of physics, and AI can’t change those. Concrete dries at the speed concrete dries. AI isn’t going to help us build new cities in days instead of years. Some things still take time. Investors are pouring money into industries where, at best, the benefit of AI might be 20-30%, not 20-30x.
Agree. If we think about what AI does, it takes in information about the world, automates decision-making processes, and takes actions. Nowhere in there is it able to change the physics of the world. While AI can't speed up physical processes directly, it can speed up the process control of those systems, which makes some things newly possible, like controlling fusion reactions, and speeds up others, like manufacturing defect detection.
Here's a good heuristic for judging whether AI will actually improve something. Suppose we reach a point where an AI is near human-level performance (we aren't there yet). That means the AI should be able to perform the same types of tasks that you can as an individual. If there's a physical process that you yourself could not greatly affect even with a vast amount of resources, it's unlikely an AI will be able to either.
5. Agents
Customer acquisition channels will collapse into agents. Like many of you, I’ve been following AutoGPT closely. The first-order effect, if agents become the norm, is simply that we use them for more tasks and maybe we use fewer software application interfaces. But I think not enough people are considering the second-order effect, which could be that agents become a major customer acquisition channel. Why go to G2, answer a cold sales email, or click on an ad when I can just ask my agent what I should do? It may not happen, because the companies building agents have a lot of issues to solve and won’t be thinking about that yet, and the companies selling you stuff have every incentive to exploit new channels, including agents. It will be interesting to see how it plays out, but as an investor, you have to think about this long term. If you like a company because of its PLG or community-driven GTM, it’s possible those are irrelevant in a few years.
Partially disagree. I want to agree, but I have to push back on the completeness of the assertion. I do believe that agents will become a highly performant acquisition channel. However, this thesis assumes that people consume all their information through their agent and nothing else. Most people seek information from multiple sources, and some simply enjoy the act of researching. I personally have always found it beneficial to listen to multiple sources of information.
Additionally, people can be reached through more than their agent, even if that's their main communication channel. The content people consume can have product placements; as they walk around town there are billboards and radio ads; and people hear things by word of mouth. In my experience, advertisers will always find effective channels if the main ones start drying up.
I can imagine a scenario where the strength of customer acquisition through agents forces advertisers to run sophisticated awareness campaigns. I can also foresee a path where AI improves the conversion attribution of those campaigns through location data and eye tracking: think AI that understands where a person is and what they are looking at as they walk around a city wearing an Apple Vision Pro.
Putting it all together
Let's look at all these theses in aggregate and the discussion we've had. What holistic observations can we glean about where the world is headed?
Horizontal LLMs will become a utility, serving as the base layer on which more verticalized, fine-tuned models are built for end consumers. This ultimately enables AI agents that can augment individuals and companies, and customer acquisition will shift as new agent-based information channels are created. The combination of these fine-tuned models and agents will erode traditional forms of competitive advantage that are not based on ownership or distribution control, forcing new advantages to arise, perhaps based on how good the AI you own is. Within each market, the companies currently configured to take advantage of AI will be affected disproportionately. The impact on each market will depend on where it lies on the continuum from physical to virtual; physical markets are much less likely to be transformed by AI because of physical laws that cannot be exceeded. So if you want to be successful with AI in the future, what should you do?
Focus on where AI can have an outsized impact. AI can't change the fundamental laws of the universe, so plan accordingly.
Specificity will outperform generalization. Focus on specific solutions to specific problems. These will have to be built on top of generalized/horizontal layers.
Understand what types of company configurations are best suited to the use of AI, and realize that new company designs will emerge as well. This also means watching for which companies are flailing, falling behind, or losing growth by not adopting AI.
Determine which competitive advantages to go after, understanding that many current ones will disappear while simultaneously realizing that new ones will emerge.
Take advantage of agents, both in augmenting labor and in how end consumers use them.
I always welcome feedback to help refine my thinking, so please reach out if you agree or disagree. And if you've enjoyed this, or would like to see Rob and me debate a single AI topic in depth, let me know.