Make no mistake about why these babies are here - they are here to replace us.
- Jerry Seinfeld
On average, current AI systems tend to perform like mediocre individuals. In some areas they are much worse and in some areas they are much better, depending on the individual. For instance, DALL-E is a much better artist than I am, but ChatGPT can't do math anywhere near my level. As individuals we tend not to be great in all areas, so a technology that, averaged across skills, appears better than us makes us gravitate toward using it. However, when looking at specific areas, human experts mostly remain better than AI algorithms (there are some areas where AI can beat experts, but it takes A LOT more effort to push past human expertise). You can imagine drawing distinctions between what experts can do, what AI can do, and what mere mortals can do. Since we are all mere mortals in most areas, it is natural to contemplate where AI replacement can occur, even though the AI lacks the deepest expertise.
This paradox between a perceived replacement of skills and a gap in total expertise leaves us with a quandary - how do we continue the lines of professional succession? That is, how do we keep the top expertise that pushes the boundaries of a field if we use AI to replace or reduce the lower ranks who would otherwise grow into that field's experts? As a colleague of mine likes to say, "how do we raise our young?" Whether explicitly stated or not, most fields are apprenticeships. Experts become experts through exposure to weirdness and oddities, through understanding what fails, what lasts, what works, and what doesn't, through experiencing the context outside the field that affects how the field operates, and through a myriad of other tacit knowledge.
Matt Levine recently wrote about this situation when discussing the impact on professional services firms:
But what does that mean for the staffing and training of professional services firms? Like if your model is:
1. ChatGPT can do the lower-level research work that would otherwise be done by junior employees, but
2. A senior partner with lots of experience, detailed domain knowledge and human judgment needs to supervise ChatGPT and make the ultimate decisions,
then where do the senior partners come from? Traditionally the way law firms, accounting firms, consulting firms, investment banks, etc., work is an apprenticeship model: You come in, you do research and grunt work and modeling, you learn the stuff, over time you build experience and knowledge and judgment, and eventually you become the senior person making decisions (and supervising the junior people). But that is an expensive model, and if you can hire ChatGPT to do all the grunt work and dispense with the junior people, you might just do it. But if you don’t have any junior people who are doing the grunt work and learning the business, how will you find new senior people to replace you?
One possible answer is “lol obviously ChatGPT will replace you”: Large language models are rapidly getting more skilled, and in a few years maybe they’ll be better at law and consulting and accounting and investment banking than even the most senior partners, and whole industries will simply be automated.
What's stated above carries an implicit concern: that we lose expertise gained over time if we replace staff with AI. There's a lot of knowledge not included in large language models. Some of it hasn't been written down or exposed publicly, and some of it can't even be described in language. Is this a valid concern, and how should we think about it?
Augmentation
Replacement is what scares people. And what scares them the most is a fast rate of replacement, before there is time to adjust. Lots of industries and professions change or disappear over time, but historically the process has been slow. We don't have individuals who control traffic lights anymore. We don't really have horse-and-buggy drivers anymore (except for entertainment or romantic purposes). However, replacement is the wrong mental model for how to use AI well; it cuts off lines of succession in a field. The better model is to think of AI as augmentation.
Augmentation lets you increase your capacity. It lets you deal with more complexity. It gives you training on weirder environments. You can simulate stranger what-if scenarios. You can more quickly understand the effect of the outside world on what you are doing. Augmentation enables you to do more than you were capable of accomplishing before. A car augments humans to move at speeds that even the fastest person could never reach.
In the top engineering schools, you learn to do everything from scratch: building up designs from first principles, developing the intuition that lets you handle more complex and higher-level topics. Only after this intuition is built are you given access to tools that increase your output by orders of magnitude. Instead of producing pages of hand calculations for a single element, you can check an entire design with a single program. After learning to check a design with a program, you learn to iterate through many different scenarios on the same or modified designs. The reason you graduate to more advanced tools is that you need to be able to assess when an error has been made and whether a result is believable. The right mental model for AI is as the next graduated tool, continuing to augment your capabilities. It does not replace intuition. In fact, overreliance will start to diminish your intuition and force you to over-trust the AI.
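To make the "graduated tool" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the bending-stress formula stands in for whatever single-element hand calculation you learned first, and the loop stands in for the program that sweeps many scenarios at once; the load cases and allowable stress are invented for illustration.

```python
# Hypothetical illustration: the calculation you first learn to do by hand,
# then graduate to running across many scenarios with a program.

def beam_bending_stress(moment_nmm: float, section_modulus_mm3: float) -> float:
    """Peak bending stress (MPa) for a beam: sigma = M / S."""
    return moment_nmm / section_modulus_mm3

ALLOWABLE_MPA = 165.0  # assumed allowable stress, for illustration only

# One scenario: the single hand calculation.
print(beam_bending_stress(50e6, 400e3))  # 125.0 MPa, under the allowable

# Many scenarios: the "graduated tool" - sweep loads and sections,
# flag anything that fails, and judge whether the results are believable.
for moment in (30e6, 50e6, 80e6, 120e6):   # applied moments, N*mm
    for section in (300e3, 400e3, 600e3):  # section moduli, mm^3
        stress = beam_bending_stress(moment, section)
        status = "OK" if stress <= ALLOWABLE_MPA else "FAILS"
        print(f"M={moment:.0e} S={section:.0e} -> {stress:6.1f} MPa {status}")
```

The point is not the formula but the shape of the workflow: you only trust the sweep because you could reproduce any single row by hand, which is exactly the intuition the tool is meant to augment rather than replace.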
The next question is - how do you augment well? First, you want to gain enough familiarity with a tool that it saves you time, or lets you do more with the same amount of time. After that point, you want to use it to accelerate your knowledge of weird stuff. The most important events happen at the edges, where there is uncertainty, where there isn't a playbook, where things are not routine. You want to find ways to gain exposure to these low-frequency or uncharted territories, because it enables you to act swiftly while others are still grappling with the best path forward. Experts are experts because they've been exposed to so many scenarios that patterns emerge, and that breadth of exposure allows them to handle odd situations better.
Ratcheting Up the Baseline
The other reason to shift thinking away from replacement is that the use of AI is only a temporary advantage. In the long run, the only individuals/companies/firms that will survive are those that know how to use AI as a tool - to the point that it becomes commonplace. In the near term, however, there will be a dispersion of usage: those who know how to use it well will quickly outpace those who do not. Among those that remain, a Red Queen Effect will emerge (I've written about the Red Queen Effect several times in the past). That is to say, heavy AI usage will be the baseline for anyone competing in the market.
Research has already shown that using AI helps bottom performers do much better, which begins to close the talent gap between practitioners. This insight should not be taken lightly, as it points to the competitive pressures of AI usage. If AI usage is commonplace, the question then becomes - where else will you derive your competitive advantage? It could be ownership of resources, networks, distribution channels, differentiation in AI usage, data access, internal knowledge, processes, or a myriad of other things. Ultimately, what is owned and controlled becomes more valuable.
This is why it is important to maintain lines of succession. Performing at the top of a field requires learning how to adapt to changing environments and how to use the tools accumulated and built up over time to remain competitive. Raising our young well requires long-term thinking about continuity and overall effectiveness instead of short-term thinking that cuts off long-term advantages. Even if you have a boat, you still teach children to swim. Not because you expect the boat to sink, but because if it does, they still have a chance to survive.