People fight reality. They fight it tooth and nail, with everything they’ve got. And anytime you are arguing or fighting with reality, reality will win. You can’t outsmart it. You can’t trick it. You can’t bend it to your will.
-Bill Burnett
In general, large enterprises are slow to adopt the latest technology. That rings even more true today with AI. At the same time, however, McKinsey recently released a report highlighting how widespread the use of unapproved AI tools is among workers. That's a big gap between what organizations are buying and what their workers are using. Workers are using tools they can find for free or cheaply, while the enterprise is still in the process of determining what it should be buying.
Currently, large enterprises are slowing down their procurement processes by adding AI councils to review every tool. Some of this is justified, considering all of the BS AI tools that have come out and continue to come out. Unfortunately, many of these AI councils lack both quality checklists and sufficient technical depth to assess AI solutions and tools. This is understandable in a world where AI snuck up on a lot of people and is causing tremendous uncertainty. The quick spike in volatility and uncertainty has prompted large enterprises to purposefully slow down their processes, conservatively waiting it out rather than making rash decisions. It looks something like this:
Figure 1. How large enterprises change their buying time after crossing an uncertainty threshold
Looking at the plot, once uncertainty becomes too much for an organization, it begins to lengthen the time it takes to purchase. In essence, it throws up walls in an attempt to protect itself. It waits to see how things play out and to determine the best path forward. It isn't sure if all of the talk is hype, if the technology will make the organization incredible, or if it will be destructive. Even for laggards, this conservatism protects the organization in the near term because of the wealth of resources it has.
Let's walk through three reasons why enterprises aren't adopting AI:
Fear and Uncertainty
Perceived lack of risk mitigation
Perceived lack of immediate ROI
Fear and Uncertainty
Businesses dislike uncertainty. I would almost go so far as to say that businesses are uncertainty reduction machines. When a new, highly impactful technology like generative AI arrives suddenly, it creates a lot of uncertainty about the future and consequently generates fear. Most people don't understand how AI actually works or how to use it well. When that is combined with massive disruption potential, businesses go into study mode. They want to understand deeply before they purchase.
At the same time, businesses have come from a land of broken promises with regard to the lift and ROI that new technology can provide. Part of this stems from not discerning in advance whether the technology benefits the consumer or the business. That is, is a red queen effect in play? Does the business need to apply the technology just to keep pace with competitors, where it won't see a profit improvement but consumers will see improvements in products and services? Or is this a technology that will improve the profits of the business directly? Many businesses have made poor purchases because they didn't understand the difference.
Perceived Lack of Risk Mitigation
The biggest difference between classical machine learning models and generative AI models is the stochastic nature of the output. This uncertainty about what the outputs will be, for the same input, makes it difficult for organizations to readily adopt AI solutions in production applications. As stated previously, organizations dislike uncertainty and they definitely dislike uncertainty in their outputs. Without solid, proven ways to mitigate errors in the output of generative AI systems, organizations have been slow to adopt the technology because they can't fully integrate these systems into their processes in a reliable way.
There are methods and techniques to keep these systems reliable, but they require a high degree of knowledge and testing to ensure consistent results. However, there is a bigger problem lurking. Like any model, generative AI has flaws and errors. Yet as people begin to trust a tool more, they start to question it less and no longer verify the results. They begin to take outputs at face value. This is dangerous, particularly for junior employees who haven't built up experience and intuition, because it means errors are likely to propagate through an organization unchecked. Therefore, training and repeated reminders will be required alongside the use of an AI system. It's also possible to mitigate this risk by enforcing a process that requires multi-point verification, as sketched below.
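To make that last point concrete, here is a minimal sketch of what multi-point verification could look like, assuming a hypothetical ask_model function that wraps whatever generative model is in use. Sampling the same prompt several times and only accepting an answer that most runs agree on is one simple way to blunt the stochastic nature of the outputs; it is an illustration, not a prescription.

```python
from collections import Counter
from typing import Callable, Optional

def verified_answer(ask_model: Callable[[str], str], prompt: str,
                    samples: int = 5, min_agreement: float = 0.6) -> Optional[str]:
    """Query the model several times and only accept an answer that enough
    of the runs agree on; otherwise return None so the item can be routed
    to a human reviewer instead of being trusted at face value."""
    answers = [ask_model(prompt).strip().lower() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / samples >= min_agreement else None

# Toy usage with a stand-in model; in practice ask_model would call the real API.
if __name__ == "__main__":
    import random
    toy_model = lambda _prompt: random.choice(["42", "42", "42", "41"])
    print(verified_answer(toy_model, "What is 6 * 7?"))
```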
Perceived Lack of Immediate ROI
Lack of immediate ROI has plagued a lot of machine learning projects. You can look up various statistics from the past decade showing how 90% of machine learning projects fail or never make it to production. Typically, this is because business problems are not aligned with machine learning problems. You can find a remedy for that here. However, for the current batch of AI technology, the perceived lack of immediate ROI comes down to a few things:
Organizational setup
Data Cleanliness
Unforeseen resourcing costs
The ROI for adopting AI and machine learning projects and capabilities needs to factor in all of the above points. Organizations can only perform based on how they are organized and set up. When big data came along, most organizations were not set up to fully embrace data-driven decision making. They needed to reorganize, create new processes, adopt new technology, and fill new roles. All of these require time and money. Right as businesses were orienting to big data and data mining, machine learning arrived at full force, and organizations tried to adapt to this trend as well, mostly without success. The reasons are many, but fundamentally, organizations didn't put in the resources or effort to set themselves up for success. Players in the market that were able to implement new technology well saw massive growth, while those that didn't fell by the wayside. Unfortunately, many organizations were caught chasing shiny objects for fear of missing out.

But memories are short, and now organizations are facing a similar issue with AI technologies. They will need to reorient, potentially completely reshaping their organization, to successfully adopt AI technology. However, here's a good mental model - when a decision or event is high impact and low frequency, it will be much more expensive and take much more time than expected. This is because when both the stakes and the costs of failure are high, many more things are considered, deliberated, checked, and verified.
Besides modifying the organization, data cleanliness and unforeseen resourcing costs are big items that reduce the short-term ROI of AI adoption. For AI and machine learning to work well, you need high quality data curation and processes. Ideally, your organization has golden data sets and pipelines with great data hygiene. Great data hygiene provides a better signal-to-noise ratio to AI models, allowing them to perform better. These performance gains are non-linear, so a 10% improvement in data hygiene could lead to a 2-3x performance improvement in AI models. Additionally, AI is about automation, and when you are automating a process, the risk increases because there are fewer intervention points. This means you want the cleanest possible data to pass into the system.
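To illustrate what "clean data in" can look like in practice, here is a small, hypothetical validation gate that quarantines bad records before they ever reach an automated AI step. The field names and rules are invented for the example; real pipelines would encode domain-specific checks.

```python
from dataclasses import dataclass

@dataclass
class Record:
    customer_id: str
    amount: float
    currency: str

def is_clean(rec: Record) -> bool:
    """Basic hygiene checks; illustrative rules only."""
    return (
        bool(rec.customer_id.strip())
        and rec.amount >= 0
        and rec.currency in {"USD", "EUR", "GBP"}
    )

def split_batch(records: list[Record]) -> tuple[list[Record], list[Record]]:
    """Separate records safe to automate on from those needing human review."""
    good = [r for r in records if is_clean(r)]
    quarantined = [r for r in records if not is_clean(r)]
    return good, quarantined
```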
Dealing with all of these system improvements requires purchasing additional infrastructure and tools, along with engineers and data scientists to perform the work. There's not only the cost of infrastructure improvements but also the cost of migration and system verification. There's a reason "digital transformation" at organizations takes 10 years - lots of complexity and making sure nothing breaks in the process. All of these costs tend not to be factored into the initial ROI calculation for the short-term benefits of AI. This has caused organizations to slow down their decision making to see how things play out and to figure out the best way to adopt the technology. However, when they convert their ROI calculations to the long term, they tend to see that the payoff makes a lot of sense.
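To see why the time horizon matters, here is a back-of-the-envelope ROI sketch with entirely made-up numbers: once one-time infrastructure, migration, and verification costs are counted, year one can look sharply negative even when the multi-year picture is clearly positive.

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Simple ROI: (benefit - cost) / cost."""
    return (total_benefit - total_cost) / total_cost

# Illustrative, made-up figures (in $k).
annual_benefit = 400    # yearly productivity / revenue lift
annual_run_cost = 100   # yearly licences, inference, and support
one_time_cost = 900     # infrastructure, migration, verification, training

def horizon_roi(years: int) -> float:
    """ROI over a given number of years, counting the one-time setup cost once."""
    benefit = annual_benefit * years
    cost = one_time_cost + annual_run_cost * years
    return roi(benefit, cost)

print(f"1-year ROI: {horizon_roi(1):+.0%}")  # about -60%
print(f"5-year ROI: {horizon_roi(5):+.0%}")  # about +43%
```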
Gravity
If you're a company selling AI products to large enterprises, you have a bit of a gravity problem - you can't change gravity. You aren't going to be able to change the internal processes of large enterprises, so you'll need to plan for them. If you're a large enterprise, you need to determine whether you're looking at AI correctly. Is a lack of speed in adoption going to work against you? Will competitors that adopt more quickly build a compounding advantage? Either way, adjustments are going to be made. AI is a gravity problem and it's here to stay. Different entities will react, adopt, use, and find disaster and success with AI at varying time scales. Being aware of the challenges and concerns facing an enterprise with which you want to interact will make it much easier to find a path forward.