The Interplay of AI, Power, and Regulation
Exploring the Intricate Dance of AI Regulation, Power, and the Uncertain Future
All the forces in the world are not so powerful as an idea whose time has come. -Victor Hugo
Back in December 2022, I wrote my predictions for 2023. One thing I predicted would happen this year was regulation coming for AI. I wrote:
Regulation clampdowns on AI: There is a growing feeling of concern around the power of big tech and AI within government. I believe this fear will cause a regulatory crackdown on AI and tech in general. I’m not sure what that means for the ecosystem yet but it will result in the need for a zero trust framework for AI. WeChat has already banned the use of ChatGPT on its platform.
Every day, we move closer to having AI regulation implemented this year. The US, Europe, and China are all exploring and implementing various forms of AI regulation. AI is in a tricky position because it has the potential to become both a super weapon and a major competitive advantage. The goal of any regulation around AI should be to extract the upside of the technology while safeguarding against the dangers and downsides. Personally, I believe we are about five years behind in understanding how to regulate AI, mainly because the gap between the pace of AI progress and the pace of creating high-quality regulation is staggering. AI can change and improve at a blistering speed compared to regulation, which likely makes regulation semi-useless.
Last week, Sam Altman, the CEO of OpenAI, testified before Congress about AI regulation. He deftly hit members of Congress where they are most concerned: their power. A key concern of his was AI's ability to interfere with elections. Obviously, members of Congress are going to be incredibly interested in anything that could affect their hold on power. Sam laid out a three-point plan for what regulation should entail:
Form a new government agency charged with licensing large AI models, and empower it to revoke that license for companies whose models don’t comply with government standards.
Create a set of safety standards for AI models, including evaluations of their dangerous capabilities. For instance, models would have to pass certain tests for safety, such as whether they could “self-replicate” and “exfiltrate into the wild” — that is, to go rogue and start acting on their own.
Require independent audits, by independent experts, of the models’ performance on various metrics.
Let’s dive in and understand the forces at play that will affect any regulation that is created and why OpenAI is pushing for a certain type of regulation.
Fear and Uncertainty
Let’s get the obvious out of the way first. A palpable sense of fear blankets our collective thoughts on AI. While people have lived with the impact of algorithms for decades, they are only now starting to realize how those algorithms shape their lives. That revelation has created real fear in many, a manifestation of our awareness of AI's transformative potential. Compounding this is the immense uncertainty about the future that AI's potential creates. Many people are unclear about what long-term decisions to make for themselves, their families, and their careers in an AI-heavy world. Uncertainty, paired with the power of AI, can serve as an incendiary force for fear.
Public sentiment plays a critical role in policy-making. If a policy aligns with the public's interests, its implementation becomes smoother as lawmakers face less resistance from their constituents. AI, due to its potential to offer power and time efficiency, is a matter of keen interest to both governments and corporations.
While the general population fears the impact on their lives, lawmakers are simultaneously concerned with AI’s potential to threaten their ability to hold power. We are about to enter an era of hyper-personalization, where the content displayed to you can be crafted exclusively to resonate with you. While the immediate uses might be around commerce, hyper-personalization will come to political campaigns and voting as well. If you think that can’t happen, know that the GOP has already created the first AI-generated attack ad. This scares politicians because it makes their own futures less certain, and the threat to a lawmaker’s future, more than anything, will cause them to take action quickly.
Regulation: Power Over Impact
Sam Altman’s testimony provided a glimpse into the power dynamics behind AI regulation. While Altman’s push for AI regulation is portrayed as a necessary safeguard, it could also be interpreted as a strategy to curtail competition. The regulations he proposed were framed to address the public’s and lawmakers’ fears while keeping OpenAI in a position of strength.
Regulations, while purporting to serve public safety, can act as formidable barriers to entry in the market. Newcomers, in their bid to meet regulatory standards, may have to expend valuable time and resources that could otherwise be channeled into innovation and growth. Large corporations, with their vast resources, may comfortably navigate the labyrinth of compliance without significant disruption. However, for smaller companies and the open-source community, which lack similar resources, these regulatory demands may pose a daunting challenge.
Looking at the three-point plan proposed by Altman, what do we see? First, create a government agency that can prevent those who don’t comply with the rules from using AI. Perhaps it will require companies and individuals to hold the equivalent of a driver’s license, but for AI use. Second, create a set of safety standards and force models to pass these tests. The individuals who get to craft the tests have great power over which AI architectures can be used and how long it takes to develop an AI system. It’s well known that we don’t have a good theory for why our AI models work, so these safety tests will likely be a notoriously gray zone open to interpretation, which is great for companies that are helping create the tests and have the resources for many lawyers. Third, require independent audits by independent experts. No one would argue with the importance of third-party validation. However, there is a cost to hiring a third-party auditor. These auditors are also supposed to be top experts, and to think that they won’t be incentivized to play favorites is naive, especially when it will be difficult to find an expert who isn’t already employed in some capacity by an AI company.
Consequently, being in a position to help craft regulation gives OpenAI, and similar giants, an advantage over their competitors. All of this points to OpenAI gaining time, resource, and reputation advantages over current and potential competitors.
Eroding and Cementing Power
In the grand theatre of history, technology has been the great disruptor. It has often ushered in periods of societal chaos, yet it has also been the catalyst for transformative advancements. Such periods of upheaval have been fertile ground for new actors to emerge and seize power. We’re seeing this right now with AI. Regulation is arising in an attempt to quell this situation. If you want to understand how regulation is going to be shaped, look at the incentives of the different parties engaged in its creation.
Those already in positions of power, who can leverage or acquire the new technology, often find their power further consolidated. In contrast, those who resist adopting new technology or prefer to ignore its rapid proliferation risk fading into obscurity. The mental models that may be most helpful here are the Innovator’s Dilemma and the Red Queen Hypothesis.
What’s fascinating is that the shifts in power will play out at multiple levels. While we’ve been focused on companies and society, there are geopolitical impacts as well. The prevailing tone in Washington is that no one can out-innovate the US. Whether or not that is true, the US is seeking to be at the forefront of AI regulation, with its position bolstered by the top AI companies being based here. By being seen as a leader in regulation and having other countries accept the general principles of any created regulation, the US gains the ability to label any country not abiding by similar laws as “unethical”. Using this as a premise, the US could launch economic sanctions in an attempt to slow other countries from catching up to the US on AI research. While this is just one path, there are many other interesting geopolitical moves countries will make to gain an AI advantage over each other while attempting to maintain AI safety.
Envisioning the Future
We indeed live in interesting times. The rapid advancement of AI technology threatens to change how traditional tasks are performed and disrupt existing power structures at an unprecedented pace. What is inevitable is that AI regulation is coming. The regulations will need to address society’s fears while still supporting the benefits of AI. Regulations will most benefit those with the ability to influence and shape policy.
Most likely, the initial versions of AI regulation will be wrong. There will be mishaps, errors, voids, victors, and villains from whatever regulation does arise. The regulations will be wrong because lawmakers have not grokked the impacts of AI and are bad at predicting the future. It’s not their fault, as predicting the long-term future is hard. Consider Jeanne Calment, the oldest recorded person, born in February 1875 and who passed away in August 1997. She was born in a world without light bulbs, which were invented in 1880. By the time of her death, an AI had defeated the world chess champion, Garry Kasparov, in May 1997. The amount of technological change that she witnessed is mind-boggling. Now consider that the pace of technological change ahead of us is much faster than what Jeanne Calment experienced. It is hard to fathom what a child born last year, before ChatGPT existed, will experience in the next century, let alone the next 20 years.
In the AI-imbued future, we must remain vigilant about the balance of power, the efficacy of regulation, and the ability of technology to democratize or consolidate power. In this unpredictable dance of fear and power, we must strive to ensure the steps we take lead to a better future for all.