"The right information at the right time is deadlier than any weapon."
-Martin Connells/Dolores Abernathy, Westworld
I previously wrote about the increasing value of security as AI is adopted, particularly for existing platforms and societal infrastructure. The main concern there was filtering through a deluge of content and dealing with changing interaction dynamics. However, there is a slightly more nefarious application of AI that is bound to become more prevalent, and one that I will illuminate here. Think of it as an advanced bot scenario, further complicated by the fact that people are becoming more accustomed to interacting with AI as they would with individuals.
The Scenario
Take a social platform like Reddit. Each user goes about their experience writing, posting, and interacting with others. Today we have AI agents that can mimic this behavior. Imagine creating a network of these agents that seemingly go about digital lives as regular people, unbeknownst to other users. Perhaps 1,000. Maybe 10,000. Or 100,000. Even 1,000,000 AI agents mimicking daily digital life. Some of them may focus on the best cookie recipes while commenting on nuances of pop culture. Others may find the latest scientific papers and summarize the findings to instigate discussion of their merits. However, at infrequent moments, these agents coordinate to promote a message, shift a narrative, or otherwise trigger the response they have been biding their time to provoke.
I call it the Synecdoche Conundrum. Why? It comes from the plot of the movie Synecdoche, New York. In the movie, the main character receives a MacArthur Genius Grant and attempts to rebuild New York City as a live play. That is, actors live the lives of individuals going about their days within a replica of NYC, waiting to showcase their message to an audience. In a similar vein, AI bots will act like regular people going about their days, all while waiting to push the messaging of their creators. Thus the Synecdoche Conundrum is as follows:
The Synecdoche Conundrum: Given a sufficient quantity of AI agents successfully mimicking real human activity on a platform, how can you distinguish between real people and fully digital actors? How can you trust the content on a platform? How do you identify anomalous behavior that has become part of the broader signal?
Detecting Oddities
At first glance, you might think the behavior in this scenario would be easy to detect. Clearly, there should be some way to spot false actors within a platform. For instance, you could point to Meta taking down nearly 10,000 accounts associated with election interference. However, identifying and taking action against swaths of bots requires that atypical behavior be observed in their actions. The bots Meta took down were continuously spewing foreign propaganda and mostly interacting with one another, an easy, constant signal to pick up on because it differs from the general populace.
How does this become an issue for those who analyze network dynamics? Businesses tend to use social media analysis to get a pulse on what consumers think of their business. Any social media sentiment analysis looks at an overall projected signal. Subtle signals get amplified, obvious outliers tend to be removed, and noise is cancelled out. This results in a sort of "average" takeaway of how people are discussing the business, denoted by the directional level of sentiment towards various attributes. Note that filtering out outliers tends to be more art than science and is meant to remove oddities. A variety of techniques are used to accomplish this, but the majority fall under the classification of anomaly detection.
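To make that concrete, below is a minimal sketch of the kind of outlier filtering such an analysis might rely on, here a simple z-score rule over per-user sentiment scores. The data, the threshold, and the flag_outliers helper are all illustrative assumptions on my part, not any particular vendor's pipeline.

```python
# A minimal, assumed example of z-score outlier filtering before aggregation.
import numpy as np

def flag_outliers(scores, z_threshold=2.0):
    """Flag sentiment scores that sit far from the bulk of the distribution."""
    scores = np.asarray(scores, dtype=float)
    mean, std = scores.mean(), scores.std()
    if std == 0:
        return np.zeros(len(scores), dtype=bool)
    return np.abs(scores - mean) / std > z_threshold

# Mostly moderate opinions plus one account spamming a +5.
scores = [0.5, -1.0, 1.2, 0.3, -0.8, 2.0, -1.5, 0.8, -0.2, 5.0]
mask = flag_outliers(scores)
clean = [s for s, is_outlier in zip(scores, mask) if not is_outlier]
print("flagged:", int(mask.sum()),
      "| mean before:", round(float(np.mean(scores)), 2),
      "| mean after filtering:", round(float(np.mean(clean)), 2))
```

A rule like this stops working once the "outliers" make up a large enough share of the data, which is exactly the failure mode described next.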
In detecting anomalies, there is the inherent assumption that odd behavior looks fundamentally different from regular behavior. However, these techniques fail when odd behavior occurs in such a large quantity that it becomes a substantial portion of the observed behavior in the system. In essence, the odd behavior is no longer odd. While these may be minority positions, they are not inconsequential. At the scale of thousands of coordinated AI bots that can mimic normal user behavior, these interactions start to look like regular noise instead of an anomalous signal when looking at the data. If every so often their messaging is tailored toward a predetermined cause (say, very left or right wing), that blends with normal user behavior, since every user has certain predispositions.
Let's make it a little more real with a concrete, albeit simplified, example. Assume we have 10 users on a system and we want to know the sentiment for 6 different topics. We'll score their sentiment on a scale of -5 (negative) to +5 (positive) to understand how users view the different topics. We'll compute not just the average sentiment for each topic, but also the average of only the positive values and the average of only the negative values. This will tell us how the extremes change. We'll also do the same for users, as opposed to topics, to understand whether a user tends to be more positive or negative in general, as well as their behavior within each side of the scale.
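Here's a quick sketch of how those aggregates could be computed. The 10x6 score matrix is randomly generated placeholder data, not the values behind the figures below, and the summarize helper is purely for illustration.

```python
# Assumed placeholder data: 10 users x 6 topics, integer sentiment in [-5, 5].
import numpy as np

rng = np.random.default_rng(0)
sentiment = rng.integers(-5, 6, size=(10, 6))

def summarize(values):
    """Overall average plus averages of only the positive and only the negative scores."""
    pos, neg = values[values > 0], values[values < 0]
    return {
        "avg": values.mean(),
        "avg_pos": pos.mean() if pos.size else 0.0,
        "avg_neg": neg.mean() if neg.size else 0.0,
    }

topic_stats = {f"Topic {chr(65 + j)}": summarize(sentiment[:, j]) for j in range(6)}
user_stats = {f"User {i + 1}": summarize(sentiment[i, :]) for i in range(10)}

for topic, stats in topic_stats.items():
    print(topic, {k: round(float(v), 2) for k, v in stats.items()})
```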
Figure 1. Baseline case of human behavior with averages for topics and users.
Here's the baseline of random behavior across human users and selected topics. For this example, note that the average sentiment for topic E is -0.8. Now let's add five different AI agents that want to make topic E more positive without affecting much of anything else.
Figure 2. Topic and user averages showing how AI bots can modify behavior undetected.
The AI bots were able to flip topic E from a slightly negative topic to a slightly positive one, all while mimicking normal behavior and avoiding extremes. Notice that we compute the change (delta) between the averages before and after adding the AI bots. We were able to increase the topic average for topic E by nearly 2 points while keeping the change everywhere else to about 0.5 points or less. If you look at the ranks, which show where each user stands in relation to the others, you'll notice that none of the AI bots are in the top 3 or bottom 3 for any topic. That means bot behavior can't be detected just by looking at extremes.
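If you want to play with the mechanics yourself, here's a rough sketch of the bot-injection experiment using randomly generated placeholder data. The exact numbers won't match the figures above, but the setup is the same: five bots score blandly everywhere except a mild positive nudge on topic E, and we then compare topic averages and rank positions before and after.

```python
# Assumed placeholder data; topic E is column index 4.
import numpy as np

rng = np.random.default_rng(1)
topics = list("ABCDEF")
humans = rng.integers(-5, 6, size=(10, 6)).astype(float)  # 10 baseline human users

bots = rng.integers(-2, 3, size=(5, 6)).astype(float)     # 5 bots: bland scores everywhere...
bots[:, 4] = rng.integers(2, 4, size=5)                    # ...except a mild push on topic E

before = humans.mean(axis=0)
combined = np.vstack([humans, bots])
after = combined.mean(axis=0)

for name, b, a in zip(topics, before, after):
    print(f"Topic {name}: {b:+.2f} -> {a:+.2f} (delta {a - b:+.2f})")

# Rank check: do any bots (rows 10-14) land in the top 3 or bottom 3 for topic E?
order = np.argsort(combined[:, 4])
extremes = set(order[:3]) | set(order[-3:])
print("bots at the extremes for topic E:", [i for i in range(10, 15) if i in extremes])
```

Because the bots' topic E scores stay inside the range of ordinary human scores, they tend not to show up at the extremes even as they pull the average upward.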
Impact
The Synecdoche Conundrum is a bit reminiscent of the investor pitch scene in Westworld. How do you distinguish robots from humans? A question that might bug you even more: if you can't distinguish them, do you even care?
Perhaps surprisingly, people are becoming more comfortable treating AI as real entities in their daily lives. There's already groundwork laid by the myriad characters people interact with in video games. Character.ai just released a group chat functionality for talking with multiple AIs. Meta is releasing AI bots of famous individuals across their various platforms. All of these point to a blurring of the lines between human and AI.
Many of the worrisome initial applications for AI bots center around election interference. It isn't hard to imagine that a legion of undetected bots could push and shift messaging to influence the thinking of the general public. I'm not making this up out of thin air. At the eleventh hour of writing this, I was made aware of two security experts, Bruce Schneier and Latanya Sweeney, who have both discussed this topic under slightly different terminology. They discuss persona bots as a means of exerting influence.
Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says -- or amplifies -- something political. These persona bots, as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.
It’s not just election interference, though. If you are a business or brand, how can you be assured that a competitor isn’t using social channels to seed poisoned information for you to base decisions on? One potential scenario: a competitor launches a counterintuitive campaign to subtly promote your brand in such a way that, when you analyze the social media content, the results of your analysis point you toward decisions that erode your competitive advantage or entrench those of your competitor.
For instance, if consumers of your product category really care about quality, a competitor could mask this signal on platforms by pushing thousands of AI bots to show that consumers seemingly love your product for the low cost. Perhaps they also send out marketing studies about consumer attitudes shifting from quality to lower cost. If you double down on creating a low-cost product, when consumers actually want quality, you will likely lose market share. Meanwhile your competitor improves their quality, since they know the actual signal coming from consumers.
Remarks
AI isn't going away. In fact, the pace of usage is only going to increase. Our existing complex systems, platforms, and infrastructure will need to adapt in order to take on these upcoming changes. The Synecdoche Conundrum is one upcoming stimulus and stress test for our systems. It works by applying AI technology that is available today to mimic human behavior. Being able to deal with this scenario makes our systems stronger. It's an exciting time to develop new technology that can help us adapt to our changing world.