Content builds relationships. Relationships are built on trust. Trust drives revenue.
-Andrew Davis
There’s a super interesting experiment that’s gone around the web called the “Rubber Hand Illusion” (see video below). Essentially, one of the test subject’s hands is hidden from their line of sight while a fake hand is placed in view, roughly where the hidden hand would normally be. The experimenter then strokes the hidden hand and the fake hand at the same time, and the subject starts to feel the sensations as if they were coming from the fake hand. Some people think it’s about how the brain can be deceived. My take is that it actually shows how quickly the human brain can integrate an external tool while simultaneously establishing feedback loops. I think something similar is happening now between the various AI Chat programs and our minds. We are beginning to use these tools as extensions of our minds, and that has interesting implications for how they might be monetized. A strong driver for how AI Chat becomes monetized is the level of trust we have in the system.
Establishing Trust
What's the current advantage of the various AI Chat programs floating around? They allow you to do things faster. However, in order to take advantage of that speed, you either need verification methods in place for the output (such as reviewing it yourself or something more systematic) or you need to implicitly trust what the system provides. The former is slower but more accurate, since you need to build the methods or take time to review, while the latter is faster but more dangerous.
Let's say you aren't cavalier and want to verify the outputs that start coming through. Let's also say you use these chat systems for a lot of quick fact-finding. At first you are likely going to verify every output against sources like Wikipedia, and maybe dig deeper if things seem a little off. For instance, you might ask AI Chat to solve the potato paradox:
Fred brings home 100 kg of potatoes, which (being purely mathematical potatoes) consist of 99% water (being purely mathematical water). He then leaves them outside overnight so that they consist of 98% water. What is their new weight?
Then it reveals the answer:
The surprising answer is 50 kg.
Since this answer is counter-intuitive, you might search Wikipedia to verify it and find an explanation. You might also google the typical makeup of potatoes and learn that they are usually about 80% water, not 99%. However, the math in the original problem still holds (the quick check sketched after this paragraph shows why), so you move on satisfied with the result. As time goes on and you find that the system returns more and more correct information, you are going to spend less time verifying, and likely not verify as deeply. At this point your trust in the system is growing.
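For this particular answer, the verification is just a couple of lines of arithmetic. Here is a minimal sketch in Python (my own illustration, not part of the chat exchange): the 1 kg of non-water solids never changes, so once the potatoes are 98% water those solids must make up 2% of the total, and the total must be 50 kg.

```python
# Quick check of the potato paradox arithmetic (illustration only).
start_weight = 100                    # kg of purely mathematical potatoes
solids = start_weight * (1 - 0.99)    # ~1 kg of non-water material, which never changes
new_weight = solids / (1 - 0.98)      # the same solids must now be 2% of the total
print(round(new_weight))              # 50
```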
At some point in the future, you will choose not to verify the output. It might be something simple like "How long does a lion live for?" or "What are the proper proportions to make a perfect peanut butter and jelly sandwich?". It will start simple, but you will remove your verification step, either because the impact of being wrong is low or because the results align with what you already know. As long as the system continues to provide correct results, you will keep gaining trust in it, until eventually you trust the system implicitly and only verify outputs that strongly conflict with your understanding of the world. At this stage, you are no different from the cavalier individual who implemented no verification methods, and the chat system is now an extension of your mind.
If you think to yourself “that would never happen to me”, you are wrong. In fact, it already has. The average amount of work people are willing to do to find “correct” information has decreased over time. How many people click on the first Google search result and go no further? 25%. How often do people use the autocomplete feature on Google? 23%. How much of search traffic stops at the first page of results? 95%. You might say this is a measure of how much Google’s product has improved, but it really shows how our willingness to do more in-depth work has decreased. We have transitioned from having many conversations over a period of weeks, to going to the library for information, to using a web index, to using Google to find information, to only looking at the first result on Google, to now asking AI Chat to do the work for us.
Monetizing Chat
With implicit trust in a system like AI Chat, there is the potential for that trust to be stretched. While a strong argument can be made that AI Chat will monetize access, I wouldn’t be surprised if the web becomes more metered; in fact, that’s part of the underlying vision of web3. Beyond that view, let’s talk about options that are both surprising and interesting. As AI Chat progresses quickly, various monetization strategies have emerged: utility charges for access to information, bidding for plugins, bidding for responses, and agent-to-agent communication. Each of these strategies presents unique opportunities and challenges in the AI chat landscape.
Utility charges for access to information: Monetizing AI chat could lead to a shift from ad-based revenue models to utility charges for accessing information. While this could potentially make information less freely available, it could also help to reduce the influence of advertising on AI-driven systems. However, this model could create a barrier to entry for users unable or unwilling to pay for access to information. In turn, this might exacerbate existing inequalities and digital divides, limiting the potential benefits of AI Chat systems to a select few.
Bidding for plugins: When a user asks an AI Chat to perform a task, such as ordering a pizza, the AI must decide which service to use. Companies like Pizza Hut, Domino's, and local pizza shops could bid for priority in the AI Chat’s plugin selection, potentially influencing the user's choice (a rough sketch of how bid-weighted selection might work follows after these strategies). While this could create a competitive market and encourage businesses to improve their offerings, it may also exploit user trust and lead to an unhealthy focus on winning bids rather than providing the best service.
Bidding for responses: Since AI Chat responses are generated through stochastic methods, outputs can vary each time. This variability creates an opportunity for companies to bid for priority in the responses provided by AI Chat, potentially exploiting user trust in the system. This strategy could lead to a "pay-to-play" environment where those who can afford to bid higher would dominate the AI Chat’s responses. As a result, the quality and relevance of information provided to users may be compromised, diminishing the usefulness of AI chat systems.
Agent-to-agent communication: With the use of agents on the rise, people will likely have their own personal agent in the future. These personal agents are likely to talk to each other, make decisions, and filter information without their owners even knowing. It’s possible that companies will work with agent providers to tilt those decisions in their favor or to let some information “leak” through pre-approved filters. Either of these could reduce the perceived effectiveness of agents.
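To make the bidding strategies above a little more concrete, here is a rough, hypothetical sketch of how a chat system might blend a relevance score with a paid bid when picking which plugin handles a request. The plugin names, bid values, and scoring weights are all invented for illustration; nothing here describes how any real AI Chat product actually works.

```python
from dataclasses import dataclass

@dataclass
class Plugin:
    name: str
    relevance: float   # how well the plugin matches the user's request, 0-1
    bid: float         # hypothetical amount paid for priority, in dollars

def choose_plugin(plugins: list[Plugin], bid_weight: float = 0.1) -> Plugin:
    """Pick the plugin with the highest blended score.

    bid_weight controls how much money can tilt the choice away from
    pure relevance; at 0.0 the bids are ignored entirely.
    """
    def score(p: Plugin) -> float:
        return p.relevance + bid_weight * p.bid

    return max(plugins, key=score)

# Hypothetical pizza example from the post: three services compete for one order.
candidates = [
    Plugin("Pizza Hut", relevance=0.70, bid=2.00),
    Plugin("Domino's", relevance=0.75, bid=3.00),
    Plugin("Local pizza shop", relevance=0.80, bid=0.00),
]

print(choose_plugin(candidates, bid_weight=0.0).name)   # Local pizza shop (relevance only)
print(choose_plugin(candidates, bid_weight=0.1).name)   # Domino's (bids tilt the choice)
```

The same pattern could apply to bidding for responses: swap plugins for candidate answers and let bids nudge the ranking or sampling weights. The nudge is invisible to the user, which is exactly why it tests the trust described earlier.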
Implications
The above monetization strategies might sound far off, but they become more likely every day. Every few years a new marketing channel comes along and people have to figure out how to use it effectively. That’s happening right now with AI Chat. Just think about how Google’s platform has evolved: there are many ‘Sponsored’ ads that look like normal results, so much so that some of my relatives hadn’t even realized what they were until I asked them, “Did you realize you just clicked on an ad?”. There are a lot of interesting implications and applications of these methods of monetization. For instance:
Does user trust in the system erode with these methods? If the results continue to be accurate, interesting, or enjoyable, does it mean people will continue using the system even if outputs are being nudged in different directions?
How do brands build awareness? In a world where people’s main source of information is a chat interface, how can new things be found that aren’t being explicitly asked for? Is the only option to bid for responses and insert language, or is there an “explore” mode that helps people learn about products and services they might need but wouldn’t know to look for?
Does it become a race to the bottom for how much margin someone is willing to accept? If someone is placing an order through chat, how much is a company willing to pay for that customer? The order is immediate and the marketing funnel disappears. Are companies willing to pay much more for customer acquisition since the order is guaranteed and there really are no other marketing costs?
How can I track the information lineage behind a result? Can I review the information that was used to produce the result, whether sponsored or not, so that I can dig deeper if needed? This is a form of verification that users might demand if they feel their trust is being violated.
How can I alter the decision logic of AI Chat? If I want to order a pizza but I don’t like Domino’s, how can I ensure that AI Chat doesn’t order a Domino’s pizza even if they are bidding on me? How much explicit direction do I need to provide to the AI Chat? Will my preferences be saved for the future, and how does that affect how brands bid on me in other ways?
As AI chat systems continue to advance and integrate into our lives, we will undoubtedly witness the ongoing evolution of monetization strategies that strike a balance between effectiveness and user acceptance. The future of AI chat systems will have profound implications for our trust, the expansion of our minds, and the digital landscape.