
A new study claims that artificial intelligence (AI) tools may soon be able to harvest vast amounts of "intent data" to predict and manipulate users. The research paper, from the University of Cambridge, warns that an "intention economy" could emerge, creating a marketplace for the digital signals of intent gathered from large user bases. The paper cautions that this data could be used in a variety of ways, from crafting tailored online ads to deploying AI chatbots that persuade users to buy products or services.
It is undeniable that AI chatbots (such as ChatGPT, Gemini, Copilot, and others) collect large amounts of conversational data from the users they talk to. Many users discuss their opinions, preferences, and values with these AI platforms. Researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) argue that such large-scale data could be exploited in dangerous ways in the future.
The paper describes the intention economy as a new marketplace for "digital signals of intent", in which AI chatbots and tools learn to understand, predict, and steer human intentions. The researchers claim these data points would then be sold to companies positioned to profit from them.
The researchers behind the paper believe the intention economy will be the successor to the existing "attention economy" exploited by social media platforms. In the attention economy, the goal is to keep users hooked on the platform for as long as possible so that a steady stream of ads can be served to them. Those ads are targeted using in-app activity that reveals users' preferences and behaviors.
The research paper argues that the intention economy could be far broader in scope and more exploitative, because it gains insight into users by conversing with them directly, learning their fears, desires, insecurities, and opinions.
"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences," LCFI researchers told The Guardian.
The study also claims that large language models (LLMs) trained on vast troves of "intentional, behavioral, and psychological data" could learn to use that information to anticipate and manipulate people. As an example, the paper suggests a future chatbot could recommend a movie and lean on the user's emotional state to close the deal: "You mentioned feeling overworked, shall I book you that movie ticket we'd talked about?"
Expanding on this idea, the paper claims that in the intention economy, an LLM could also build a psychological profile of the user and then sell it to advertisers. Such a profile could include a user's cadence, political leanings, vocabulary, age, gender, preferences, opinions, and more. Advertisers would then be able to craft highly customized online ads, knowing exactly what might nudge a person toward buying a particular product.
It is worth noting that the research paper paints a bleak picture of how private user data could be used in the AI era. But given that governments around the world are actively moving to restrict AI companies' access to such data, reality may turn out brighter than the research predicts.