Jack Loughran Mon 30 Dec 2024
Collected at: https://eandt.theiet.org/2024/12/24/ai-assistants-could-enable-social-manipulation-industrial-scale-researchers-warn
AI assistants could be used to forecast and influence the future decisions of the consumers that use them, with those ‘intentions’ later sold on to third-party companies, researchers have said.
A team of AI ethicists from the University of Cambridge say the tech sector is at the beginning of a “lucrative yet troubling new marketplace for digital signals of intent”, which would include actions such as buying tickets to see films or voting for political candidates.
In the future, AI agents, including chatbot assistants and even digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data gleaned from their conversations with users.
These agents could then use this data, alongside knowledge of users' online habits, to build the levels of trust and understanding that allow for “social manipulation on an industrial scale”, the researchers claim.
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” said Yaqub Chaudhary, a visiting scholar at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).
“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.
“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”
The researchers believe that while some intentions picked up by the AI will be fleeting, classifying and targeting them at speed could be “extremely profitable” for advertisers.
Large language models (LLMs) could also be used to target users based on their cadence, politics, vocabulary, age, gender, online history and even their preferences for flattery and ingratiation.
This information-gathering would be linked with brokered bidding networks to maximise the likelihood of achieving a given aim, such as selling a cinema trip. For example, the AI could ask: “You mentioned feeling overworked – shall I book you that movie ticket we’d talked about?”
LLMs could also steer conversations in the service of particular platforms, advertisers, businesses and even political organisations, the study suggests.
While the researchers say the intention economy is currently only an “aspiration” for the tech industry, they track early signs of the trend through published research and hints dropped by several major tech players.
These include an open call for “data that expresses human intention … across any language, topic and format” in a 2023 OpenAI blogpost, while the director of product at Shopify – an OpenAI partner – spoke of chatbots coming in “to explicitly get the user’s intent” at a conference the same year.
Jonnie Penn, a historian of technology at the LCFI, said: “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences.”