Policy experts highlight harms of agentic AI, urging regulatory action

Digital Regulatory Cooperation Forum’s Responsible AI Forum

RAID Director Ben Avison reports from the Digital Regulatory Cooperation Forum’s Responsible AI Forum in London

Old certainties about what we know to be real and true are dissolving, replaced by new, uncharted hazards as AI reshapes human interaction. At the Digital Regulatory Cooperation Forum’s Responsible AI Forum, perspectives on how to protect consumers in a world of increasingly agentic AI ranged from stark warnings to calls for proactive change.

According to Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy, the most urgent problem presented by agentic AI is that people don’t know who they are interacting with. “People will be faced with interactions in which they won’t know the company or the data behind it.”

These problems are exacerbated as we start to treat AI agents as people. “We are social creatures,” said Labour MP Tom Collins. “We’ve not started to fathom how vulnerable that makes us.”

So what kind of “person” are we interacting with? “A prejudiced, delusional, psychopathic, bullshitting yes-man!” he said.

“My fear is that getting from safety to where we are now is a long step. The concern I really harbour is that, when we realise how big that gap is, the tech companies will be seen as ‘too big to fail’.”
Tom Collins, Labour MP

Karen Yeung, Professor at Birmingham University, asked: “Do we want to live in a world of fake people who stroke your ego and want to take your money?”

Deepfakes and misinformation are not new risks, but AI is making them much more dangerous, said Rocio Concha, Director of Policy and Advocacy at Which?. “We need to tackle this with urgency,” she said.

The traditional ways of protecting consumers no longer suffice, Yeung said. AI and digital technologies can and do malfunction regularly, in ways that would not be acceptable in physical products. And product safety standards are not sufficient to protect against digital harms, such as the use of AI-enabled glasses to photograph women surreptitiously.

Yeung pointed to how the regulation of other high-risk industries evolved, following a series of early mishaps. “Aviation and pharma have sociotechnical foundations of trust now,” she said. “We need to be clear about why regulation is not the enemy of good innovation.”

On the question of who is responsible, Collins said: “I have to stick my hand up – as government.” But what to do about it all? “My fear is that getting from safety to where we are now is a long step. The concern I really harbour is that, when we realise how big that gap is, the tech companies will be seen as ‘too big to fail’.”

But regulating AI is essential, he said. “It’s never too early and it’s never too late. We have to do it: we have to set that line of safety. We’re going to have to fight a lot of conflicting interests.”