Gary Marcus: AI, Catastrophe and the Coming Reality Check

Gary Marcus and Andy Coulson recording Crisis What Crisis podcast at RAID 2025

At RAID 2025 in Brussels, psychologist and cognitive scientist Gary Marcus sat down with journalist Andy Coulson for a live recording of the Crisis What Crisis podcast. What followed was a sharp, wide-ranging conversation about AI’s existential risks, misinformation, the data “heist” of the century – and why the next crisis in artificial intelligence may be economic rather than existential.

From Doom to Dystopia

Asked about the idea that AI might bring about humanity’s extinction – the infamous P(doom) – Marcus was quick to separate hyperbole from realism.

“Humans are resilient,” he said. “If the AIs came for us, we would fight back. Extinction is pretty unlikely. The possibility of catastrophe is fairly high – and the probability of dystopia is quickly approaching 100%.”

By “catastrophe”, Marcus means an event in which AI contributes to the deaths of 1% of the global population through escalation of conflict, bioweapons, or systemic misinformation. “Think of COVID,” he added. “It was catastrophic but not extinction-level. AI could play a catalytic role in something like that.”

The Machine Guns of Disinformation

Marcus described generative AI as “the machine guns – sometimes the nukes – of disinformation.”

“Misinformation is ancient,” he said, “but generative AI is perfect at mimicking human tone, journalism and authority, with no moral fibre or awareness of reality.”

He warned that authoritarian regimes are already exploiting these tools, while Western societies remain complacent. “The only way to fight misinformation is to care,” he said bluntly. “And right now, I’m not sure enough people do.”

The antidote, he argued, begins with AI literacy – teaching critical thinking from a young age.

“When you have systems that can fluently answer any question — and make things up while doing so — critical thinking becomes essential.”

The Great Data Heist

Turning to the question of data and intellectual property, Marcus was equally forthright.

“The AI companies are trying to replace everybody,” he said.

He criticised the wholesale use of copyrighted material for model training without consent or compensation. “We’re giving away the world’s intellectual property to companies worth half a trillion dollars,” he said. “It’s the greatest theft of all time.”

The AI industry, Marcus argued, has shifted dramatically since 2020 – from scientific curiosity to profit-driven data laundering. “It’s no longer about understanding intelligence,” he said. “It’s about monetising every scrap of data people produce.”

The Coming AI Bubble

If AI’s social and ethical impact worries Marcus, so does its economic sustainability.

“Large language models are autocomplete on steroids,” he said. “They don’t understand the world and that limits them.”

Despite the hype, he noted, most companies experimenting with LLMs see little return. “An MIT study found 95% of corporate pilots didn’t deliver ROI,” he said. “Everyone bet on GPT-5 being a miracle. It’s late and not as good as expected.”

Marcus warned that the current wave of investment mirrors past tech bubbles. “We’ve seen this before with expert systems in the 1980s,” he said. “Few survivors, massive write-offs.”

He predicted that valuations could fall sharply:

“There’s no moat. Everyone’s building the same thing. It’ll become cheap as chips – but unprofitable. OpenAI valued at $500 billion doesn’t make sense. We may be looking at a WeWork moment for AI.”

The Missing Regulation

Marcus has long advocated for an international AI regulatory body – a kind of “pre-flight check” for high-risk models, similar to the FDA for drugs.

After testifying before the U.S. Senate in 2023, he left optimistic. “It was bipartisan. Everyone agreed we’d been too slow with social media,” he recalled. “And then… nothing happened.”

Today, he says, the political will has evaporated. “We have no mechanism to stop a company releasing a system that could, say, enable bioweapons. Each new release raises the risk, and there’s nothing in place to prevent it.”

A Cultural Shift — and a Note of Hope

Despite his dark forecasts, Marcus sees glimmers of hope – especially among the young.

“I saw kids online who, instead of saying ‘that’s BS,’ say ‘that’s AI.’ That’s progress,” he said with a smile. “When nine-year-olds start calling out nonsense as AI, maybe the next generation won’t be so easily snowed.”

Download the full episode on Apple Podcasts or Spotify.

About Gary Marcus

Gary Marcus is a cognitive scientist, author, and founder of several AI startups, including Geometric Intelligence (acquired by Uber). A prominent critic of deep learning orthodoxy, he is author of Taming Silicon Valley and writes a widely read Substack with over 80,000 subscribers.