UK boosts AI safety research with £8.5m to combat deepfakes and cyberattacks

The UK government is stepping up its fight against AI threats with a significant investment in research. Tech Secretary Michelle Donelan MP announced £8.5 million ($10.8 million) at the AI Seoul Summit to fund new research projects focused on “systemic AI safety.”

This initiative aims to tackle a two-pronged challenge: protecting society from attacks on AI systems themselves (such as deepfakes and data poisoning) and preventing malicious actors from using AI to cause harm (e.g., advanced cyberattacks).

Shahar Avin of the government’s AI Safety Institute will spearhead the research program, run in partnership with UK Research and Innovation and The Alan Turing Institute. While applicants must be UK-based, they are encouraged to collaborate with international AI safety experts.

The urgency of this research is clear. The UK’s National Cyber Security Centre (NCSC) warned in January of a “near-certain” increase in cyberattacks, particularly ransomware, fueled by malicious use of AI. This concern is echoed by a recent survey in which 30% of information security professionals reported encountering deepfake incidents in the past year.

However, the picture isn’t entirely bleak. The same survey found that 76% of respondents believe AI technology enhances information security, with 64% planning to increase their AI security budgets.

“This funding represents a major step forward in ensuring safe AI deployment,” says Christopher Summerfield, research director at the AI Safety Institute. “We must prepare for a future where AI is deeply integrated into our lives. This program will generate solutions and ensure good ideas translate into practical applications.”

The institute has already made progress on this front. Its recent research uncovered critical vulnerabilities in popular AI chatbots, underscoring the need for robust security measures.

This investment, coupled with the “historic first” agreement by 16 major AI companies in South Korea to develop AI models responsibly, demonstrates a growing global commitment to safe and beneficial AI development.