What Are the Regulatory Challenges Posed by Generative AI?

Generative AI systems, which can produce human-like text, images, and designs, have become widely used in society. Generative AI companies and applications such as ChatGPT, Jasper, and Stability AI have been attracting increasing investment. Alongside the rising adoption of such technology, however, are the concerns it raises in the AI governance community.

A particular concern is that generative AI can create and promote false and misleading content. Ayima’s insights on fake news and advertising explain that misinformation can run rampant across adverts, harming users by preying on their insecurities and beliefs. As such, social media giants have been called upon to police media that may contain false information. To do so, they use algorithms to comb through and evaluate millions of live adverts on Facebook, Google, and Twitter.
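
To illustrate the flavour of such automated screening (a toy keyword heuristic, purely hypothetical and nothing like the machine-learning classifiers the platforms actually run), an initial pass might flag adverts for human review along these lines:

```python
# Toy ad screener: flag adverts whose text matches suspicious phrases,
# then route them to human reviewers. Entirely illustrative; real
# platforms rely on far more sophisticated machine-learning models.
SUSPICIOUS_PHRASES = ["miracle cure", "guaranteed returns", "doctors hate"]

def flag_for_review(ad_text: str) -> bool:
    """Return True if the advert should be escalated to a human reviewer."""
    text = ad_text.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

ads = [
    "Try this miracle cure for weight loss!",
    "Spring sale: 20% off all shoes.",
    "Guaranteed returns of 40% a month, risk-free.",
]
for ad in ads:
    status = "review" if flag_for_review(ad) else "pass"
    print(f"{status}: {ad}")
```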

Of course, the task of combating fake news doesn’t rest solely with these platforms. Governments also have a role to play in nipping the problem in the bud. One of many evolving challenges facing regulators is the need to develop and standardise notification mechanisms that allow users to report biased, fake, or harmful outputs produced by generative AI.
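
As a minimal sketch of what a standardised reporting mechanism might look like (all field names and categories here are hypothetical, not drawn from any existing standard), a shared report format could be expressed in Python as:

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

# Hypothetical report categories; an actual regulation would define these.
CATEGORIES = {"biased", "fake", "harmful"}

@dataclass
class OutputReport:
    """A user report about a problematic generative-AI output."""
    provider: str        # service that produced the output
    model_id: str        # identifier of the model version
    category: str        # one of CATEGORIES
    output_excerpt: str  # the contested content, or a hash of it
    description: str     # the user's explanation of the problem
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def validate(self) -> None:
        """Reject malformed reports before they are submitted."""
        if self.category not in CATEGORIES:
            raise ValueError(f"category must be one of {sorted(CATEGORIES)}")
        if not self.output_excerpt.strip():
            raise ValueError("output_excerpt must not be empty")

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Usage: build, validate, and serialise a report for submission.
report = OutputReport(
    provider="example-ai.example",
    model_id="demo-model-1",
    category="fake",
    output_excerpt="The moon landing was staged in 1972.",
    description="Presents a debunked claim as fact.",
)
report.validate()
print(report.to_json())
```

The point of standardisation is that regulators and researchers could then aggregate reports across providers, rather than dealing with a different ad-hoc complaint form for every service.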

Pinning down a definition
Perhaps the most apparent issue in regulating generative AI is that the technology may evade a clear-cut definition for regulation or legislation. A 2022 Carnegie Endowment article on AI regulation argues that human-based definitions (“the ability of machines to perform tasks that normally require human intelligence”), rather than capability-based ones (AI described as producing “predictions, recommendations, or decisions”), can help regulators understand how AI systems impact people and communities. In turn, this can allow them to create flexible policies that accommodate the risks that come with generative AI’s evolution.

As it stands, however, liability-based regulatory schemes still leave much to be desired. The same Carnegie article recommends that policymakers forgo a precise definition and instead directly target the anticipated threats that generative AI can bring about.

Copyright violations
Another challenge surrounding the regulation of generative AI is copyright violation. Generally, the datasets behind generative AI models are gathered from the internet without the consent of living artists or creators. Daniela Braga, a member of the White House Task Force for AI Policy, shares in a World Economic Forum report that: “If these models have been trained on the styles of living artists without licensing that work, there are copyright implications.”

Notably, a class action suit has been filed against GitHub, Microsoft, and OpenAI. The companies reportedly allowed Copilot, an AI-based code generator, to reproduce licensed code that may then be used commercially, leaving the code’s original authors unrecognised and uncompensated. More broadly, artists and creators can find it difficult to opt out of having their works included in the training datasets of generative AI models.

Integration with data protection regulation
Regulating generative AI is closely linked to other areas of technology regulation, most notably data protection. In our previous post, Commissioner Claude Castelluccia stated that AI models often hinge on personal data. He mentioned that “the risk of AI compared with an algorithmic-based system is that data used to generate those models can be manipulated, or just be of bad quality. We also need to be sure not to create biases due to the training data. The bias could be the way you collect and sample the data. The systems can leak personal data that can create privacy issues.”
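
Castelluccia’s point about collection and sampling can be shown with an entirely synthetic toy example (no real data or system is involved): if one group is over-represented in the collection process, the training set no longer reflects the population.

```python
import random

random.seed(42)

# Synthetic population: groups A and B each make up 50% of users.
population = ["A" if i % 2 == 0 else "B" for i in range(100_000)]

# Biased collection: group A is three times as likely to be scraped
# (e.g. because the scraped platform over-represents that group).
def biased_sample(pop, size):
    weights = [3 if group == "A" else 1 for group in pop]
    return random.choices(pop, weights=weights, k=size)

sample = biased_sample(population, 10_000)
share_a = sample.count("A") / len(sample)
print("Group A in population: 50.0%")
print(f"Group A in training sample: {share_a:.1%}")  # roughly 75%
```

A model trained on such a sample would systematically skew towards group A, which is exactly the kind of bias the quote warns about.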

Currently, numerous European data protection agencies have imposed hefty fines on businesses that use AI-enabled biometric analysis tools trained on images scraped from online sources without a legal basis. However, as a 2023 Osborne Clarke write-up on liability issues points out, the AI regulation currently under discussion in the European Union focuses largely on safety risks to people’s fundamental rights and freedoms. It does not include provisions for dealing with IP and personality rights in training data, an issue complicated by the cross-jurisdictional nature of online data scraping and use.

For all its capabilities, the regulatory issues that generative AI poses underscore that the technology is not without its risks.

Read through our latest posts at RAID for more insights on the future of tech regulations.

Exclusively written for raid.tech by Juliet Baldwin