Can regulators get ahead of the AI curve?

Regulators and policymakers from Europe and the US joined big tech to debate the challenges and opportunities of AI regulation at CPDP Conference. RAID Director Ben Avison caught some pertinent comments from expert speakers.

Leonardo Cervera-Navas, Director, European Data Protection Supervisor (EDPS) said: “AI regulation is a new adventure, different from data protection; it is a new field. We are extremely excited about the opportunity to shape this regulation, to make this technology beneficial for humankind.

“We are learning as we speak. We need good legislation and we need it quickly, because tech doesn’t wait. It should be straightforward, clear and easy to implement. We need strong supervision; here people’s lives are at stake. We need international standards; we can’t make the mistakes we have made in the past. We need to coordinate better between like-minded countries.”

Karolina Mojzesowicz, Deputy Head of Unit Data Protection, DG JUST, European Commission said: “In the area of these very fast-moving technological developments, there is always the challenge of how to frame innovation without stifling it; how to preserve values while allowing developments to arise so that we can all profit.

“Our AI regulation is the world’s first comprehensive regulation of AI. We have a technology-neutral, risk-based approach, paired with prohibitions on the deployment of AI in certain very targeted areas.

“We can’t allow ourselves adventures with such powerful technology without assurance that the values that we stand for are protected.”

 

An amplifier of issues

Julie Brill, Corporate Vice President for Global Privacy and Regulatory Affairs, and Chief Privacy Officer, Microsoft said: “Regulation is going to be necessary. I also think we already have legislation that needs to be enforced. It’s hard to develop appropriate balanced regulation if you aren’t thinking about the full landscape.

“Global society faces huge challenges: climate change, healthcare, food security – AI will be extremely helpful in solving these problems. There are one billion people on the planet who have accessibility issues; generative AI, as well as AI more generally, has been able to produce solutions for people who are deaf or blind so they can navigate the world in ways that were impossible years ago.

“We are able to provide productivity tools that will enable you to do your work more efficiently, remove the drudgery and focus on the creativity. These productivity tools are amazing. We call this the democratisation of AI; it allows everyone to empower themselves.

“In many ways, AI is an amplifier of issues we are already facing. If we approach AI in a responsible way, if we understand the benefits and risks, we can move forward in ways that benefit society that are hard to imagine.”

 

Preserving democracy

Deirdre Mulligan, Deputy U.S. Chief Technology Officer for Policy, White House Office of Science and Technology Policy (OSTP) said: “We must mitigate the serious risks of AI, including misinformation and discrimination. And there are questions about how AI is going to reorganise work, and how that is going to affect jobs and the economy.

“The President has spoken about how AI can help tackle the world’s toughest challenges. If you combine those possibilities with efforts on rights protection and risk mitigation, we can ensure that AI is deployed for the public good.

“Humanity has faced misinformation throughout our history, but AI is a gamechanger. Those same tools that improve our productivity in the workplace can also be weaponised.

“The US government has already taken several steps to try and promote responsible innovation in AI. We have an AI risk management framework. President Biden has signed an executive order to root out bias in new technologies.

“People think AI is a lawless zone. It isn’t; many of our most important civil and consumer rights protections apply to these new technologies.

“Regardless of the technical landscape, we need to make sure that we are addressing these issues in ways that are consistent with democracy. We are centred in rights and democratic values, no matter what the technology.

“We are seeking to work collaboratively with countries and democracies across the globe to ensure technology develops in that right way, to preserve democratic institutions. Those developing the world’s most powerful AI systems are American, and they reflect American values. We are competing with China, but we are not looking for conflict. We are seeking to work together where we can.”

Leonardo Cervera-Navas, Director, European Data Protection Supervisor (EDPS) said: “If we want to win in this race, we have to fight the battle in our land, which is the land of the human-centric approach to technologies. If we stick to that we will win the race for sure, and on that I don’t think there is any difference between Europe and the US – I think we share the same values and principles.”

Julie Brill, Corporate Vice President for Global Privacy and Regulatory Affairs, and Chief Privacy Officer, Microsoft said: “Just because you can do something doesn’t mean you should. We decided that certain products did not meet democratic values.

“We do need to have additional rules in place when it comes to AI. The EU AI Act has a lot of risk-based requirements; that’s going to be deeply important and could directly address synthetic voice as well as emotion detection.”

 

Risk and innovation

Karolina Mojzesowicz, Deputy Head of Unit Data Protection, DG JUST, European Commission said: “We chose a risk-based approach, based on what you are going to use the AI solutions for. The question is, how do we design legislation so that it diminishes the amplification of problems – misinformation, inequality, discrimination, unfairness? This was the big question – where to draw the line, where to say certain deployments should be excluded ex ante. Transparency obligations needed to go further than the GDPR because of black boxes and complexity – it is hard for individuals to understand what is going on.

“The two legislative regimes – GDPR and the AI Act – will complement each other.

“What are the unacceptable risks? What is the price we are willing to pay for increased productivity? Social scoring? Any regulation will be faced with the argument that you might be excluding something good. But standing up for certain values has a price that we are ready to pay. Our proposal mirrors what we stand for.”

Leonardo Cervera-Navas, Director, European Data Protection Supervisor (EDPS) said: “Are data protection and the GDPR compatible with AI? Yes, provided that we engage in a constructive and innovative way. Data protection should be an enabler for new technologies and not an obstacle. However, we should not be afraid of banning and prohibiting things that are unacceptable.

“This dichotomy of data protection and innovation – I cannot accept that. This only applies if you approach data protection with a narrow mind. If you read the GDPR as it should be read – on the principles of a risk-based approach, accountability, data protection by design – it is extremely flexible.

“It is our responsibility to use these tools, leading by example. We will show the world you can use AI ethically and in compliance with the law. We should avoid a technophobic approach to AI. Let’s avoid a lose-lose game where companies don’t invest enough to protect citizens and governments punish them.”

 

What regulators need to learn

Julie Brill, Corporate Vice President for Global Privacy and Regulatory Affairs, and Chief Privacy Officer, Microsoft said: “I would say that regulators know generative AI, but really understanding the tech stack, what it takes to produce generative AI, and the layers at which regulation should or should not take place is, I think, something deeply complicated that regulators need to learn.

“Getting technical expertise into regulatory agencies is going to be a huge challenge over the coming years. There are a lot of challenges to getting this regulation done well, in a balanced way, and done quickly.”

Deirdre Mulligan, Deputy U.S. Chief Technology Officer for Policy, White House Office of Science and Technology Policy (OSTP) said: “Generative AI is a tool that is being deployed in a wide range of industrial sectors. Some of them have different ways in which they regulate for safety, efficacy, bias and privacy.

“If you think about medical devices, we have a lot of rules in place at a federal level. Even when tools or inventions get an FDA stamp of approval, there is still ongoing monitoring of how they develop in the field, which is really important. Some of those same issues, when we think about managing the risk of AI, need to be front of mind. It’s not just whether we manage risks; it’s when and where, and how we maintain continuous oversight.”