The challenges of regulating AI and data across jurisdictions

Elizabeth Denham CBE, Chair-Designate of the Jersey Data Protection Authority, chaired the opening panel at GPA (Global Privacy Assembly) Jersey with Cari Benn, Associate General Counsel, Microsoft; Teki Akuetteh, Executive Director, Africa Digital Rights’ Hub LBG; Boniface de Champris, Senior Policy Manager, Computer & Communications Industry Association; and Miriam Wimmer, Director of the National Data Protection Authority of Brazil (ANPD).

These are complex, fast-changing times for the innovation, deployment and regulation of technology around the world.

“We have cross-regulatory tension and ambitions to regulate AI. In the current environment, privacy leaders are expected to do more, often with less,” said Elizabeth Denham CBE in her opening remarks to the GPA Jersey panel, entitled Does AI Complement Data Protection or Is It in Conflict? “Add to this the clamour for governments and industry to adopt AI solutions and not be left behind.”

“And yes, there are stark differences and different challenges in the state of responsible AI around the world, beginning with culture and with the state of maturity of data protection and privacy legislation, arguably the foundational component for responsible AI.”

The international landscape
Teki Akuetteh, Founder & Executive Director, Africa Digital Rights’ Hub LBG, said: “When it comes to the adoption and the use of technologies, almost all African countries are at the helm of it, and so we adopt these technologies almost as quickly as they come out. And what this does is that it poses a significant challenge to the regulator. Fortunately, most of our data protection laws are very principle-based. But what you then need is a very proactive regulator that is well resourced.”

Miriam Wimmer, Director of the National Data Protection Authority of Brazil (ANPD) said: “I think the challenge is that countries are legislating at different speeds. So we have the AI Act, which is of course important. But in Brazil, we also have legislation which may have nuances, and the same goes for other countries. So I think it would be really useful if we could, at the international level, perhaps move to more concrete recommendations that could guide countries as they seek more concrete implementation domestically.”

A principle-based approach
Microsoft is a global leader in the development and deployment of AI.

“In terms of how we think about global regulation and our global presence, one way that we think about it is from a geographic scope,” said Cari Benn, Associate General Counsel, Microsoft.

“There are many countries around the world that have privacy laws that have similar principles that don’t necessarily have the same requirements. And then we have jurisdictions, including the US at a federal level, that don’t have privacy requirements.

“And so our approach at Microsoft is very principle-based, where our principle is that privacy is a fundamental human right and, regardless of where you live around the world, you have those rights to be able to see what data Microsoft has about you, to be able to delete it, to be able to take a copy of it and to make meaningful choices about what we collect and what we do with your data, whether you’re an enterprise customer or using some of our consumer products.

“And because privacy is a fundamental human right, we do that whether or not the law gives you protections in your jurisdiction.”

Physical, digital and app infrastructure
“The other way that we think about scaling worldwide is technological scale,” said Benn. “There are three layers of the AI infrastructure: the first layer is the physical infrastructure; the second layer is the digital infrastructure; and the third layer is the app infrastructure.

“So at the physical infrastructure layer, we think about how do we provide energy to data centres that are powering AI around the world, and how do we do that in a way that’s sustainable? How do we build chips, or encourage other companies to build chips, that are able to power these large generative AI models, these large language models?

“At the digital infrastructure layer, that’s where those foundation models that are developed by OpenAI, by Google, by Microsoft and other companies all sit. And that infrastructure means that we have large language models that we provide, but we also enable developers around the world to develop their own language models.

“We have a service called Azure, which hosts about 20,000 language models developed by other organisations and people around the world. We also encourage developers globally to build on that app layer so that organisations and people have AI that’s meaningful to them.

“Across all three of those layers, we need to think about how privacy and data protection globally make sure that we’re providing not only what the law requires, but really the best experience for our company, for our customers and for people. We want them to have faith not only in what Microsoft produces in those layers, but in the whole ecosystem across the infrastructure: that it is trustworthy, that you know you can put data into it, invest in that infrastructure and be safe and trusted.”

The stakeholders in the tech ecosystem include not just companies and governments, but also society.

“There are roles for large companies and organisations to play because of the scale of AI, but there certainly is a strong partnership with governments and then individuals to make sure that globally, everyone can participate in this new infrastructure,” said Benn.

“There is diversity in terms of our approach to how to regulate AI, and I really see an opportunity to do that in a way that provides a lot of education, in terms of how the technology works, which I think will lead to better outcomes in terms of taking advantage of the opportunities in AI.”

Europe’s “patchwork of regulations”
Boniface de Champris, Senior Policy Manager, Computer & Communications Industry Association, cited Mario Draghi’s report on the future of European competitiveness to highlight the challenges Europe faces. “Draghi says that, with the world on the cusp of an AI revolution, Europe cannot afford to remain stuck in the middle with technologies and industries of the previous century.

“And it’s really fundamental for Europe to not only innovate in artificial intelligence, but also to integrate and benefit from this technology by bringing it into its legacy industries, which will really benefit from artificial intelligence.

“And today, a very complex and often inconsistent regulatory environment and patchwork of regulations in Europe makes it very difficult for companies not only to develop and innovate in artificial intelligence, but also to integrate and use AI.”

Transparency, fairness and data stewardship
Denham asked Benn: “Do you think data protection and AI regulation are aligned or are they in conflict?”

“I would say directionally and at a high level, privacy and AI in my mind are very aligned,” said Benn. “There are concepts in privacy that we have worked with for a long time that are similar to what we’re seeing in the EU AI Act and some other EU AI legislation.

“Those things are transparency, so being clear about your practices; fairness, making sure that the way you are developing or deploying AI is fair to the people using it; and data stewardship, so across the data life cycle from collection through deletion, how do you make sure that you are honouring people’s data, keeping it secure and not using it for unexpected purposes?

“Responsible AI and security, across different legal domains: that should be directionally where we’re headed.”