AI’s Balancing Act

Legislating AI involves weighing up a range of human and industrial concerns, as speakers from UNESCO, Meta, the European Commission, national government and regulators explored at RAID Digital


When people think of the European Commission, they tend to think of legislation and regulation. But, as Juha Heikkilä, Adviser for Artificial Intelligence at the Directorate-General for Communications Networks, Content and Technology explained, “The European Commission is not just a regulatory body, it is a funding agency.”

He pointed out that the European Commission has increased its investment in AI to €1 billion per year. It is hoped that this, along with the AI Act, will boost the global competitiveness of the tech sector in Europe and its member states.

“We all want to tap the opportunities of AI,” said Andreas Hartl, Head of the Artificial Intelligence Division, Federal Ministry for Economic Affairs and Energy of Germany. “What we see is an interdependence of innovation and investment on one side with trust on the other side. They are two sides of the same coin, and both are interlinked. What we need is a clever combination of regulation with legroom for innovation.”


But what are AI and the AI Act?

Artificial intelligence is best explained in contrast to an algorithmic-based system.

“An algorithmic-based system is designed and encoded by humans, so humans know exactly how it is working,” explained Claude Castelluccia, Commissioner, Commission Nationale de l’Informatique et des Libertés (CNIL).

“In AI, there are models that are automatically generated from training data. The models are opaque; there is a lack of transparency. Usually they have good performance, but we cannot really explain why. That raises questions and ethical concerns.”

The AI Act, a draft of which has been published by the European Commission and is under review by the European Parliament and the European Council, is a global first in attempting to regulate AI horizontally – that is, across industry sectors.

The Act addresses the benefits and risks of AI, but reactions to it have varied. “One of the things that has surprised me is that there have been some reactions which have stemmed from assumptions about what is in the AI Act, rather than what is actually there,” said Heikkilä. “There is an overall feeling, rightly or wrongly, that the EU tends to regulate heavily.

“But if you look at the detail, at the proposal as it stands, there are some misconceptions about what is possible and what is not possible. On the other hand, we have also been quite heartened by the feedback we have received. The risk-based approach in particular has been praised as the right way to do it.”

As well as safeguarding against risks, the legislation will also need to support the tech sector. So what does one of the world’s biggest technology companies make of the Act’s potential?

“We are dependent on good regulation to develop the innovation of the future, and the EU has an opportunity to provide the incentives, the certainty and the clarity that is needed for innovators to develop their businesses and for researchers to invest in the next cutting-edge technologies,” said Cecilia Álvarez, Director of Privacy Policy Engagement at Meta.

Álvarez identified three positive approaches relevant to the AI Act: risk-based, harmonised and evidence-based.

“With respect to the risk-based approach, we believe that we need rules that are aimed at controlling the greatest likely threats, as opposed to preventing every theoretical harm – as well as addressing the opportunity cost, which is also a risk – so as to really understand the benefits that the use of AI in the relevant context may provide. This would allow us to collectively ensure that AI’s greatest risks are addressed, and that we do not lose the benefits enabled by flourishing innovation in the EU.

“We also believe that AI regulation should be harmonised across jurisdictions, starting with the EU. Regulation of AI should aim for coherence, with internationally consistent standards that are based on widely accepted principles. We believe this will ensure the clarity and certainty that is needed to attract innovators to the EU, not only from an entrepreneurial point of view but from a research point of view as well.

“Regulation of technology must be evidence-based and take advantage of policy prototyping projects that can provide a safe testing ground for experimenting with different policy approaches and assessing their impact before they are enacted.

“There are many things we do not know; this is about the future. For responsible innovation, we therefore need these kinds of safe spaces in which to experiment before going to market. This will hopefully mitigate unintended consequences and ensure that we move from principles to practice effectively.”


Sandboxes

Crucial to this is the use of sandboxes, which enable an AI system’s legal implications to be tested in a protected environment before its release on the market. “This enables you to assess implications that are not immediately obvious,” said Heikkilä.

“We are very pleased to see the EC support for sandboxes; it is a positive concept,” said Álvarez. “But it is not enough to have a concept; we also need a system that identifies the advantages for the organisations joining the sandbox, and we need to provide protection for those organisations.

“This is not only for organisations to be able to learn from the experience, but also for policymakers to feed back into the policy process.”

The AI Act’s risk-based approach means that the majority of systems on the market are unaffected by it. But how the Act would apply to biometric identification is an area of particular concern.

“There is a lot of attention to issues like biometric identification, which is a very sensitive issue, but that’s not really surprising,” said Heikkilä.

Biometric recognition is under debate in Germany. “The coalition agreement of the new majority says that biometric recognition should be banned,” said Hartl. “There are still many details to work out – we need a balance of security, liberty and civil rights.”


An integrated approach

As if regulating AI and its applications across industries weren’t challenging enough, it is also important to consider how AI legislation fits with other sectors of technology regulation – in particular, data protection.

“AI is very often built on personal data,” said Castelluccia. “The risk of AI compared with an algorithmic-based system is that the data used to generate those models can be manipulated, or just be of bad quality. We also need to be sure not to create biases due to the training data. The bias could come from the way you collect and sample the data. The systems can also leak personal data, which can create privacy issues.”

It is also crucial to be sure the systems are secure and robust.

“One of the biggest challenges is how to evaluate the performance of an AI system across different populations. We need to be sure that systems do not discriminate; but in order to do that we need to get datasets.

“There have been many reports on ethics in AI, but very few practical frameworks. If we want to help start-ups, we need to ensure we are publishing methodologies and rules that will help companies to be sure their systems are ethical and compliant with different regulations.”

This holistic approach is also supported by Meta. “I believe AI cannot work in isolation – not only with respect to GDPR but also the charter of human rights and freedoms,” said Álvarez.

“We should concentrate on what is common, and there is a lot.”

Just as GDPR has set global standards in data protection, the European Commission hopes the AI Act will lead the way globally. But cross-border agreements will need to be forged.

With reference to the UK Government strategy for international transfers post-Brexit, Álvarez said: “It is crucial for the development of AI – and many other societal and individual goals – to have a free flow of data with trust.”

Hartl highlighted a White House proposal for an AI Bill of Rights whose priorities on biometrics are similar to Germany’s. “It is good to see nations sharing the same values aligning their core regulatory approaches,” he said.

“Five to ten years is a long time in AI,” said Heikkilä. “Things may not be constant. Things may change. We will see if the AI Bill of Rights goes in the same direction as we have been travelling. We see in the US there is a lot of common ground in the values space.

“The AI Act will be developed by means of harmonised standards. We are seeking common ground in technical standards.

“The point is about ethics and how that will be taken into practice. In the European Commission we have a high-level group on trustworthy AI. We have subscribed to the OECD recommendation on AI, so we have the common ground there.”

Álvarez is optimistic that the AI Act can raise the bar globally. “Through good regulation and mechanisms to keep it updated, in collaboration with expert stakeholders, we have the chance in the EU to set standards that may drive or greatly influence best practice across the globe. Democratic societies must undertake a joint effort built on democratic norms and values.”

This reinforced the words of Gabriela Ramos, Assistant Director-General for Social and Human Sciences, UNESCO, in her opening address to the panel, in which she said: “Taking a supranational perspective, as the RAID conference does, is absolutely the right approach.”

This article, written by RAID Director Ben Avison, is based on the panel AI in Action, moderated by Milly Doolan of the Core Team at European AI Forum and Managing Director of EuroNavigator, at RAID Digital in May 2022.