In a recent op-ed for the Financial Times, Vera Jourová, Vice-President of the European Commission, underscores the critical role of labelling AI-generated content. Her piece coincides with the European Union's unveiling of groundbreaking legislation aimed at regulating artificial intelligence (AI) for safe and ethical use. The legislation responds to the pressing need for a comprehensive rulebook to govern computer processes that surpass human capacities. Echoing Isaac Asimov's foresight, it articulates a framework intended to prevent AI from harming humans while holding developers accountable.
The legislation classifies AI applications by risk, imposing stringent requirements on high-risk domains such as medical devices and systems that influence voter behaviour. Explicit prohibitions on unacceptable practices, including biometric categorisation by religion or race and emotion recognition in workplaces, are designed to safeguard fundamental human rights.
To nurture safe AI development, the EU plans to provide supercomputing access to European AI startups and small and medium-sized enterprises (SMEs), backed by an annual commitment exceeding €1 billion for AI research. Pending confirmation by member states and the European Parliament, the legislation is expected to take full effect by 2026.
The EU’s proactive stance aims to position Europe as a central hub for safe AI, highlighting the importance of preserving human rights, truth, and intellectual property amid transformative technological strides.
During her keynote speech at RAID 2023 in Brussels on September 26, Jourova emphasised, “No matter how fast technologies evolve, they must always serve a human purpose and leave no one behind.”