The European Union (EU) has made significant progress toward legislating artificial intelligence (AI) with a draft law, the AI Act, positioning it as a global frontrunner in providing a regulatory model for this rapidly advancing technology.

AI Regulation: An Urgent Need

This draft law would establish constraints on the riskiest applications of AI, such as facial recognition software, while requiring makers of AI systems like OpenAI's ChatGPT to disclose more about the data used to build them. However, the legislation is still at an early stage, and a final version is not anticipated until later this year.

Despite this, the 27-nation bloc’s move is in stark contrast to the slower pace of AI regulation in the United States and other major Western governments. The release of ChatGPT last year sparked further urgency in discussions surrounding the technology’s potential impacts on employment and society.

Simultaneously, policymakers worldwide, from Washington to Beijing, are racing to contain a burgeoning technology that worries even its early pioneers. The U.S. White House, for instance, has proposed policies for testing AI systems before public release and for safeguarding privacy rights. China, meanwhile, has drafted rules requiring chatbot makers to adhere to its strict censorship rules and imposing more control over how AI systems use data.

Legislative Challenges and the EU’s Risk-Based Approach

Regulating AI is undoubtedly a daunting task due to the rapid emergence of new capabilities. For instance, generative AI systems like ChatGPT, which generate text, images, and videos in response to prompts, were not adequately addressed in the EU law’s earlier versions. However, the current draft passed by the European Parliament would enforce new transparency requirements on generative AI, including the publishing of copyrighted material used for training and implementing measures to prevent illegal content generation.

According to Francine Bennett, acting director of the Ada Lovelace Institute, an organization that advocates for new AI laws, the EU proposal is a crucial landmark. She added that some form of regulation is needed despite the inherent difficulty of regulating a fast-evolving technology.

The proposed law adopts a “risk-based” approach to AI regulation, focusing on applications that carry the highest potential for human harm, such as AI systems used in critical infrastructure, legal systems, public services, and government benefits. Prior to deploying these technologies in everyday use, creators would be required to conduct risk assessments akin to the drug approval process.

The Computer & Communications Industry Association, a tech industry group, emphasized the need for the European Union to avoid broad regulations that could stifle innovation. Boniface de Champris, the group’s Europe policy manager, stressed that Europe’s new AI rules should focus on clearly defined risks while offering flexibility for developers to deliver useful AI applications.

Ongoing Debates

The use of live facial recognition remains contentious: the European Parliament voted to ban it, but debate continues over potential exceptions for national security and law enforcement purposes. The draft law would also prohibit companies from scraping biometric data from social media to build databases, a practice that drew scrutiny after Clearview AI employed it.

Next Steps

Following Wednesday’s vote, representatives of the three institutions of the European Union — the European Parliament, the European Commission, and the Council of the European Union — will negotiate the final version of the law, hoping to reach an agreement by year’s end.