The European Parliament's Internal Market and Civil Liberties committees have approved a draft proposal for the AI Act, paving the way for what could become the world’s first artificial intelligence (AI) regulation.
The latest draft proposal, which was approved by a large majority in Thursday’s vote, includes amendments to the European Commission’s (EC) first draft of the AI Act, published in April 2021.
Amendments to the original legislative proposal aim to ensure that AI systems are safe, transparent, traceable, environmentally friendly, non-discriminatory and overseen by humans. The amendments also call for a uniform, technology-neutral definition of AI that can cover future developments in the field, as well as a risk-based approach to regulation.
‘We are on the verge of putting in place landmark legislation that must resist the challenge of time,’ says Brando Benifei, member of the European Parliament and co-rapporteur on the regulation. ‘It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening [and] to steer the political debate on AI at the global level.
‘We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.’
Tiered risks and use
When it comes to the use of AI systems, the EC has proposed a tiered and risk-based approach to regulation:
- Minimal or no risk
- AI with specific transparency obligations
- High risk
- Unacceptable risk.
Depending on the risk category an AI system falls into, the regulation would either permit its unrestricted use, permit it subject to certain obligations, permit it only after further assessment and compliance with AI requirements, or prohibit it outright.
Generative AI rules
The boom in systems such as ChatGPT and their rapid evolution has been broadly welcomed around the world, but it has also raised eyebrows among technology experts and industry leaders. In March, an open letter signed by more than 1,000 technologists, urging AI labs to pause the training of AI systems more powerful than GPT-4, dominated headlines worldwide for weeks.
While the 2021 draft of the AI Act barely mentions chatbots across its 108 pages, the draft proposal adopted today defines rules for generative foundation models such as GPT.
Such systems ‘would have to guarantee robust protection of fundamental rights, health and safety and the environment, democracy and rule of law,’ reads an official statement published by the European Parliament. ‘They would need to assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database.’
GPT-like systems would, in effect, fall into the ‘AI with specific transparency obligations’ category. They would need to disclose clearly that their content was generated by AI, and the models would have to be designed to prevent them from generating illegal content or publishing copyrighted data. In the current proposal, chatbots are not deemed to pose a high or unacceptable risk to society.
AI application vs AI development
The EC notes that the rules under the AI Act would not apply to research activities and AI components ‘provided under open-source licenses’.
‘The new law promotes regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment,’ the statement reads.
It adds, however, that it aims to ‘boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights.’
The next step for the EU AI Act is endorsement by the whole European Parliament in a vote expected to take place between June 12 and June 15. Negotiations with the European Council on the final look of the law will follow the June vote.