Lawmakers in Europe signed off Wednesday on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.
The European Parliament vote is one of the last steps before the rules become law, and they could serve as a model for other governments working on similar regulations.
Regulating AI has taken on more urgency as rapid advances in chatbots like ChatGPT show the benefits the emerging technology can bring, and the new risks it poses.
Here’s a look at the EU’s Artificial Intelligence Act:
HOW DO THE RULES WORK?
The measure will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable.
Riskier applications, such as AI for hiring or tech targeted to children, will face tougher requirements, including being more transparent and using accurate data.
It will be up to the EU’s 27 member states to enforce the rules. Regulators could force companies to withdraw their apps from the market.
In extreme cases, violations could draw fines of up to 40 million euros ($43 million) or 7% of a company’s annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.
WHAT ARE THE RISKS?
One of the EU’s main goals is to guard against threats posed by AI and protect fundamental rights and values.
That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior.
Also forbidden is AI that exploits vulnerable people, including children, or uses subliminal manipulation that can result in harm, for example, an interactive talking toy that encourages dangerous behavior.
Predictive policing tools, which crunch data to forecast who will commit crimes, are also out.
Lawmakers beefed up the original proposal from the European Commission, the EU’s executive branch, by widening the ban on real-time remote biometric identification in public. The technology scans passers-by and uses AI to match their faces or other physical traits to a database.
A contentious amendment to allow exceptions for law enforcement purposes, such as finding missing children or preventing terrorist threats, did not pass.
AI systems used in categories like employment and education, which would affect the course of a person’s life, face tough requirements such as being transparent with users and taking steps to assess and reduce the risk of bias from algorithms.
Most AI systems, such as video games or spam filters, fall into the low- or no-risk category, the commission says.
WHAT ABOUT CHATGPT?
The original measure barely mentioned chatbots, mainly by requiring them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover generative AI after it exploded in popularity, subjecting that technology to some of the same requirements as high-risk systems.
One key addition is a requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video and music that resemble human work.
That would let content creators know if their blog posts, digital books, scientific articles or songs have been used to train algorithms that power systems like ChatGPT. Then they could decide whether their work has been copied and seek redress.
WHY ARE THE EU RULES SO IMPORTANT?
The European Union isn’t a big player in cutting-edge AI development; that role belongs to the U.S. and China. But Brussels often plays a trend-setting role with regulations that tend to become de facto global standards, and it has become a pioneer in efforts to rein in big tech companies.
The sheer size of the EU’s single market, with 450 million consumers, makes it easier for companies to comply than to develop different products for different regions, experts say.
But it’s not just a crackdown. By setting common rules for AI, Brussels is also trying to develop the market by instilling confidence among users.
“The fact this is regulation that can be enforced and companies will be held liable is significant” because other places like the United States, Singapore and Britain have merely offered “guidance and recommendations,” said Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties.
Other countries might want to adapt the EU’s rules, he said.
Businesses and industry groups warn that Europe needs to strike the right balance.
“The EU is set to become a leader in regulating artificial intelligence, but whether it will lead on AI innovation still remains to be seen,” said Boniface de Champris, a policy manager for the Computer and Communications Industry Association, a lobbying group for tech companies.
“Europe’s new AI rules need to effectively address clearly defined risks, while leaving enough flexibility for developers to deliver useful AI applications to the benefit of all Europeans,” he said.
Sam Altman, CEO of ChatGPT maker OpenAI, has voiced support for some regulation of AI and signed on with other tech executives to a warning about the risks it poses to humankind. But he also has said it’s “a mistake to go put heavy regulation on the field right now.”
Others are playing catch-up on AI rules. Britain, which left the EU in 2020, is charting its own approach. Prime Minister Rishi Sunak plans to host a world summit on AI safety this fall.
“I want to make the U.K. not just the intellectual home but the geographical home of global AI safety regulation,” Sunak said at a tech conference this week.
WHAT’S NEXT?
It could be years before the rules fully take effect. The next step is three-way negotiations involving member countries, the Parliament and the European Commission, and the text could see more changes as they try to agree on the final wording.
Final approval is expected by the end of this year, followed by a grace period for companies and organizations to adapt, often around two years.
Brando Benifei, an Italian member of the European Parliament who is co-leading its work on the AI Act, said they would push for quicker adoption of the rules for fast-evolving technologies like generative AI.
To fill the gap before the legislation takes effect, Europe and the U.S. are drawing up a voluntary code of conduct that officials promised at the end of May would be drafted within weeks and could be expanded to other “like-minded countries.”