European Union lawmakers have given final approval to a groundbreaking law governing artificial intelligence, positioning Europe as a leader in regulating this critical technology. The new law, known as the EU AI Act, is set to reshape how businesses and organizations use AI across various sectors, including healthcare and policing.
The law imposes blanket bans on certain “unacceptable” uses of AI, such as social-scoring systems and biometric categorization tools that infer sensitive characteristics like race or political leanings. It also prohibits AI systems that infer emotions in schools and workplaces and restricts automated profiling intended to predict future criminal behavior.
Moreover, the EU AI Act outlines a separate category of “high-risk” uses of AI, particularly in areas like education, hiring, and access to government services. These high-risk applications will be subject to additional transparency and regulatory obligations.
Companies producing powerful AI models, such as OpenAI, will also face new disclosure requirements under the law. Additionally, all AI-generated deepfakes must be clearly labeled to address concerns about manipulated media and disinformation.
The legislation, approved by the European Parliament, is the culmination of a proposal introduced in 2021. It underscores the EU’s proactive approach to regulating AI, contrasting with the United States, which has yet to make significant progress on federal AI legislation.
The EU AI Act is set to take effect in approximately two years, signaling a significant step in safeguarding AI use while promoting innovation and ethical standards in Europe.
The contrast underscores the divergent approaches the two regions are taking to the challenges and opportunities presented by artificial intelligence technologies.