EU politicians back new rules on AI ahead of landmark vote

World’s first AI Act will aim to set guardrails on the technology and protect ‘fundamental rights’.

European politicians in two key committees have approved new rules to regulate artificial intelligence (AI) ahead of a landmark vote that could pave the way for the world’s first legislation on the technology.

On Tuesday, two committees in the European Parliament – on civil liberties and consumer protection – overwhelmingly endorsed the provisional legislation to ensure that AI complies with the protection of “fundamental rights”.

A vote in the legislative assembly is scheduled for April.

The AI Act will aim to set guardrails on a technology being used in several industries, ranging from banking and cars to electronic products and airlines, as well as for security and police purposes.

“At the same time, it aims to boost innovation and establish Europe as a leader in the AI field,” the parliament said in a statement.

The law is widely seen as a global benchmark for governments hoping to take advantage of the potential benefits of AI while guarding against risks that range from disinformation and job displacement to copyright infringement.

The legislation, which was proposed by the European Commission in 2021, had been delayed by divisions over the regulation of language models that scrape online data and the use of AI by police and intelligence services.

The rules will also regulate foundation models, or generative AI such as the systems built by Microsoft-backed OpenAI, which are trained on large sets of data and able to learn from new data to perform various tasks.

Eva Maydell, the MEP for Tech, Innovation and Industry, called the approval on Tuesday a “result we can be proud of” and one “that encourages social trust in AI while still giving companies the space to create and innovate”.

Deirdre Clune, the MEP for Ireland South, said it was “another step closer to having comprehensive rules on AI in Europe”.

This month, European Union countries endorsed a deal reached in December on the AI Act, which aims to better control governments’ use of AI in biometric surveillance and to regulate AI systems more broadly.

France secured concessions to lighten the administrative burden on high-risk AI systems and offer better protection for business secrets.

The act requires foundation models and general-purpose AI systems to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

Big tech has remained guarded about the requirements and the law’s potential effect on innovation.

Under the law, tech companies doing business in the EU will be required to disclose data used to train AI systems and carry out testing of products, especially those used in high-risk applications such as self-driving vehicles and healthcare.

The legislation bans indiscriminate scraping of images from the internet or security footage to create facial recognition databases, but includes exemptions for the use of “real-time” facial recognition by law enforcement to investigate terrorism and serious crimes.
