Europe takes aim at ChatGPT with what could soon be the West’s first AI law. Here’s what it means


Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Lionel Bonaventure | AFP | Getty Images

A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation – bringing it a step closer to becoming law.

The approval marks a landmark development in the race among authorities to get a handle on AI, which is evolving at a staggering pace. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.

The law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.

The rules also specify requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators, given how advanced they’re becoming and fears that even skilled workers will be displaced.

What are the rules?

The AI Act classifies applications of AI into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

Unacceptable risk applications are banned by default and cannot be deployed in the bloc.

They include:

  • AI systems using subliminal, manipulative, or deceptive techniques to distort behavior
  • AI systems exploiting the vulnerabilities of individuals or specific groups
  • Biometric categorization systems based on sensitive attributes or characteristics
  • AI systems used for social scoring or evaluating trustworthiness
  • AI systems used for risk assessments predicting criminal or administrative offenses
  • AI systems creating or expanding facial recognition databases through untargeted scraping
  • AI systems inferring emotions in law enforcement, border management, the workplace, and education

Several lawmakers had called for the measures to be made more stringent to ensure that ChatGPT would be covered.

To that end, requirements have been imposed on “foundation models,” such as large language models and generative AI.

Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.

They would also need to ensure that the training data used to inform their systems does not infringe copyright law.

“Providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety and the environment, democracy and rule of law,” Ceyhun Pehlivan, counsel at Linklaters and co-head of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC.

“They will also be subject to data governance requirements, such as checking the appropriateness of data sources and potential bias.”

It is important to stress that, while the legislation has been approved by lawmakers in the European Parliament, it is a long way away from becoming law.

Why now?

Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Google announced several new AI updates on Wednesday, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on certain tasks.

Novel AI chatbots such as ChatGPT have enthralled many technologists and academics with their ability to produce human responses to user prompts, powered by large language models trained on massive amounts of data.

But AI technology has been around for years and is integrated into more applications and systems than you might imagine. For example, it determines which viral videos or food photos you see on your TikTok or Instagram feed.

The EU proposals aim to provide some rules of the road for AI companies and organizations using AI.

Tech industry response

The rules have raised concerns in the tech industry.

The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had been broadened too much and that it could capture forms of AI that are harmless.

“It is worrying to see that broad categories of useful AI applications – which pose very limited risks, or none at all – would now face strict requirements, or might even be banned in Europe,” Boniface de Champris, policy manager at CCIA Europe, told CNBC via email.

“The European Commission’s original proposal for the AI Act took a risk-based approach, regulating specific AI systems that pose a clear risk,” de Champris added.

“MEPs have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”

What the experts are saying

Dessislava Savova, head of continental Europe for the tech group at law firm Clifford Chance, said the EU rules would set a “global standard” for AI regulation. However, she noted that other jurisdictions, including China, the US and the UK, are quickly developing their own responses.

“The long-range reach of the proposed AI rules naturally means that AI players in all corners of the world need to care,” Savova told CNBC via email.

“The right question is whether the AI Act will set the only standard for AI. China, the US and the UK, to name a few, are defining their own AI policy and regulatory approaches. They will undoubtedly closely watch the AI Act negotiations in tailoring their own approaches.”

Savova said the latest AI Act draft from parliament will put into law many ethical AI principles organizations are pushing for.

Sarah Chander, senior policy advisor at European Digital Rights, a Brussels-based digital rights campaign group, said the laws would require foundation models like ChatGPT to “undergo testing, documentation and transparency requirements.”

“While these transparency requirements will not eradicate infrastructural and economic concerns with the development of these vast AI systems, they do require technology companies to disclose the amounts of computing power required to develop them,” Chander told CNBC.

“There are currently several initiatives to regulate generative AI around the world, such as in China and the US,” Pehlivan said.

“However, the EU AI Act is likely to play a pivotal role in the development of such legislative initiatives around the world and propel the EU to again become a standard-setter on the international scene, similarly to what happened with the General Data Protection Regulation.”
