The European Union has unveiled new proposals to regulate artificial intelligence

The EU has been at the forefront of legislation on data protection and the regulation of artificial intelligence. On Tuesday, the European Union unveiled a legal framework governing the use of artificial intelligence, according to the Wall Street Journal. The framework includes a set of regulations outlining how companies and governments may use AI technology, including restrictions on police use of facial recognition software in public places and a ban on certain types of AI systems, in one of the broadest moves on the subject to date.


According to the draft, the EU will impose a total ban on AI-driven social credit systems used for mass surveillance, while placing strict restrictions on “high-risk” applications in specific areas.




As early as February 19, 2020, the European Commission published a white paper entitled “White Paper on Artificial Intelligence – A European Approach to Excellence and Trust,” which discussed how to promote the development of artificial intelligence and address the related risks. The draft, published on 21 April, is a legal extension of the white paper and aims to provide a solid legal framework for a trusted AI ecosystem.

The legal framework announced by the European Commission will limit the use of artificial intelligence in a range of activities, including self-driving cars, hiring decisions, bank lending, university admissions decisions and test scoring. It will also restrict the use of artificial intelligence systems by law enforcement agencies and courts. These areas are considered “high risk” by the EU because they may pose a threat to people’s safety or fundamental rights.

In addition, some AI applications, including real-time facial recognition in public places, will be banned altogether.




But there could be exemptions for national security or other purposes, such as preventing terrorist attacks, searching for missing children or addressing other public safety emergencies.

The regulation will require companies that use artificial intelligence in high-risk areas to provide regulators with evidence of its safety, including risk assessments and documentation explaining how the technology makes decisions. These companies must also ensure that their creation and use of such systems is subject to human oversight.

The 108-page regulation, if passed, will reportedly have a profound impact on major tech companies such as Amazon, Google, Facebook and Microsoft, which have poured resources into developing artificial intelligence, as well as dozens of other firms that use AI software to develop drugs, underwrite insurance and assess creditworthiness.




For serious violations, regulators can fine companies as much as 6 percent of their global annual revenue, the report said, although in practice, EU officials rarely impose maximum fines.

The bill is one of the most extensive of its kind proposed by a Western government and is part of the EU’s effort to expand its role as a global technology enforcer, the report said.



In the European Union, such laws would have to be approved by the European Council, which represents the bloc’s 27 national governments, and by the directly elected European Parliament, a process that could take years, the report added.

Some digital rights activists welcomed parts of the rules but noted that others seemed too vague and contained too many loopholes. Some in the industry argue that the EU rules will benefit Chinese and American companies that do not have to comply with similar restrictions.

Source: the Wall Street Journal.