The European Union unveiled strict regulations on Wednesday, April 21, to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.
The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.
Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.
The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules would have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies and judge creditworthiness. Governments have used versions of the technology in criminal justice and in the allocation of public services like income support.
Companies that violate the new regulations, which could take several years to move through the European Union policymaking process, could face fines of up to 6 percent of global sales.
“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”
The European Union regulations would require companies offering artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight in how the systems are created and used.
Some applications, like chatbots that provide human-like conversation in customer service situations, and software that creates hard-to-detect manipulated images like “deepfakes,” would have to make clear to users that what they were seeing was computer-generated.
For years, the European Union has been the world’s most aggressive watchdog of the technology industry, with other nations often using its policies as blueprints. The bloc has already enacted the world’s most far-reaching data privacy regulations and is debating additional antitrust and content-moderation laws.
The post "Europe Proposes Strict Rules for Artificial Intelligence" was written by Adam Satariano for the New York Times.