Recently, the city of San Francisco in the United States passed a set of amendments known as the “Stop Secret Surveillance” ordinance. The ordinance emphasizes: “Face recognition technology is far more likely to infringe on civil rights and civil liberties than its claimed benefits warrant; this technology will exacerbate racial injustice and threaten our ability to live free of continuous government surveillance.”
That means San Francisco will become the first city in U.S. history to ban face recognition software.
Face recognition is a relatively new and widely used technology among today’s various artificial intelligence technologies, with significant applications in biometric identification, security, criminal investigation, and everyday life. Judging from current applications, the technology is increasingly used in shops, businesses, schools, and other settings. In the United States, however, the most avid paying users of facial recognition technology are government and law enforcement agencies.
That a city has now banned face recognition software can be read, at the very least, as a sign that the rules and techniques for managing it remain immature at a time when AI technology is developing at a rapid pace.
San Francisco’s biggest reason for banning face recognition is that it places citizens under constant government surveillance, violates civil rights and freedoms, and can reinforce racial discrimination.
These concerns fall within the domain of AI ethics. Human society cannot develop technology merely for technology’s sake, let alone violate or even undermine society’s recognized ethical bottom line on the grounds of developing and using technology.
Controversy over the technology has also raged across the United States, with voices from ordinary citizens to federal and state lawmakers calling for a cooling-off on face recognition technology and for early legislation to regulate it.
In fact, many people share such concerns about AI technology, yet technology companies, in their feverish pursuit of AI, seem at risk of losing control.
To this end, the European Union urgently published AI ethics guidelines in early April, requiring companies and government agencies to follow seven principles when developing AI. The first of these is that AI must not trample on human autonomy: people should not be manipulated or coerced by AI systems, and they should be able to intervene in or monitor every decision that AI software makes. San Francisco’s main reasons for banning face recognition are consistent with this guideline.
Face recognition, like other AI technologies, remains problematic.
First, face recognition technology is not reliable. Face recognition software from Amazon and others has misidentified African Americans, even falsely matching 28 members of Congress to criminal suspects. Joy Buolamwini’s team at the U.S.-based Georgia Institute of Technology analysed 3,500 photos of people with different skin tones and found that the accuracy of autonomous-driving recognition systems dropped by an average of 5% when identifying people with dark skin. Commercial face-recognition software introduced by Microsoft, IBM, Facebook, and other large companies has likewise performed poorly at recognizing women and people of color.
Face recognition technology can also be mimicked and used for forgery. A technology called GAN (generative adversarial network) can replace anyone’s face with another person’s and still pass verification. A GAN is a kind of generative model that can automatically generate and tamper with images, making it easy for fakes and fraud to slip past face recognition systems.
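The adversarial idea behind a GAN can be illustrated with a toy sketch (a hypothetical one-dimensional example, not a real image model): a generator turns random noise into samples, a discriminator scores samples as real or fake, and the two are trained against each other until the fakes become hard to tell apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, theta):
    # Toy linear generator: scale and shift random noise.
    a, b = theta
    return a * z + b

def discriminator(x, w):
    # Toy logistic discriminator: probability that x is "real".
    w0, w1 = w
    return 1.0 / (1.0 + np.exp(-(w0 + w1 * x)))

# "Real" data: samples around 4; the generator starts far away, near 0.
real = rng.normal(4.0, 1.0, size=256)
theta = np.array([1.0, 0.0])   # generator parameters (a, b)
w = np.array([0.0, 0.1])       # discriminator parameters (w0, w1)
lr = 0.05

for step in range(200):
    z = rng.normal(size=256)
    fake = generator(z, theta)

    # Discriminator step: push p toward 1 on real data, 0 on fakes
    # (one logistic-regression gradient step per batch).
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = discriminator(x, w)
        w += lr * np.array([np.mean(label - p),
                            np.mean((label - p) * x)])

    # Generator step: increase the discriminator's "real" score on fakes.
    # Chain rule: d sigmoid/dx = p(1-p)*w1, so d log p/dx = (1-p)*w1.
    p = discriminator(fake, w)
    common = (1.0 - p) * w[1]
    theta += lr * np.array([np.mean(common * z), np.mean(common)])

# After training, generated samples have drifted toward the real data.
print(float(np.mean(generator(rng.normal(size=1000), theta))))
```

Real GANs use deep networks over images rather than a two-parameter line, but the tug-of-war is the same, which is why their outputs can become convincing enough to fool a recognition system.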
Statistics show that the Internet already carries a large amount of synthesized content, such as synthetic voices, generated images, and AI-synthesized portraits of people who do not exist. Even before face recognition has been widely deployed, its reliability has thus run into an unprecedented crisis.
If facial recognition is not restricted and regulated, it threatens privacy and freedom far more than one might think.
Facial recognition depends mainly on facial information, a unique form of biometric data; fingerprints and iris patterns are similar. Other personal credentials, such as passwords, can be replaced, but personal biometric data cannot be changed once compromised: a leak is a leak for life. Beyond disrupting people’s lives, it leaves them in permanent insecurity.
The unreliability of face recognition also lies in the fact that its insecurity is long-lasting. The EU’s seven AI ethics guidelines require that personal data collected by AI systems be kept secure and privacy-protecting: it should not be accessible to just anyone, nor easily stolen.
However, current face recognition technology is mostly owned by companies and governments as proprietary technology, and it lacks strict supervision by a fair and impartial third party.
In this sense, San Francisco’s ban on AI face recognition is legislation passed after weighing the pros and cons of the technology. The U.S. Senate is now considering a bill to regulate the commercial use of face recognition software, which would likewise help ensure the safe and fair use of AI technology from a regulatory perspective.