On Facial Recognition, AI and Regulations
Last week, in his blog post “Facial recognition: It’s time for action,” Microsoft President Brad Smith said:
“We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.
"Governments and the tech sector both play a vital role in ensuring that facial recognition technology creates broad societal benefits while curbing the risk of abuse. While many of the issues are becoming increasingly clear, the technology is young.
"To protect against the use of facial recognition to encroach on democratic freedoms, legislation should permit law enforcement agencies to use facial recognition to engage in ongoing surveillance of specified individuals in public spaces only when: a court order has been obtained to permit the use of facial recognition services for this monitoring; or where there is an emergency involving imminent danger or risk of death or serious physical injury to a person."
In the “AI Now Report 2018” (Dec. 2018), researchers from Google, Microsoft and New York University said:
“This year, we have seen AI amplify large-scale surveillance through techniques that analyze video, audio, images, and social media content across entire populations and identify and target individuals and groups.
"While researchers and advocates have long warned about the dangers of mass data collection and surveillance, AI raises the stakes in three areas: automation, scale of analysis, and predictive capacity. Specifically, AI systems allow automation of surveillance capabilities far beyond the limits of human review and hand-coded analytics. Thus, they can serve to further centralize these capabilities in the hands of a small number of actors. These systems also exponentially scale analysis and tracking across large quantities of data, attempting to make connections and inferences that would have been difficult or impossible before their introduction."
In the document “Privacy Impact Assessment for the Facial Recognition Pilot” (Nov. 26, 2018), the U.S. Department of Homeland Security said:
“U.S. Secret Service will operate a Facial Recognition Pilot ... The collection of volunteer subject data will assist in testing the ability of facial recognition technology to identify known individuals and to determine if biometric technology can be incorporated into the continuously evolving security plan at the White House Complex."
Addressing facial recognition issues will require U.S. policy makers to 1) develop a better understanding of the technology, 2) assess the priority of this topic for their constituents and political donors, and 3) clarify what “appropriate use” means for law enforcement agencies, as well as for private and public sector participants.
The European Union will continue to increase privacy regulation (for facial recognition, AI, data, etc.). The U.S. will debate economic/privacy trade-offs and will likely take action only after “significant harm” occurs. China will drive technology adoption based on its value to society and the state in business, consumer and governmental use cases.
Facial recognition, AI and other technologies (social media, genetic engineering, Internet of Things, etc.) will drive significant innovation. Managing their associated risks requires oversight by all parties involved (developers, users, policy makers, etc.).