Our Shared Responsibility: We Need Public Oversight of Surveillance Tech

There’s been an interesting debate brewing at the University of Miami: did campus police use facial recognition to identify students who protested the university’s reopening plan, which many students found inadequate for protecting their health? Regardless of what you believe happened, it’s worth noting that this kind of surveillance is not only possible but permissible. Surveillance can have a chilling effect, from students in Miami to protesters in Minneapolis, and the lack of surveillance oversight further underscores the threats to privacy and other civil liberties. Demanding transparency and oversight of facial recognition tech – and of surveillance more broadly – is our shared responsibility. As long as we leave technology oversight solely to technologists, comprehensive accountability measures will remain out of reach.

Facial recognition is a form of artificial intelligence (AI) in which computer models learn to detect patterns from huge datasets of photos. A trained model can then match an individual’s face against a database of known images, which makes it look, in theory, like a great tool for solving crimes: instead of invasive searches, facial recognition would offer a seamless, convenient way to screen large crowds, right?
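
To make the mechanics concrete, here is a minimal, purely illustrative sketch – not any vendor’s actual system – of the matching step at the heart of most facial recognition pipelines: a model converts each face photo into a numeric “embedding,” and identification simply means finding the stored embedding closest to the probe image, subject to a similarity threshold. Every name and number below is hypothetical.

```python
from typing import Optional

import numpy as np

# Hypothetical enrolled gallery: each person is represented by a face
# "embedding" -- a fixed-length vector produced by a trained neural network.
# Real systems use hundreds of dimensions; 4 are used here only for readability.
GALLERY = {
    "person_a": np.array([0.11, 0.92, 0.33, 0.41]),
    "person_b": np.array([0.87, 0.05, 0.62, 0.20]),
}


def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Similarity in [-1, 1]; higher means the two faces look more alike."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def identify(probe: np.ndarray, threshold: float = 0.95) -> Optional[str]:
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in GALLERY.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None


# A probe embedding extracted from, say, a CCTV frame or a protest photo.
probe = np.array([0.10, 0.90, 0.35, 0.40])
print(identify(probe))  # prints "person_a" if the similarity clears the threshold
```

That single threshold hides a policy decision: set it too permissively and innocent people get “matched,” and as the error rates discussed below show, those false matches do not fall evenly across demographic groups.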

The answer’s a resounding no.

Facial recognition practices are rife with problems. First, the technology is a prime example of human programmers’ biases being magnified and literally encoded into algorithmic outcomes. Among Silicon Valley software developers, being white and male puts you in the majority. Now look at the accuracy of facial recognition across demographic groups: while the error rate for light-skinned men was only 0.8%, the error rate for dark-skinned women was 34.7%.
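
Disparities like these only become visible when performance is evaluated separately for each demographic group rather than reported as a single headline number. Here is a small sketch of that kind of disaggregated bias audit; the records are invented for illustration and are not the data behind the figures above.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic group, whether the system was correct).
# Real audits use thousands of labeled images; these few rows are made up.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

# Tally errors and totals per group instead of one aggregate accuracy figure.
tally = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, correct in results:
    tally[group][0] += 0 if correct else 1
    tally[group][1] += 1

for group, (errors, total) in tally.items():
    print(f"{group}: error rate {errors / total:.0%} ({errors}/{total})")
```

A single aggregate accuracy figure would average these groups together and hide the gap, which is exactly why disaggregated, independently audited bias testing matters.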

In addition to these appalling disparities in performance, facial recognition poses significant threats to personal privacy, stripping away the anonymity people rightfully expect in public spaces. The mass surveillance of pedestrians on a city street fails to meet the principle of necessity and proportionality promulgated by the United Nations.

Academic research and human rights principles tell us that facial recognition technology is inaccurate and invasive. Put in the context of use by law enforcement, where a misidentification can mean a false arrest, the dangers come into even starker relief. These real concerns should be enough to dissuade companies from further developing this problematic technology for mass surveillance, right?

It’s stopping some companies, but not all – and that’s not enough. Companies like IBM have recently exited the facial recognition market, citing the technology’s threats to human rights and its exacerbation of racial disparities. Others continue selling their facial recognition tools to controversial agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).

The IBM CEO states that “vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularity [sic] when used in law enforcement, and that such bias testing is audited and reported.” I agree! We all have a shared responsibility. But I hesitate to take comfort in a corporate executive’s claim alone.

Sure, companies like IBM could set an example for the field, but even the most persuasive industry exemplar has no enforcement power over other companies’ development of facial recognition. The oversight needs to come from a public body, one untethered from corporate interests and one that represents the diversity of the communities impacted by the use – or misuse – of the technology.

We have experienced decades of increasing tech-militarization of American law enforcement agencies. Calls for scaling back the surveillance and military capacities of these agencies are met with efforts to build a federal-level policing commission that, among other deeply troubling intentions, plans to invest even more heavily in facial recognition technologies. All this despite the research above and findings that militarization had no detectable impact on violent crime or officer safety. Our communities are not safer as a result of this tech-militarization. In fact, they are made less safe by the presence of surveillance technology and military equipment that exacerbate racial disparities and are deployed without sufficiently informing the public, let alone giving people a voice in the decision.

What would community control of surveillance technologies and military equipment look like? It would look like formal analysis of new technologies’ impact prior to their acquisition; it would look like regular and public reporting of the technologies that law enforcement agencies use in their communities; it would look like appropriate storage and disposal of the data generated by these technologies.

To anyone who claims that technology oversight should be limited to technology experts, I would ask the following: how many of those expert overseers have been wrongfully targeted by facial recognition software? Community members deserve to be informed and to be heard before surveillance technologies are implemented, not merely to have their legitimate grievances aired in the aftermath.

With this in mind, I am eager to attend this week’s virtual Police Surveillance Town Hall on Oct 22 at 5pm CDT, hosted by Minneapolis City Council Member Steve Fletcher (Ward 3) and the ACLU of Minnesota. This event, open to the public, is an opportunity to hear from experts in tech and policy on this pressing issue – our shared responsibility, and our shared future, for tech that works for all of us.

For more information, check out this primer on facial recognition from the POSTME coalition.

