New Yorkers in high stop-and-frisk areas subject to more facial recognition tech

New Yorkers who live in areas where controversial stop-and-frisk searches happen most frequently are also more likely to be surveilled by facial recognition technology, according to research by Amnesty International and partner organizations.

The research also showed that in the city's boroughs of Brooklyn, the Bronx and Queens there was a direct correlation between the proportion of non-white residents and the concentration of controversial facial recognition technology.

“Our analysis shows that the NYPD’s use of facial recognition technology helps to reinforce discriminatory policing against minority communities in New York City,” said Matt Mahmoudi, artificial intelligence and human rights researcher at Amnesty International.

The research is part of Ban the Scan, a global campaign against facial recognition technology that is investigating the increasing use of surveillance by the New York police department (NYPD).

Through the Decode NYC Surveillance project, thousands of digital volunteers mapped more than 25,500 CCTV cameras across New York City. Data scientists and researchers from Amnesty International then compared the data on camera placement with police stop-and-frisk statistics.

“We have long known that stop-and-frisk in New York is a racist policing tactic. We now know that the communities most targeted with stop-and-frisk are also at greater risk of discriminatory policing through invasive surveillance,” said Mahmoudi.

While the NYPD has been using facial recognition technology for more than a decade, its use has not been without controversy.

According to Politico, the NYPD has faced at least six lawsuits over its facial recognition technology use. Additionally, in June 2020, the New York city council mandated that the NYPD publicly disclose information on its surveillance efforts.

Despite local and national pushback against facial recognition technology, its use has been fully supported by New York's mayor, Eric Adams, as a tool to investigate crimes.

“If you’re on Facebook, Instagram, Twitter – no matter what, they can see and identify who you are without violating the rights of people,” said Adams in January while discussing a new plan to solve gun violence in New York City. “It’s going to be used for investigatory purposes.”

The use of facial recognition technology has previously led to false arrests, all of them of Black men.

In 2019, Michael Oliver, a 25-year-old Black man from Detroit, was wrongly identified by facial recognition technology and arrested for grabbing a teacher’s cellphone and damaging it as the teacher was recording a fight among students.

Oliver was also wrongly identified by the teacher in a photo lineup.

In 2020, Robert Julian-Borchak Williams was falsely arrested by Detroit police after facial recognition technology incorrectly identified him as a shoplifting suspect.

In the same year, Nijeer Parks, 33, spent 10 days in jail and paid $5,000 to defend himself after being falsely accused of stealing candy and trying to hit a police officer with a car in Woodbridge, New Jersey.

Parks, who was wrongly identified by facial recognition technology, was 30 miles away when the incident took place. He is now suing the police department, the city of Woodbridge and the prosecutor on his case, according to the New York Times.
