Exclusive: LAPD partnered with tech firm that enables secretive online spying

The Los Angeles police department pursued a contract with a controversial technology company whose software could enable police to use fake social media accounts to surveil civilians and which claims its algorithms can identify people who may commit crimes in the future.

A cache of internal LAPD documents obtained through public records requests by the Brennan Center for Justice, a non-profit organization, and shared with the Guardian, reveals that LAPD trialed social media surveillance software from the analytics company Voyager Labs in 2019.

Like many companies in this industry, Voyager Labs sells software that allows law enforcement to collect and analyze large troves of social media data to investigate crimes or monitor potential threats.

But the documents reveal that the company takes this surveillance a step further. In its sales pitch to LAPD about a potential long-term contract, Voyager said its software could collect data on a suspect’s online network and surveil the accounts of thousands of the suspect’s “friends”. It said its artificial intelligence could discern people’s motives and beliefs and identify social media users who are most “engaged in their hearts” about their ideologies. And it suggested its tools could allow agencies to conduct undercover monitoring using fake social media profiles.

LAPD’s trial with Voyager ended in November 2019. The records show the department continued to access some of the technology after the pilot period, and that LAPD and Voyager spent more than a year trying to finalize a formal contract. The documents show that LAPD has had ongoing conversations this year about a continued partnership, but a police spokesperson told the Guardian on Monday that the department was not currently using Voyager.

LAPD declined to respond to repeated, detailed inquiries about its trial with Voyager and its conversations about a potential long-term contract, as well as questions about its use of social media surveillance software.

The department has said in the past that social media can be critical for investigations and for “situational awareness” in monitoring major events for potential public safety issues. The city has seen large demonstrations in recent years, as well as clashes between activists over issues such as vaccination requirements.

But experts who reviewed the documents for the Guardian say they raise concerns about LAPD’s pursuit of ethically questionable software. The department’s surveillance technology could be violating civilians’ free speech and privacy rights, the experts say, while facilitating racial profiling.

The full scope of LAPD’s surveillance tech is unclear, though records suggest that the department has in recent years purchased or considered buying software from at least 10 companies that monitor social media. LAPD is often a trailblazer among US law enforcement agencies in adopting new technologies; its large budget and private foundation funding allow it to trial programs that other forces later adopt.

The concerns come after the Guardian recently revealed that LAPD has been directing officers to broadly collect social media information of civilians they stop and question, including people who are not cited or arrested, and amid growing scrutiny of the department’s surveillance and “predictive policing” practices.

Voyager – registered as Bionic 8 Analytics – gave LAPD some of its products on a trial basis in the summer and fall of 2019, the records show.

The documents don’t make clear what suite of tools LAPD had access to during the trial or whether the department used some of the company’s more controversial features. But a report the company produced for LAPD during this period says the department used the company’s software to investigate more than 500 social media profiles and to analyze thousands of messages. The redacted report said LAPD had used the software for “real-time tactical intelligence”; “protective intelligence” for “VIPs” in local government and in LAPD; and cases related to gangs, homicides and hate groups. An unnamed LAPD investigator was quoted in the report as saying Voyager helped the department “identify a few new targets”.

In internal messages about the pilot in 2019, LAPD said Voyager was especially helpful in analyzing social media data obtained through warrants and in investigating online networks of “street gangs”.

Communications between Voyager and LAPD after the trial ended, when the company was trying to sell the department on its products, reveal more about the firm’s purported capabilities, claims that experts said were bold and troubling.

In the spring of 2020, while pitching a contract, Voyager provided LAPD with case studies illustrating how the software had been used.

In one example, the company said its software had been used to investigate a Muslim Brotherhood activist in New York City who allegedly made a video encouraging people to intentionally spread Covid to Egyptian government officials in March 2020.

A Voyager representative told LAPD the investigation was conducted for “federal and local agencies” but did not name the clients or specify whether the threat had turned out to be legitimate. Voyager said its software had collected and analyzed thousands of the activist’s social media posts and scooped up data on 4,000 of his “friends”.

Voyager also said its software was able to discern which social media users caught up in the search were “top connections” of the activist and that it could determine who was based in New York and who worked for a government agency. The company claimed the software could also discern which of the accounts showed an “affinity” for “violent, radical ideologies” based on “indirect connections” to “extremist accounts”, appearing to refer to friends of friends.
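Voyager has not disclosed how its “indirect connections” analysis works. But the concept it describes, flagging an account based on its distance in a friend graph from a watchlisted account, can be illustrated with a minimal sketch. Everything below, from the account names to the two-hop cutoff, is hypothetical and for illustration only:

```python
# Illustrative sketch of "guilt by association" scoring via indirect
# connections. All account names and the hop cutoff are invented;
# Voyager's actual methods are proprietary and not public.
from collections import deque

def hops_to_flagged(graph, start, flagged, max_hops=2):
    """Breadth-first search: distance from `start` to the nearest
    account on a `flagged` watchlist, up to `max_hops` hops away."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        account, dist = queue.popleft()
        if account in flagged:
            return dist
        if dist == max_hops:
            continue
        for friend in graph.get(account, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # no flagged account within range

# Hypothetical friend graph: user_a has never interacted with the
# watchlisted account, yet sits two hops away through a mutual friend.
graph = {
    "user_a": ["user_b"],
    "user_b": ["user_a", "flagged_user"],
    "flagged_user": ["user_b"],
}
print(hops_to_flagged(graph, "user_a", {"flagged_user"}))  # -> 2
```

However the real system works, the pattern the documents describe is the one the sketch makes plain: an account can be labeled without posting anything at all, purely on the strength of a friend of a friend.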

In another presentation, Voyager suggested that its software could not only collect large amounts of social media data but also use its “artificial intelligence” to discern people’s beliefs.

Voyager showed LAPD how its software could have been used to investigate an alleged terrorist attack, analyzing the case of Adam Alsahli – a man killed after he opened fire at the Corpus Christi naval base in May 2020. Pointing to the man’s social media activity, Voyager claimed its AI could ascertain people’s “affinity for Islamic fundamentalism or extremism”. The company cited the shooter’s “pictures with Islamic themes” and said his Instagram accounts showed “his pride in and identification with his Arab heritage”. The company said its AI was so effective that its results, produced in minutes, did not “require any intervention or assessments by an analyst or investigator”.

In an October 2020 proposal document, Voyager also said its software could conduct a “sentiment analysis” to discern who was most emotionally invested and had the “passion needed to act on their beliefs”.
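Voyager has not published how its “sentiment analysis” works either. For illustration only, here is a minimal sketch of the kind of lexicon-based scorer that such claims often reduce to in practice; the word list, weights and posts are all invented:

```python
# Illustrative sketch of crude lexicon-based "intensity" scoring.
# The lexicon and weights are invented for this example and bear no
# relation to Voyager's proprietary system.
INTENSITY_LEXICON = {"always": 1.0, "must": 1.5, "never": 1.0, "!": 0.5}

def intensity_score(post: str) -> float:
    """Sum crude 'intensity' weights over the tokens in a post."""
    tokens = post.lower().replace("!", " ! ").split()
    return sum(INTENSITY_LEXICON.get(token, 0.0) for token in tokens)

# Ranking posts by such a score is roughly what a phrase like
# "passion needed to act on their beliefs" would reduce to here.
posts = ["We must act!", "Interesting article."]
for post in sorted(posts, key=intensity_score, reverse=True):
    print(f"{intensity_score(post):.1f}  {post}")
```

The leap from a numeric score like this to a judgment about who has the “passion needed to act” is exactly the kind of claim the experts quoted below dispute.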

Voyager’s monitoring of broad online networks, and its claims about AI, raised red flags for experts.

“There’s a basic ‘guilt by association’ that Voyager seems to really endorse,” said Rachel Levinson-Waldman, a deputy director at the Brennan Center, about Voyager’s New York City case study. “This notion that you can be painted with the ideology of people that you’re not even directly connected to is really disturbing.”

The naval base shooting example was deeply troubling, said Meredith Broussard, a New York University data journalism professor and expert on AI, who reviewed the records for the Guardian. “Just because you have an affinity for Islam does not mean you’re a criminal or a terrorist. That is insulting and racist. It’s religious bigotry embedded in code.”

“This is hyperbolic AI marketing. The more they brag, the less I believe them,” said Cathy O’Neil, a data scientist and algorithmic auditor, arguing that the firm’s broad claims were not based in legitimate science and were unachievable: “They’re saying, ‘We can see if somebody has criminal intent.’ No, you can’t. Even people who commit crimes can’t always tell they have criminal intent.”

The consequences of this pseudoscience can be dire, she added: “Claims of accuracy don’t have to actually be true for the algorithm to be used as a weapon.”

The documents show Voyager and LAPD officers also discussed some of the company’s most controversial proposals. In an October 2020 letter to LAPD outlining details of a potential contract, Voyager claimed its social media monitoring was “traceless”, saying that the social media companies themselves would not be able to tell that LAPD was behind the surveillance.

In an earlier report to LAPD in 2019, Voyager said it was developing software to spy on WhatsApp groups using an “active persona mechanism”, or “avatar”, suggesting that police would create a fake account to collect information from a group.

In one September 2019 email to a Voyager sales representative, an LAPD technology official said the feature that allows police to “log in with fake accounts that are already friended with the target subject” was a “great function”, but added that the department was not heading in the direction of using that service.

It’s unclear if LAPD ever used the fake account feature. In another September 2019 email, an LAPD official in the robbery and homicide division told Voyager that the “avatars” function was a “need-to-have” feature. And Voyager said in one document that some LAPD staffers piloting its services had requested the “active persona” feature for Facebook, Instagram and Telegram.

This feature could violate the policies of Facebook, which prohibits fake accounts and has previously deactivated accounts it determined belonged to police officers impersonating civilians. A Facebook spokesperson said members of law enforcement, like all users, were required to use their real names on their profiles.

“As stated in our terms of services, misrepresentations and impersonations are not allowed on our services and we take action when we find violating activity,” a Facebook spokesperson, Sally Aldous, said in a statement.

Using fake accounts to monitor activists online was equivalent to undercover spying, civil rights advocates said.

LAPD has policies for “online undercover activity” that establish some restrictions on the tactic, including a requirement for special approval from a supervisor when police use a fake account to communicate with someone. There is less oversight when an account is created to examine “trends” or for “conducting research”.

John Hamasaki, a criminal defense lawyer and member of the San Francisco police commission, said some police departments were updating policies to restrict the use of fake accounts in an effort to protect free speech. In San Francisco, he said, police would be barred from using a company like Voyager for broad surveillance of online networks. The type of predictive policing software that Voyager advertises is also strictly prohibited in Oakland, according to the city’s privacy commission.

“The problem with these types of surveillance operations is they’re often not based on reasonable suspicion or probable cause,” he said. “Instead, it is casting a broad net.”

Levinson-Waldman of the Brennan Center said it was unclear how widespread this kind of surveillance was in police departments across the US. She noted that while law enforcement departments were increasingly relying on social media in investigations, there was often little transparency.

LAPD and the New York police department have two of the largest police budgets in the country and have a long history of piloting cutting-edge technology that ends up being ineffective or harmful, said Broussard, the author of Artificial Unintelligence: How Computers Misunderstand the World.

Even when LAPD or NYPD stops using certain products, the companies end up bringing their tech elsewhere, she said: “The companies still want to sell software, so they go after smaller police forces that have even less capacity to evaluate the efficacy of these snake-oil software systems.”

Recent reporting has shown how LAPD has used surveillance technology similar to Voyager’s to monitor Black Lives Matter organizing, and the department also recently said it was pursuing this kind of technology for “information gathering” in a report about reforms since the George Floyd protests. LAPD did not respond to questions from the Guardian about whether Voyager was used for monitoring protesters.

In a report in September of this year, the department said it was “currently using” Voyager software and seeking $450,000 to purchase additional Voyager technology. But an LAPD spokesperson said this week that the department was not using the company’s software at the moment. She did not respond to questions about when LAPD ceased using the services and if the department was still pursuing a partnership.

Voyager declined to comment on its work with LAPD and did not answer specific questions about its services. A spokesperson, Lital Carter Rosenne, said its clients were responsible for building databases and running the software, adding: “As a company, we follow the laws of all the countries in which we do business. We also have confidence that those with whom we do business are law-abiding public and private organizations.”

LA activists said the revelations raised serious concerns about how the tech could be used against groups that protest LAPD. “I’m really astounded that not only is LAPD using these companies, but that there are these tactics which feel very much like digital infiltration,” said Dr Melina Abdullah, a co-founder of Black Lives Matter LA. “It demonstrates that our fears are true.”

Abdullah, who had not heard of Voyager Labs, said she was particularly disturbed to learn about potential monitoring of WhatsApp groups: “We know that our public posts are monitored. But when they’re engaging in additional digging into private posts, that is supposed to be a more secure way of communicating.”
