Students organized a peaceful rally June 19, 2020, at Marshall Park to encourage youth to use their voices to fight against racial injustice. People of all ages attended to support Black Lives Matter. CMG file photo

Asked what his neighborhood is like, Ash Williams politely declines to answer.

The adjunct faculty member at UNC Charlotte is a community organizer active in several protest movements, including the trans rights movement, the environmental defense movement and the Black Lives Matter movement.  Williams says he's been targeted and surveilled at his home before.

(The Charlotte-Mecklenburg Police Department declined to confirm or deny whether it had surveilled Williams, citing state law on the privacy of investigations.)

Given what he says is a history of surveillance, it's no surprise to Williams that the CMPD uses the services of a vendor accused of scrutinizing the social media feeds of activists and protesters.

The police department has an agreement with Dataminr, Inc., totaling $120,000 per year, according to documents received by South Charlotte Weekly through a recent public records request.    

The New York company touts its ability to use artificial intelligence algorithms to scan Twitter and other networks, flagging opportunities for investors, breaking news for media outlets and dangerous situations for police.

But an October 2020 investigation by The Intercept reported that in many cases, Dataminr's police-bound alerts were "nothing more than garden-variety racial profiling, powered not primarily by artificial intelligence but by a small army of human analysts conducting endless keyword searches."  

Earlier that year, the outlet reported on police organizations using Dataminr to examine protesters' tweets after the death of George Floyd.

The process often lacks transparency. The Brennan Center for Justice states that monitoring of social media by police forces puts privacy and free speech at risk, unfairly targets communities of color and can lead to arrests of people "on the basis of misinterpreted posts and associations." The organization states that many police departments that use social media monitoring tools have been slow to release the policies governing their use, which the center says increases "the danger of misuse and abuse."

Teenagers and tweens, for example, have been placed on lists of potential gang members based solely on posturing they've done on Twitter, according to the anti-carceral publication The Appeal. The resulting arrests and extended jailings of young people can interrupt schooling and often rest on flimsy evidence that doesn't hold up in court.

Asked whether there was a minimum age below which a Twitter user would not be flagged as a potential gang member, a spokesman for a public relations company representing Dataminr did not respond to several emails sent more than a week before this story was filed.

The Intercept's investigation found claims by Dataminr employees that keyword searches were used to target specific neighborhoods and even specific housing developments – largely populated by people of color – for surveillance, while ignoring white areas.

One employee told the outlet that the south side of Chicago was targeted for crime alerts, while employees were told to focus on entertainment alerts when it came to content from the city's whiter northern neighborhoods.

The email thread between South Charlotte Weekly and Dataminr's PR representative also included a question about the racial composition of Dataminr's staff, which went unanswered.

Adventures abroad

Dataminr's interest in providing intelligence for profit isn't limited to domestic institutions. In 2017, the tech publication The Verge reported that a former staffer for Hillary Clinton, acting as a consultant for the company, reached out to another former State Department official about setting up meetings between the company and foreign governments.  

"Azerbaijan would be AWESOME," the consultant wrote in an email, in reference to a potential deal with a country accused of jailing journalists and activists. Providing information to the monarchy of Saudi Arabia, however, was a bridge too far, as company members "[weren't] sure what the Saudis would do with it."  

The Saudi monarchy has recently received criticism for the role of its security forces in the killing and dismemberment of Washington Post journalist Jamal Khashoggi.

Closer to home, a top privacy expert says that tools like Dataminr and other forms of social media surveillance create a real risk of self-censorship in the U.S. for activists, civil liberties workers, nonprofits and "other interested parties" such as journalists and academics who tweet about race and criminal justice. Matthew Guariglia of the Electronic Frontier Foundation, which focuses on issues of digital privacy, told South Charlotte Weekly in an email that the FBI's recent arrest of a BLM-aligned man over tweets showing a "path to radicalization" sets a dangerous precedent.

"'Being on a path' and 'radicalization' are both subjective terms that are given more credence if the people sitting behind the computer choose to believe that some political principals are more violent than others," Guariglia said.  

"This is easily skewed by the politics within police departments and re-iterates the threat of reprisal and retribution surveillance creates when used against activists seeking to change policing in the U.S."

The FBI was an early client of Dataminr, and the CIA was an early investor in the company.  

While national law enforcement agencies have recently made the case for expanded powers by citing the Jan. 6 invasion of the nation's Capitol by violent Trump supporters, past actions suggest that those powers are likely to be used against other groups as well.

For example, the FBI has in the past put significant resources into surveilling and infiltrating ecological and animal rights groups, according to The Intercept, at the expense of investigations into religious terrorist groups and violent white supremacist organizations.

Williams, the Charlotte activist, is worried that arresting someone merely for showing tendencies in social media posts will normalize more arrests of Black people under similar circumstances.  

"Certain people will always be more negatively impacted by such technologies," Williams said. "Those people are Black and brown people, poor people, immigrant people, disabled people, trans people, and people who are otherwise deemed as 'others'."

One tool in a growing arsenal

The use of Dataminr by police in Charlotte does not occur in a vacuum.

Recent decades have seen the CMPD adding significantly to its surveillance capabilities.  

According to the Atlas of Surveillance, a crowdsourced project run by the Electronic Frontier Foundation that tracks police technology, the CMPD first acquired a Stingray cell-site simulator from Harris Corp. in 2006, then upgraded the technology in 2012.

Cell-site simulators are portable devices that mimic cellular towers, forcing nearby phones to connect to them so that police can intercept and download data. Police departments across the country have been extremely secretive about their use of the technology, going so far as to drop key evidence in order to avoid explaining its use in court, Ars Technica reported.

According to the Atlas of Surveillance, the CMPD also has over 100 automated license plate readers and signed an agreement with Amazon's home surveillance equipment company, Ring, in 2019 to gain special access to the company's Neighbors app.

And the department runs a "real-time crime center" that has been active since 2013, with access to over 1,000 cameras across the city.

All of that, in combination with existing racial disparities in policing and the addition of tools like Dataminr, can add up to an environment that some argue is ripe for abuse – what the Rev. Kojo Nantambu called “a rash of reports” of police harassment after the killing of Jonathan Ferrell.

Nantambu, an NAACP leader, described such alleged practices to Al Jazeera in 2013, including police following Black teenagers home while they walked through their neighborhoods.

“The parents were so intimidated that they didn’t want to come out and meet so we could talk about it and get an investigation,” Nantambu told the outlet at the time.

Asked when Dataminr first approached Charlotte about a contract, a city representative said via email that it likely happened in 2019.

"We believe that Dataminr first came up in the Fall of 2019 in meetings with the FBI in preparation for [the Republican National Convention]," the representative said. "We requested a presentation which occurred in early 2020." 

A yearly contract was signed that June.

In a statement that accompanied the public documents released for this article, the city emphasized that Dataminr looks only at publicly available information sources when compiling its alerts.

"This software provides detection, distillation, and correlation of information on specific events across multiple publicly available information platforms in real time, ensuring continued situational awareness and a comprehensive view of events looking only at publicly available tweets" the release said.

A representative for Dataminr told The Intercept on the record that "97% of our alerts are generated purely by AI without any human involvement." The representative was not, however, willing to tell the outlet in October what percentage of police-bound alerts – as opposed to investment alerts and alerts to news organizations – were algorithmically generated.

South Charlotte Weekly's March attempts to get clarification on the matter also went unanswered for more than a week.

Guariglia, the privacy advocate, says that even if Dataminr's alerts to police are algorithmically generated, they can still reflect the biases of the people who programmed them.

"Algorithms are generated by human beings and create a system of surveillance and data analysis that have the same biases as the humans that designed it – it just happened to be automated," Guariglia said.  

"An automated system of evaluating threats, like having people who don’t understand the culture or context of the people they are surveilling, is likely to make the same mistakes."

Patrick Maynard's staff or freelance reporting has been featured in nearly a dozen publications, including VICE, Sludge, the Detroit Metro Times and the Baltimore Sun. Follow him on Twitter: @patrickmaynard.
