This is part of a BuzzFeed News package on schools and social media surveillance.
On July 29, a 13-year-old who used to tweet as @valkyries_queen was watching New Girl on Netflix when the plot really got to her.
“nick if you dont fall in love with jess again im gonna kill you,” she posted from her pseudonymous account, named after the character Valkyrie, portrayed by her crush, Tessa Thompson, in the Marvel movies.
@valkyries_queen had about 220 followers. Her tweet, sent from where she lives in Washington state, didn’t get any replies, any retweets, any attention, really — except from a school district more than 2,000 miles away, in Texas.
The tweet triggered the algorithms powering an automated monitoring system called Social Sentinel, which scans social media for language or images suggesting a potential threat to school safety. The system determines which schools the users may be associated with and notes locations they list in their profiles.
Social Sentinel sent @valkyries_queen’s tweet to at least five officials in the Katy Independent School District, just outside the Houston metropolitan area. @valkyries_queen had moved away from Katy, Texas, where she used to go to school, about three months earlier, but she still followed her old school’s Twitter account.
Five of her tweets were flagged to school officials as potential safety threats that July. In one post, she joked about wanting to kill herself; in another, about killing her father because he made a racist joke. In a third, she corrected someone about Valkyrie’s sexual orientation: “If I see one more comment calling Valkyrie a lesbian I'm going to KILL ALL OF YOU. MY GIRL IS BISEXUAL AND IF YOU DONT SHUT UP RN.”
@valkyries_queen told BuzzFeed News she wrote all these tweets in jest. She treats her account, as many teens do, as a kind of online journal. “It’s just the fandom,” she said about the New Girl tweet. “They broke up and I was being upset ... like any teen would about a show.”
And she’s baffled why school administrations think monitoring her tweets will help anything. “I think that they are just wasting time,” she told BuzzFeed News, “when they could work on real solutions. They think that they are helping when they could go to the government that could prevent kids from dying, like supporting gun laws.”
The fact that @valkyries_queen’s tweets got caught in a dragnet is a symptom of a particularly American problem. No other country has a similar epidemic of school shootings. School administrators are under pressure to try anything to prevent the next shooter from murdering their students. And because mass shooters sometimes post warnings, trolling messages, racist screeds, and manifestos before they open fire, schools are increasingly turning to automated digital surveillance.
Some districts have purchased software that monitors schools’ own email systems, chats, laptops, and cloud drives for warning signs that students are in danger of harming themselves or others. Social Sentinel is the leading company offering schools a more expansive solution: The firm says it casts a wide net across public posts on social media looking for threats from adults as well as students.
“A threat can come from anyone, anywhere, any time. Let us be your early warning system,” reads the cover of a booklet sent to potential customers.
But automated algorithms easily confuse a joke or slang with a potential shooter’s threat. And a flag has consequences for students: administrators may start asking questions like, Why did you say you wanted to “kill” your dad?
The founder and CEO of Social Sentinel, based in Burlington, Vermont, is Gary Margolis, a former University of Vermont chief of police. The company, founded in January 2014, has raised at least $12 million in venture capital so far, according to PitchBook, which tracks startup investments.
“Our children are sharing all kinds of things digitally, and included in that are potential acts of harm,” Margolis told BuzzFeed News. “We recognized that in the absence of a company like us to figure that out, there are a whole bunch of people and children who communicate nearly 100% with these mediums.”
He added, “We set out to build a tool that is respectful of individual rights. Without us, the folks who are responsible for safety and security in a school [are] missing an important piece of the puzzle.”
A growing number of school districts have signed up — at least 130 have spent a total of more than $2.5 million on Social Sentinel since August 2014, according to a BuzzFeed News analysis of the GovSpend database. The biggest spike in orders came in the summer of 2018, following shootings in the preceding months at Marjory Stoneman Douglas High School in Parkland, Florida, and Santa Fe High School in Texas.
To get a better idea of what schools are getting for their money, BuzzFeed News submitted public records requests to more than 40 school districts, asking for the alerts they received from Social Sentinel.
Many of them denied our requests, but we received more than 1,800 alerts sent to administrators from eight school districts between May 2017 and September 2019 for which we were able to read the text or view the images in posts that were flagged. For more than 1,200 alerts, we could see which social media platform the flagged posts had appeared on, and for 151 alerts sent since August, we obtained emails with links to Social Sentinel webpages that revealed why the posts had been associated with the schools.
Our analysis of these alerts suggests that while Social Sentinel may provide schools with nuggets of useful intelligence, they have come amid a tsunami of false positives — posts that were flagged as potential threats but simply weren’t. And the system seems largely blind to social media platforms other than Twitter, missing potential threats posted on other platforms popular with teens.
Social Sentinel relies on algorithms to detect language or images that suggest a threat of violence or self-harm.
“It’s part of a total strategy, and not intended to be the only tool needed to prevent violence or identify threats — nor is it marketed that way,” Social Sentinel spokesperson Alison Miley said in written answers to questions from BuzzFeed News. “Further, we don’t focus solely on potential or active shooters; we more broadly are looking at mental health issues, wellness, and more. On numerous occasions we have identified situations that required attention.”
Miley said that the company has tweaked its algorithms to reduce the number of false positives and trained clients on what to expect when they sign up. “In this past year alone, we have advanced our algorithms significantly,” she said.
BuzzFeed News found that posts that were flagged by Social Sentinel were much more likely to contain words including “kill,” “school,” “murder,” and “shooting” than posts from the same users that didn’t trigger an alert.
But it’s hard to train algorithms to understand hyperbole, irony, slang, and context. Social Sentinel has appeared to struggle to tell the difference between a meal described as “bomb” and an explosive device, and to recognize that shooting in basketball isn’t a safety concern.
In August, Leah Breault, a 21-year-old college student in Winter Park, Florida, tweeted an inside joke that was flagged to school administrators in Seminole County, Florida.
Breault told BuzzFeed News she wasn’t surprised that an algorithm didn’t get her peer group’s dark sense of humor. “School shootings, climate change, the political madness right now … we just take the amount of bad news that we’ve been given, basically the news that the world is ending, and are trying to roll with it,” she said.
“I read 1st day of school and my vibe was shot,” tweeted Kyle, 18, in late July. Social Sentinel flagged the post to Katy school administrators.
“I don’t think a software can detect slang and other forms of communication,” he told BuzzFeed News.
Miley said that Social Sentinel’s algorithms do look for slang and context, but err on the side of caution in what is flagged to school officials. “Our model is conservative because we don’t want to miss anything,” she said.
Caught in the Net
Most of the posts that Social Sentinel flagged seemed benign. One tweet, from an elementary school basketball coach, was among those flagged as a potential threat to Fairfield City School District in Ohio. (The coach didn’t return a request for comment.)
More problematic for school safety officials are posts containing violent language where it’s hard for algorithms and humans alike to determine whether a threat is credible. One Twitter user triggered repeated alerts to Flagler County Public Schools in Florida over several weeks in late 2018, tweeting about killing, rape, and stabbing — sometimes nonsensically. One post said: “Here you kill me first and then after I die I kill you.”
The most obviously disturbing post among the hundreds that BuzzFeed News reviewed was also sent to Flagler in December 2018. A user whose screen name was redacted by the district tweeted: “i’m going to kill myself. sooner or later it will happen. i promise that.”
Flagler County Public Schools said it was no longer using Social Sentinel and wouldn’t comment on its security measures.
It was the Santa Fe shooting that prompted the Clear Creek Independent School District, southeast of Houston, to sign up for Social Sentinel. “At the time, we were in a very heightened state,” Elaina Polsen, the district’s chief communications officer, told BuzzFeed News.
But its school safety officers have found themselves in the difficult position of investigating and reporting potential cries for help, similar to the redacted Flagler alert, from adults who had no connection to the district’s schools.
“We were committing staff time into it, just because we weren’t going to ignore them,” Polsen said. In a year of using Social Sentinel, which cost the district more than $64,000, it didn’t receive a single alert highlighting a credible threat.
“There was nothing in the data we received that was actionable on our end,” Polsen said. “We would equate it to chasing ghosts.” Clear Creek told Social Sentinel it was canceling its contract in June.
Some school districts told BuzzFeed News that Social Sentinel had drawn their attention to students with suicidal thoughts.
“The day-to-day usefulness of Social Sentinel is very, very minimal,” said Chris Wynn, director of security at a school district in California. “But within our first month of having Social Sentinel in place, we had a suicidal student post where we intervened, and I’m pretty sure we would not have caught that without it.”
Wynn had been researching social media monitoring software for months before entering a contract with Social Sentinel.
“There are probably other things that we do that are more invasive than social media monitoring,” Wynn said. “For example, we do random metal detector searches.”
He sees it as one of many tools that are helpful in protecting students — both from outside threats and potentially from themselves. “Most of what we get is not shooter or security threats. It’s self-harm. It’s kids with low self-esteem,” he said.
Anna Nolin, superintendent of Natick Public Schools in Massachusetts, told BuzzFeed News that Social Sentinel alerts kept officials abreast of two fast-moving incidents earlier this year. In March, an 18-year-old man drove through the Natick High School campus, shouting racial slurs and firing a paintball gun. Students tweeted about it immediately. The next month, the school was briefly put on lockdown after someone contacted a veterans suicide hotline falsely claiming to be at Natick High School with a gun, planning to shoot himself. The hoaxer, who was a student, also gave the cellphone number of a girl at the school.
“The Twitter feed is faster than our emergency communications,” Nolin told BuzzFeed News. During the second incident, she could see from Social Sentinel that parents were reacting to rumors about a shooter on the school tennis court and tweeting at their kids to ignore teachers’ instructions to move to assembly points outside. She called the principal, who was able to make an announcement on the school’s public address system.
Several school officials said that software like Social Sentinel was part of a suite of tools that helped them identify students with mental health issues and those who were in need of counselors.
Our analysis also revealed that Social Sentinel may have major blind spots. Of the 151 posts for which we had metadata on why they were associated with the schools involved, 128 of them were from users whose profiles indicated a nearby location. Forty-six users followed a social media account linked to the school, and five posts were flagged because of keywords related to the schools. (Some posts were associated with a school for more than one reason.) So anyone who doesn’t note their true location in their profile, or doesn’t follow a school-related account, seems unlikely to be flagged by Social Sentinel.
But the most striking finding was the fact that 1,180 of the 1,206 alerts for which we could see the social media platform involved Twitter. There were 15 posts on YouTube, eight on the now-defunct social network Google+, two comments left on schools’ own Facebook pages, and just one Instagram post, from a drone photographer.
Social Sentinel says it monitors a billion posts a day from platforms including Instagram, Twitter, YouTube, and Flickr. But it focuses on public posts, using data from official feeds, rendering it largely ignorant of potential threats on Instagram, Facebook, Snapchat, and other popular platforms with privacy settings that make many posts invisible.
“We believe Snapchat would be an important platform to include in our scanning solutions but they don’t allow it,” said Miley, the Social Sentinel spokesperson. “We assess publicly available data from the social media partners in accordance with approved use cases.”
Some of the most notorious school shooters who left public warning signs on social media did so on social networks other than Twitter. The suspect in the Parkland shooting posed with a gun and knives on Instagram, while the teen charged with the Santa Fe killings was pictured on Facebook wearing a T-shirt bearing the phrase “born to kill.”
Several school administrators told BuzzFeed News that the lion’s share of threats they see on social media are on platforms that are more popular with teens than Twitter and that have higher privacy settings — in particular Snapchat.
Snap, the company that runs Snapchat, told BuzzFeed News that it was concerned about the privacy implications and legality of sharing data with third parties, adding that safety concerns can be reported from within the app to Snap, which can alert authorities.
On Sunday, Aug. 25, police in Winter Springs, Florida, a suburb of Orlando, received reports about a Snapchat post from a 14-year-old boy, posing with what turned out to be a BB gun, captioned, “dont come to school tomorrow.”
School officials were notified, and after the threat was determined not to be credible, Winter Springs High School called parents with the message: “Parents, please remind your students that making comments of this nature, even if made in a joking manner, is something that will not be tolerated and will be immediately reported to Law Enforcement.” The boy was arrested and was charged with making threats to kill, do bodily injury, or conduct a mass shooting.
Winter Springs High School is part of Seminole County Public Schools, which is a Social Sentinel client. Because the company doesn’t have access to Snapchat data, the boy’s post wasn’t flagged. The system did alert the district to a comment posted shortly afterward on the Winter Springs High School Facebook page by a concerned parent, who wrote: “It is illegal what he did and im not comfortable with my son going to school With someone that thinks its a joke to shoot the school up.”
Seminole County Public Schools declined to comment on that incident, but district spokesperson Michael Lawrence told BuzzFeed News by email: “There’s a variety of ways for us to get alerts and notifications of school threats aside from Social Sentinel.”
Social Sentinel’s focus on public social media posts is supposed to ease concerns about privacy. But on the Facebook group “A Better Legacy For Katy ISD,” which describes itself as a “citizen watchdog group that holds Katy ISD accountable,” there was still backlash when the system was about to be introduced.
“How can they monitor my social media? Just because I live in the boundaries doesn’t give them the right!!!! This is ridiculous!!!!!” posted Lori Coddington-funk.
“So we are going to pay big MONEY to a surveillance company to spy on our kids? That’s the solution? Absolutely NOT acceptable. This is not a solution I want or need,” commented Anna Lisa Gendron.
In all, eight of the school districts we contacted had canceled the service. Some, like Katy and Clear Creek, said they got much better intelligence from anonymous online tip lines, which school districts in Texas are mandated by law to operate. (Social Sentinel has also started offering schools an anonymous tip service.)
Katy spokesperson Maria Corrales-DiPetta told BuzzFeed News that administrators received 100 tips within a week of opening the district’s tip line. Polsen said that Clear Creek had received 27 tips related to school safety since the start of the school year in August. Many concerned threats made on Snapchat or by text message.
“That’s the world that we’re dealing with,” Polsen said. ●
Additional reporting by Jeremy Singer-Vine.