For many years, schools have used “censorware” to suppress dirty words, threats, and other undesirable communication on their data networks. The results have sometimes been comical and usually bad. One classic failure mode is known as the “Scunthorpe problem”: software that finds dirty words embedded as substrings of harmless ones, such as “Matsushita” and “cockle.”
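The failure mode is easy to reproduce. Here is a minimal sketch of a naive substring filter; the blocklist and test words are illustrative (the words contained in “Matsushita” and “cockle”), not drawn from any real product:

```python
# A minimal sketch of the "Scunthorpe problem": a naive filter that
# tests for blocked terms as raw substrings, so harmless words that
# happen to contain them get flagged. Blocklist is illustrative only.

BLOCKLIST = ["shit", "cock"]

def naive_filter(text: str) -> list[str]:
    """Return every blocked term found anywhere inside the text."""
    lowered = text.lower()
    return [bad for bad in BLOCKLIST if bad in lowered]

for word in ["Matsushita", "cockle", "chicken"]:
    hits = naive_filter(word)
    print(word, "flagged" if hits else "passes", hits)
```

Both “Matsushita” and “cockle” are flagged while containing nothing objectionable; a filter that matched on word boundaries (e.g., with a regex `\b` anchor) would avoid this particular error, though not the deeper problem of judging words without context.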
As technology advances, these tools don’t get better, only more intrusive. A lawsuit filed by students in Lawrence, Kansas, has brought one of them to public attention. It’s called “Gaggle,” perhaps a portmanteau of “gag” and “Google.” An attorney representing the students says, “Students’ journalism drafts were intercepted before publication, mental health emails to trusted teachers disappeared, and original artwork was seized from school accounts without warning or explanation.”
BuzzFeed News has a piece on Gaggle. According to the article, “It plugs into two of the biggest software suites around, Google’s G Suite and Microsoft 365, and tracks everything, including notifications that may float in from Twitter, Facebook, and Instagram accounts linked to a school email address.”
These words and phrases were reportedly flagged as “questionable content” in an Illinois school district: “knife,” “cutting,” “call the police,” “hang,” “lesbian,” “queer,” “gay.” Questionable content “results in an email notification sent to the school or district’s specified contacts.”
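What the article describes amounts to context-blind keyword matching with an alert hook. The sketch below is hypothetical, not Gaggle’s actual code; only the keyword list comes from the article’s Illinois example, and the notification format and contact address are invented for illustration:

```python
# Hypothetical sketch of context-blind keyword flagging. Any document
# containing a listed phrase triggers a notification, regardless of
# what the document is actually about. Not Gaggle's real implementation.

KEYWORDS = ["knife", "cutting", "call the police", "hang",
            "lesbian", "queer", "gay"]

def flag(document: str) -> list[str]:
    """Return every keyword that appears anywhere in the document."""
    lowered = document.lower()
    return [k for k in KEYWORDS if k in lowered]

def notify(contacts: list[str], hits: list[str]) -> str:
    """Compose the alert sent to district contacts (format invented)."""
    return f"To {', '.join(contacts)}: questionable content flagged: {', '.join(hits)}"

essay = "My history paper on how we hang portraits in the gallery."
hits = flag(essay)
if hits:
    print(notify(["counselor@district.example"], hits))
```

An essay about hanging portraits trips the “hang” keyword and generates an alert, which is exactly the kind of false positive that makes flagging “questionable content” this way so unreliable.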
Lawrence High School even prohibited student journalists from reporting about Gaggle. Sounds as if the officials know there’s a problem and think they can keep the world from finding out.
According to a motion filed Friday by the students’ attorneys, Lawrence High School Principal Quentin Rials prohibited The Budget [a student publication] and its student journalists from reporting on the lawsuit or any Gaggle-related issues, which they have covered extensively for months.
Those officials are using the “For the children!” argument, reportedly telling the students that “their arguments meant they valued student press rights over efforts to curtail student suicide.” The efficacy of a suicide-curtailing system that blocks or casts a chilling effect on discussion of traumatic issues seems dubious. Is the theory that if students can’t talk about it or report about it, it will stop happening?
Making students afraid to talk about such issues may well do more harm than good. People usually clam up about anything sensitive when they think they’re under surveillance.
In one case, Gaggle flagged almost everyone in an art class for alleged nudity in their work. It deleted the files, thus making it impossible to prove its accusations true or false. The students were brought in for disciplinary action. A photography teacher said, “I wish we would have had more of a warning. What is going into someone else’s work where they are supposed to have a right to be able to voice something, I think is a little scary.” It isn’t clear from the article what happened to the students afterward, but there’s no mention of punishment, so the school officials probably realized it was just censorware running amok.
Could the next step be letting the software decide the punishment? We already see that on social media. The old Twitter suspended users for software-claimed violations; if they were guilty, they could confess and be back on with minimal consequences, but maintaining your innocence kept you off until the software hiccuped and let you back on. (This happened twice to me.)
Schools need to respond when students threaten harm to themselves and others, but this doesn’t justify blanket surveillance of all student communication and documents or software-ordered deletion of their work. The tendency of software tools to produce false positives makes the prospect especially bad.