Google brings AI tool to arsenal in anti-child sexual exploitation fight – for whose benefit?

[Screenshot: pull-quote from The Verge article on Google’s new AI tool to increase content moderation productivity of some of the very worst material on the internet]

An article from The Verge today described a new AI tool released by Google and intended to assist in the (sadly) unending battle to control the circulation of child sexual exploitation (CSE) material (also known as child sexual abuse material and, incorrectly, as “child pornography”) online.

This tool is intended to work alongside extant technologies like PhotoDNA and still relies upon the decision-making of a human content moderator. But unlike PhotoDNA, Google claims that the material does not have to already be known to a database in order to be identified as CSE. The tool is not designed to work alone; rather, it is intended to provide an assist to human moderators who already do this work:

While historical approaches to finding this content have relied exclusively on matching against hashes of known CSAM, the classifier keeps up with offenders by also targeting content that has not been previously confirmed as CSAM. — Google’s press release, Sept. 3, 2018
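To make that contrast concrete, here is a minimal sketch, in Python, of the difference between hash matching against a database of known material and classifier-based scoring of previously unseen images. Everything in it – the names, the stub model, the 0.8 threshold – is hypothetical; Google has not published implementation details, and no real PhotoDNA or Google API appears here.

```python
# Purely illustrative sketch: contrasts hash matching against a known
# database (the PhotoDNA-style approach) with classifier scoring of
# previously unseen images. All names, thresholds, and the stub model
# are hypothetical and do not reflect Google's actual tool.
from dataclasses import dataclass


@dataclass
class Upload:
    perceptual_hash: str  # stand-in for a PhotoDNA-style robust hash
    pixels: bytes         # stand-in for decoded image data


class StubClassifier:
    """Placeholder for a trained deep neural network."""

    def predict_probability(self, pixels: bytes) -> float:
        return 0.5  # a real model would return a learned estimate


def triage(upload: Upload, known_hashes: set, model: StubClassifier) -> str:
    # Hash matching only catches material that has already been
    # confirmed by humans and added to the database.
    if upload.perceptual_hash in known_hashes:
        return "known-match"
    # A classifier can also flag material never seen before, but it
    # only prioritizes it for human review; it does not act on its own.
    score = model.predict_probability(upload.pixels)
    return "priority-review" if score > 0.8 else "standard-review"
```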

The release is short on the details of what goes into identifying material as likely child sexual exploitation; my guess is that it’s going to be a constellation of factors. The Google press release describes the technique as “deep neural networks for image processing,” used to “assist reviewers sorting through many images by prioritizing the most likely CSAM content for review,” so, evidently, the main criterion is the makeup of the image itself.

This could come in the form of a number of content-related factors – pixel hue and percentages, file type, file naming – but it’s possible that other aspects captured in metadata or in other ways could be analyzed by the tool: source or origin, designation, IP address…who knows. It could also be that the focus is on a couple of very specific factors. These details aren’t being made known – assuredly in part because disclosing them could assist people in beating the tool. Several factors could be used and weighted in a variety of ways to make a determination on a given piece of content. But these are all just guesses. While Google’s tool appears to be highly sophisticated, just a few years ago I was in a meeting in which an executive from a social media firm described his firm’s Countering Violent Extremism (CVE) content moderation strategy as not much more evolved than targeted IP monitoring and banning.
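Since these are only guesses, the following is nothing more than a sketch of what “several factors, weighted in a variety of ways” could look like in code; every factor name and weight is invented for illustration and implies nothing about what Google actually uses.

```python
# Hypothetical illustration of weighting several signals into a single
# priority score. The factors and weights are invented for this post
# and do not describe Google's classifier.
HYPOTHETICAL_WEIGHTS = {
    "image_model_score": 0.7,  # output of an image classifier, 0.0-1.0
    "filename_signal":   0.1,  # suspicious file-naming patterns
    "metadata_signal":   0.1,  # source/origin or other metadata cues
    "network_signal":    0.1,  # e.g., upload-pattern or IP-based cues
}


def combined_priority(signals: dict) -> float:
    """Weighted sum of per-factor scores, each assumed to be in [0, 1]."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in HYPOTHETICAL_WEIGHTS.items())
```

Under these made-up weights, a call like `combined_priority({"image_model_score": 0.9, "filename_signal": 0.4})` would return 0.67.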

The Google tool is not a one-stop solution; rather, it is a triaging and sorting tool that assists content moderators in identifying CSE material not yet known to PhotoDNA, and it could also be used by organizations that are not already using PhotoDNA. In the coverage from The Verge, the deputy CEO of the UK-based Internet Watch Foundation, the NGO that intends to use this software, appears skeptical but willing to try it. He also notes that this is a tool that will be used in content triaging, so humans will still be key to the process.
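In practice, “triaging” here presumably means something as simple as ordering a moderator’s queue so that the highest-scoring items are seen first. The sketch below (with made-up item IDs and scores) shows only that ordering step, with the human decision still at the end of the line.

```python
# Illustrative only: ordering a review queue by descending classifier
# score so the likeliest material is looked at first. IDs and scores
# are made up; the final decision on each item remains a human one.
flagged = [
    {"id": "item-a", "score": 0.42},
    {"id": "item-b", "score": 0.97},
    {"id": "item-c", "score": 0.71},
]

review_queue = sorted(flagged, key=lambda item: item["score"], reverse=True)

for item in review_queue:
    print(f"queued for human review: {item['id']} (score {item['score']})")
```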

Of particular note, this tool will do nothing to stem the production or uploading of CSE; like PhotoDNA, it will only stem the tide of its circulation. According to The Verge, “In one trial, says Google, the AI tool helped a moderator ‘take action on 700 percent more CSAM content over the same time period.'” This leads one to immediately wonder just how much of this material exists to be removed?(1) Further, how much was the moderator exposed to in the context of a 700% increase – taken literally, eight times the baseline volume in the same amount of time? How was that person supported when asked to complete these tasks?

Finally, does this tool actually help moderators who have to contend with this material, or does it simply help firms that use it meet liability and other kinds of goals around removing the material? How are the moderators being supported throughout this process? Google’s press release references the company’s membership in a few industry-wide coalitions that, among other things, work to make conditions better for commercial content moderators. But, as I discuss in my forthcoming book, the efficacy of these coalitions’ suggestions and best practices (all of which are voluntary among select industry partners and, to my knowledge, neither mandatory nor enforceable in any real way) has not been tested or measured publicly. The tool seems likely to streamline or increase productivity for the human content moderators responsible for seeking out CSE content – but, as we know from the lawsuit underway in Washington state, exposure to this content, particularly long-term and/or at high volume, can allegedly have extremely deleterious effects.

Ultimately, The Verge piece sums up Google’s announcement nicely in its own pull-quote: “GOOGLE’S AI TOOL TRIAGES FLAGGED MATERIAL, HELPING MODERATORS WORK FASTER.”

  1. In a forthcoming article I’ve written about online abuse and the ideology of PhotoDNA and mainstream commercial social media, I quote extensively from PhotoDNA’s creator, Dartmouth-based computer scientist Hany Farid. In his own article on the genesis of the product, he states the following:

In 2016, with an NCMEC-supplied database of approximately 80,000 images, photoDNA[sic] was responsible for removing over 10,000,000 [child sexual exploitation] images, without any disputed take-downs. This database could just as easily be three orders of magnitude bigger, giving you a sense of the massive scale of the global production and distribution of [child sexual exploitation] (Farid 2018, p. 596).

Farid, H. (2018). Reining in Online Abuses. Technology & Innovation, 19(3), 593–599. https://doi.org/10.21300/19.3.2018.593