The Dark Side of Google: The Workers Who Have to Flag Offensive Ads

Meet the poor souls who make sure online ads don't appear next to offensive content.

May 22, 2017 1:46 pm

Some jobs at Google are not as resume-boosting as one might expect.

To prevent its ads from appearing next to something like an ISIS beheading video, Google hires people to watch offensive content.

Even in an ever-more-automated world, flagging that offensive content remains a job for humans.

Detailing his experience as a Google rater, Lucas Peterson wrote an article for GQ that gives a window into his bleak time at the Silicon Valley behemoth, where he worked before eventually becoming a New York Times travel columnist. “Google will always need humans in what is becoming the modern world’s most Sisyphean task: to act as custodians of the Internet that we’re simultaneously soiling,” Peterson writes.

Quality evaluators, also called raters, review content that violates Google’s terms of service, mostly for violence, hate speech, or pornography. That means workers like Peterson have to sift through as much of that material as possible.

Once the content has been flagged, raters look for clues in the website’s HTML code, its URL, metadata, and keywords that might help a computer perform the same judgment automatically in the future.
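To make that concrete, here is a minimal, hypothetical sketch of what recording those signals might look like. This is not Google’s actual tooling; the function names, the example URL, and the feature fields are all illustrative assumptions. It uses only Python’s standard library to pull the kinds of clues the article mentions (URL tokens, page title, meta keywords) from a flagged page into a flat record that a classifier could later be trained on.

```python
# Hypothetical illustration only -- not Google's pipeline. It extracts the
# signals the article describes (URL, title, metadata, keywords) from a page
# a human rater has already flagged, using just the standard library.
from html.parser import HTMLParser
from urllib.parse import urlparse
import re


class SignalExtractor(HTMLParser):
    """Collects the <title> text and <meta name=...> tags from raw HTML."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}          # meta name -> content (keywords, description, ...)
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"].lower()] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def extract_signals(url: str, html: str) -> dict:
    """Turn one flagged page into a flat feature record for later training."""
    parser = SignalExtractor()
    parser.feed(html)
    return {
        "url_tokens": re.findall(r"[a-z0-9]+", urlparse(url).path.lower()),
        "title": parser.title.strip(),
        "keywords": parser.meta.get("keywords", ""),
        "description": parser.meta.get("description", ""),
    }


# Dummy flagged page, purely for demonstration.
page = (
    '<html><head><title>Example Page</title>'
    '<meta name="keywords" content="example, demo"></head>'
    '<body>...</body></html>'
)
print(extract_signals("https://example.com/some/flagged-page", page))
```

The point of a record like this is that once enough human judgments are paired with machine-readable signals, the repetitive part of the job can gradually shift to software, which is exactly the trade-off Peterson describes.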

Aside from a dark look at the ad-driven business that is Google’s main source of revenue, Peterson’s take offers a window into the unexpected downsides of the so-called “gig economy.”
