YouTube is heading for its Cambridge Analytica moment


This week, several major advertisers suspended their YouTube campaigns after a report that pedophiles have latched onto videos of young children, marking time stamps that show where children appear and objectifying them in YouTube’s comments section.

YouTube responded in its usual way, calling the actions “abhorrent” and sending a memo to advertisers outlining changes it’s making to prevent this kind of activity in the future.

The thing is, YouTube — which is owned by Google and is estimated to generate more than $10 billion a year in ad revenue — has had this class of problem for years, and whatever it’s doing, it’s not enough.

About five years ago, my wife and I were startled to find our three-year-old son watching a seemingly innocent “Thomas the Tank Engine” video that was dubbed over with disgusting language. Whoops! (Fortunately, my son wasn’t scarred, and has since learned to swear with great creativity and vigor by listening to me work around the house.)

We got away easy.

In 2017, advertisers pulled their ads after seeing them appear next to videos promoting terrorist content. An exec said at the time, “We’ve got a comprehensive review under way — we have for some time — looking at how can we improve here and we are accelerating that review.”

Last month, BuzzFeed reported YouTube was hosting images of graphic bestiality alongside kids’ videos. The company had already pledged to do a better job of catching and removing these videos back in April 2018. This time it reiterated that pledge: “We’re working quickly to do more than ever to tackle abuse on our platform, and that includes developing better tools for detecting inappropriate and misleading metadata and thumbnails so we can take fast action against them.”

This week, after the pedophile comments scandal, BuzzFeed reported that the company had removed two cartoons from the YouTube Kids app that had been spliced with footage of a man explaining how to commit self-harm. In response, YouTube said it “work[s] hard to ensure YouTube is not used to encourage dangerous behavior and we have strict policies that prohibit videos which promote self-harm.”

You get the idea.

To be fair, YouTube has taken concrete steps to fix some problems. Over the past couple of years, scammers have targeted major news events with misleading videos, like clips claiming shootings such as the one in Parkland, Florida, were staged by crisis actors. In January, the company said it would stop recommending such videos, effectively burying them. It also favors “authoritative” sources, like mainstream media organizations, in search results around major news events.

And YouTube is not alone in struggling to fight inappropriate content that users upload to its platform. Pinterest took steps last year to block misinformation about vaccines, but it was fairly easy for CNBC to find some search terms that the company had missed. Facebook and Twitter have been raked over the coals repeatedly for allowing their platforms to be used to spread everything from suicide videos to misinformation meant to sway elections or spur genocidal behavior.

The problem isn’t really about YouTube, Facebook or any single company.

The problem is the entire business model around user-generated content, and the whack-a-mole game of trying to stay one step ahead of people who abuse it.

Companies like Google and Facebook upended the traditional media business by giving regular people a friction-free way to upload and share whatever they wanted. As users uploaded masses of words and links and hours of video, these platform companies amassed huge audiences, then sold ads against them. When people can share whatever they want, these platforms turn into a mirror image of the human psyche — including the ugly parts.

These companies have human screeners who try to keep on top of the grossest material and take it down before it spreads too far. But it’s not practical — and may be physically impossible — to hire enough screeners to catch every violation, or to screen every piece of content before it’s posted instead of after. They are investing in computer algorithms and artificial intelligence as well, and these programs do work — there’s almost no porn or nudity on YouTube or Facebook, for instance — but they’re not 100 percent effective, especially for altered videos or political content.

TV, newspapers and other traditional media can be sued, or fined by the government, if they publish this kind of material.

But tech platforms that rely on user-generated content are protected by Section 230 of the 1996 Communications Decency Act, which says platform providers cannot be held liable for material users post on them. That made sense at the time — the internet was young, and forcing start-ups to monitor their comments sections (remember comments sections?) would have exploded their expenses and stopped growth before it started.

Even now, when some of these companies are worth hundreds of billions of dollars, holding them liable for user-generated content would blow up these companies’ business models. They’d disappear, reduce services or have to charge fees for them. Voters might not be happy if Facebook went out of business or they suddenly had to start paying $20 a month to use YouTube.

Similarly, advertiser boycotts tend to be short-lived — advertisers go where they get the best return on their investment, and as long as billions of people keep watching YouTube videos, they’ll keep advertising on the platform.

So the only way things will change is if users get turned off so badly that they tune out.

We started to see some hints of this with Facebook after the Cambridge Analytica scandal last year. While Facebook had been caught violating users’ privacy dozens of times, the mere hint that a political consultancy might have used Facebook data to help elect Trump (although this is far from proven) set people off. Congress conducted hearings. Further privacy scandals got attention they never used to. People noisily deleted their accounts. Growth has largely stalled in the U.S., and younger users are abandoning the platform, although this might be more because of changing fashions and faddishness than any reaction to the scandals facing it. (Anyway, kids are still flocking to Facebook-owned Instagram.)

YouTube has so far skated free of any similar scandal. But people are paying closer attention than ever before, and it’s only a matter of time before a scandal hits that is big enough to actually start driving users away.

In the meantime, if you post videos of your kids to YouTube, set them to private.
