    Here's how advertisers are getting tech companies to clean up their act or risk losing billions

    Advertisers haven't been afraid to pull money out of Facebook or YouTube campaigns following revelations of controversial content hosted on the platforms, but they always seem to come crawling back.

    They're caught in a Catch-22. Tech companies may get caught hosting content like terrorist videos or targeted comments from pedophiles, but their massive audiences make the platforms all but impossible for advertisers to quit for good.

    But at least one major advertising group, representing close to a trillion dollars' worth of buying power a year, has started to formulate plans to get tech companies to clean up their mess.

    The World Federation of Advertisers (WFA), whose members include PepsiCo, P&G and Diageo, has called on its members, in their capacity as "the funders of the online advertising system," to pressure platforms to do more to prevent their services and technology from being "hijacked by those with malicious intent."

    Raja Rajamannar, Mastercard's CMO, took over as president of the WFA in late March. Since internet companies receive so much of their revenue from advertising, "they cannot completely ignore the rightful preferences of the advertisers," he said in an interview with CNBC this week.

    He also said the issue is broader than just providing a "safe" place for brands. Referring to the New Zealand shooting that was live-streamed on Facebook and passed around on YouTube and Twitter, he said, "it is not a brand safety issue. It's a societal safety issue, and as marketers we have a responsibility to society."

    The WFA in late March urged its members to "think carefully about where they place advertising" and to consider a moral responsibility bigger than the effectiveness of social media platforms for brands. The call came after reports of comments from pedophile groups on YouTube videos, content about self-harm and suicide on Instagram and the live-streaming of a mosque shooting in New Zealand on Facebook.

    But despite the problems, walking away isn't as simple as it might seem, Rajamannar said.

    "There are these big social media giants who have got a humongous reach, they've got a humongous ability to precisely target the right kind of an audience, which you cannot ignore," he said. "You cannot walk away from that scale just like that."

    The potential trade-offs of that scale have become more evident to marketers in recent months.

    "Do you want live streaming of a shooting happening? You definitely don't want that," he said. "There is some tangible action that is happening. Is it adequate? No. And should it be expedited? Yes."

    He said the near-term ask of the platforms is straightforward: a clear plan.

    "We are saying, 'Show us the game plan.' We want to see the game plan clearly," he said.

    Rajamannar said one common answer from platforms, when asked how they're fixing content issues, is that they're adding more people. Facebook and YouTube have hired thousands of people in recent years to monitor content on their respective platforms.

    "That's not exactly a plan. So we are saying, is it a technology-based solution … Is it people-based? Is it a hybrid? We are asking them to think through their strategy and come and share with us," he said.

    Last week at the Association of National Advertisers Media Conference in Orlando, Procter & Gamble Chief Brand Officer Marc Pritchard said his company plans to direct money toward platforms that exercise control over content and comments, including linking opinions to a poster's true identity and ensuring balanced perspectives. He didn't say he'd pull all spending from platforms that fall short, but said P&G's "preferred providers of choice" will "elevate quality, ensure brand safety and have control over their content."

    Rajamannar said he agrees with Pritchard, but said it isn't necessarily easy to accomplish.

    "In principle, I agree with Marc," he said. "How you get there, there is not one single route."

    He said the industry can try multiple routes. One might be to shelve the entire ad ecosystem and "rebuild it from zero." He said that's a possibility, but a challenging one: "You have a business to run. You have results to deliver."

    He said the advent of technologies like blockchain also holds promise.

    "But you start with theory and then see how you can make it practical," he said. "Even as we are trying to re-imagine the entire ad ecosystem, you also want to make sure that you … refine and make the current ecosystem very viable, very safe and it should be transparent."

    Facebook's VP of global marketing solutions Carolyn Everson said in a statement in response to Pritchard's comments: "We applaud and support Marc Pritchard's sentiments for again making a bold call for our industry to collectively do more for the people we serve. We continue to invest heavily in the safety and security of our community and are deeply committed to ensuring our platforms are safe for everyone using them."

    In response to the WFA's call for platforms to better manage the harmful content ads can appear next to, Facebook pointed to a recent blog post from its COO Sheryl Sandberg, which outlined steps including restrictions on who can go "Live" and using artificial intelligence tools to identify and remove hate groups.

    Google didn't respond to a request for comment.

    Brands are obviously invested in ensuring their ads don't appear next to nasty content. But Rajamannar said social media companies also have those concerns.

    "The social media company doesn't want that to happen either," he said. "The intentions are not bad. The intentions are very good. But the key thing is how do you translate that good intention into action that gives you the right outcomes that you're looking for which is brand safety."

    Until these issues are ironed out, he said, marketers do have some options, such as choosing reputable publishers or using whitelists and blacklists. Another nascent option, he said, is third-party technology that works in a programmatic setting so that an advertiser doesn't bid on an ad slot if it would appear on a harmful piece of content.
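
    To illustrate the kind of pre-bid check Rajamannar describes, here is a minimal sketch of a brand-safety filter a bidder might run before submitting a programmatic bid. The field names loosely follow OpenRTB conventions, and the category codes, domain lists and `should_bid` function are illustrative assumptions for this example, not any specific vendor's API.

```python
# Minimal sketch of a pre-bid brand-safety filter (illustrative only).
# The bid_request structure, category blocklist and domain lists are
# assumptions for this example, not a real vendor's API.

# IAB-style content categories this hypothetical advertiser refuses to fund.
UNSAFE_CATEGORIES = {"IAB25", "IAB26"}  # non-standard / illegal content

BLOCKED_DOMAINS = {"example-bad-site.test"}  # advertiser-maintained blacklist
ALLOWED_DOMAINS = {"trusted-news.test"}      # optional whitelist of reputable publishers


def should_bid(bid_request: dict, use_whitelist: bool = False) -> bool:
    """Return True only if the placement passes brand-safety checks."""
    site = bid_request.get("site", {})
    domain = site.get("domain", "")
    categories = set(site.get("cat", []))

    # 1. Never bid on explicitly blacklisted domains.
    if domain in BLOCKED_DOMAINS:
        return False

    # 2. Never bid when the page is tagged with an unsafe content category.
    if categories & UNSAFE_CATEGORIES:
        return False

    # 3. Optionally restrict spend to a whitelist of reputable publishers.
    if use_whitelist and domain not in ALLOWED_DOMAINS:
        return False

    return True


if __name__ == "__main__":
    request = {"site": {"domain": "trusted-news.test", "cat": ["IAB12"]}}
    print(should_bid(request))   # True: clean placement
    request["site"]["cat"].append("IAB25")
    print(should_bid(request))   # False: unsafe content category
```

    In practice this logic would run inside the bidder's decisioning pipeline in the milliseconds before an auction closes, which is why it has to rely on pre-classified signals rather than analyzing page content on the fly.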

    "The social media folks, they are trying to put more people to look after, they're trying to improve their algorithms, and all this stuff," he said. "But the whole situation for the brand safety, there has been some movement in a positive direction, but it's not adequate."
