Are Google’s platform policing strategies up to scratch?
Brands across Europe instructed Google to pull their ads from its YouTube platform, and more specifically from its programmatic ad placement system, last Friday after the search giant's AI repeatedly placed ads for advertisers including The Guardian newspaper, ITV, Heinz and Deliveroo against videos containing extremist content or hate speech.
Sites promoting terrorism and antisemitism have been deemed acceptable placements by Google's algorithms for some of its biggest advertisers, and the news is not going down well, although Google has already apologised.
“Recently, we had a number of cases where brands’ ads appeared on content that was not aligned with their values. For this, we deeply apologize,” wrote Google’s Chief Business Officer Philipp Schindler.
“We know that this is unacceptable to the advertisers and agencies who put their trust in us. That’s why we’ve been conducting an extensive review of our advertising policies and tools, and why we made a public commitment last week to put in place changes that would give brands more control over where their ads appear.”
Schindler is promising “a tougher stance on hateful, offensive and derogatory content”, so expect the platform to change its approach to user-generated content by installing tighter controls over what is acceptable to publish.
Some say the blame lies with the programmatic advertising algorithms, which allow ads to be booked by a machine rather than a human being but lack the ability to judge what is appropriate for advertisers at a more human level.
YouTube say they will introduce account-level controls and the ability to exclude certain types of content, but in many ways the video platform is passing the buck back to advertisers whilst it struggles to adapt to the problem.
Google are promising to hire more staff to address the issue, but say they will continue to employ AI and machine learning to “increase our capacity to review questionable content for advertising.”