Google has promised an announcement "in the coming days" on how it is tackling the funding of extremist content via ads on its platforms.
A growing number of brands in the UK — including the government, L'Oreal, McDonald's UK, HSBC, and ad agency Havas UK on behalf of all of its clients — suspended their advertising from YouTube and Google this week over fears their ads were appearing next to questionable content and funding their creators.
Google's executives were summoned to appear in front of the UK government last week after ads for taxpayer-funded services were found next to extremist videos, following an investigation by The Times newspaper. Google must return later this week with a timetable for the work it is doing to prevent the issue from occurring again.
At a breakfast briefing with journalists on Monday, before taking the stage at Advertising Week Europe, Google's EMEA president Matt Brittin said the annual ad industry event gave Google a "good opportunity to say first and foremost, sorry, this should not happen, and we need to do better."
Brittin added: "There are brands who have reached out to us and are talking to our teams about whether they are affected or concerned by this. I have spoken personally to a number of advertisers over the last few days as well. Those that I have spoken to, by the way, we have been talking about a handful of impressions and pennies not pounds of spend — that's in the case of the ones I've spoken to at least. However small or big the issue, it's an important issue that we address."
The advertiser boycott of YouTube has mainly been limited to brands' UK spend, but Brittin conceded that the company is "having a conversation with global players as well," and that any updates it makes to its systems would be effective worldwide.
A "comprehensive review" is under way that has "accelerated" in recent days, Brittin said, adding the company had invested "millions" and has thousands of people who work on policies, controls, and enforcement to ensure bad ads — or bad ad placements — don't make it into the system.
However, Brittin explained that the issue of brand safety is not as simple as it may seem. If Google excluded any content related to "war" from YouTube, for example, that would also exclude important documentary content about war zones. Some 400 hours of video are uploaded to YouTube every minute and thousands of sites are added to the AdSense network each day, which makes policing difficult. And Google has no intentions of curtailing the free speech of its users.
Brittin said: "It's not our job to be a censor, it's for the government. So you will find online content that you violently disagree with, that you find incredibly distasteful, but that is a legitimate point of view and not illegal. And that is one of the joys of the web and the voices that are there. That's different to the issue of what's safe for advertisers, which is more tightly defined."
Brittin said Google plans to "raise the bar" when it comes to the policies that decide which content on its platforms is safe for advertising.
Google will also make its advertiser controls easier to use, including changing some of the default settings to be more stringent on brand safety.
Brittin said 98% of content that does not meet Google's policies is removed within 24 hours, adding that "we can get better on that as well."
Google declined to provide a timetable on when the changes will be announced, but Brittin said the company would provide more detail "in the coming days."
UK advertiser trade body ISBA suggested last week that one solution to the brand safety problem on YouTube could be to put all newly uploaded content in quarantine until Google manually classifies it as appropriate to serve ads against.
When Business Insider put that solution to Brittin, he responded: "That's a kind of example of the kind of things we are looking at all the time. We don't want to go into too much detail because one of the challenges for us is that we want to have a system that works, but you don't want to have a system that's easy to game for people. We don't want to say too much about what we are considering and rolling in or out."
But he added: "I think that's a really helpful example of ISBA thinking about how can you improve on this and it demonstrates the kind of dialog we are having with them, with the big agencies, with advertisers large and small to look at how we can improve. It's a really thoughtful suggestion and we are looking at a range of things: controls, policies, and enforcement."
Brand ads appearing in inappropriate places online is not a new phenomenon, but the issue was thrust into the spotlight after an investigation from The Times in February found a number of big brand ads were appearing next to violent or hateful content on YouTube. Those brands were also inadvertently funding the content creators through YouTube's ad revenue share scheme.
The issue is not confined to Google; it affects programmatic advertising in general.
When brands pay for online ad campaigns, they usually do not buy each ad placement individually. Instead, they use a method called programmatic buying, in which automated systems target large audiences across a swathe of websites or different YouTube videos.
Programmatic advertising is seen as an efficient way to reach specific audiences online, but it can also risk some ads inadvertently appearing next to undesirable content if proper whitelists, blacklists, and other safety checks are not put in place by both the ad platform and the ad buyer.
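To make the whitelist/blacklist idea concrete, here is a minimal sketch of how an ad buyer might screen candidate placements before bidding. Everything in it is hypothetical: the site names, the `is_brand_safe` function, and the strict allowlist-only mode are illustrative assumptions, not Google's or any ad platform's actual API.

```python
# Hypothetical sketch of placement screening in a programmatic buy.
# Real platforms expose far richer controls; this only illustrates the
# allowlist ("whitelist") / blocklist ("blacklist") logic described above.

ALLOWLIST = {"news.example.com", "sports.example.com"}  # known-good sites
BLOCKLIST = {"extremist.example.net"}                   # never serve ads here

def is_brand_safe(domain: str, allowlist_only: bool = False) -> bool:
    """Return True if an ad may be served on this domain."""
    if domain in BLOCKLIST:
        return False
    if allowlist_only:
        # Stricter default: serve only on explicitly approved sites.
        return domain in ALLOWLIST
    return True

candidates = ["news.example.com", "extremist.example.net", "unknown.example.org"]
safe = [d for d in candidates if is_brand_safe(d, allowlist_only=True)]
print(safe)  # only the allowlisted domain survives
```

The trade-off mirrors the one Brittin describes: a blocklist alone lets ads appear on unvetted sites like `unknown.example.org`, while an allowlist-only policy is safer but shrinks the available inventory.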