Google revealed that it removed 3.1 billion ads for violating its policies on misinformation, hate speech and fraud over the past year as the world navigated an unprecedented crisis.
Of those, 99 million removed ads were specifically related to “sensitive events,” including COVID-19 and the U.S. election.
Like many social platforms, Google was quick to ban ads for in-demand products at the start of the pandemic last March, such as hand sanitiser, masks and toilet paper, to protect consumers from price gouging.
As the crisis dragged on, Google created a new policy prohibiting ads referencing COVID-19 conspiracies and misinformation. “As we learned more about the virus and health organisations issued new guidance, we evolved our enforcement strategy to start allowing medical providers, health organizations, local governments and trusted businesses to surface critical updates and authoritative content, while still preventing opportunistic abuse,” Scott Spencer, VP of ads privacy and safety said in a blog post.
Another area where Google ramped up moderation was surrounding the 2020 U.S. election. Google temporarily paused more than 5 million election ads in the U.S. leading up to election day, verified 5,400 election advertisers and blocked ads on more than 3 billion search queries following the election, as misinformation spread about the results. “We made [the] decision to limit the potential for ads to amplify confusion in the post-election period,” Spencer said.
Restricted ads
Google also restricted 6.4 billion ads last year that it deemed “legally or culturally sensitive.” Of those, 550 million restricted ads violated legal requirements related to a regulated industry, 80 million of which were placed by alcohol brands.
Restricted ads are not removed from the platform, but “are only allowed on a limited basis,” the blog post says.
“Restricting ads allows us to tailor our approach based on geography, local laws and our certification programs, so that approved ads only show where appropriate, regulated and legal,” Spencer said. “For example, we require online pharmacies to complete a certification program, and once certified, we only show their ads in specific countries where the online sale of prescription drugs is allowed.”
Content moderation
Google added or updated 40 policies for advertisers and publishers in 2020, which led it to remove ads from 1.3 billion publisher pages, compared to 21 million pages in 2019.
The company stopped ads from rendering on more than 1.6 million domains, including 981 million URLs displaying sexual content, 168 million pages considered “dangerous or derogatory” and 114 million pages promoting weapons. Google also cracked down on fraud, citing an uptick in “cloaking” by fraudulent advertisers looking to promote non-existent businesses or scams.
Google introduced the advertiser identity verification program and business operations verification program to detect such behaviour. As a result of these new policies, Google increased the number of ad accounts it disabled for policy violations by 70% last year, from 1 million to over 1.7 million. The company blocked or removed 867 million ads for cloaking, and an additional 101 million ads for violating misrepresentation policies.