Facebook’s Trigger Words


From October 27 through Election Day, Facebook will ban all new ads about social issues, elections or politics unless they have gone through a detailed authorization process. The rationale, says the company, is to protect the integrity, authenticity and transparency of the US elections. Through a multi-channel campaign of web content, webinars, videos and social media, Facebook has outlined what advertisers can and cannot say during this “ad restriction period” and laid out instructions for staying in compliance.

At Carpenter Group, we are committed to abiding by these restrictions when publishing ads for both our clients and our agency. Indeed, we applaud Facebook for shutting out ads that could skew what many have ranked among the most consequential presidential elections in US history.

But while the ad restriction period is still more than two weeks away, we’ve been dealing with Facebook’s posting constraints for the last few months. In itself, that might not be cause for concern or protest. Whether now or in the week before Election Day, Facebook has both a right and an obligation to monitor and appropriately limit what goes on its site.

But the company’s use of artificial intelligence in this regard has been maddeningly arbitrary and inconsistent. Typically, all it takes is a single “trigger” word, phrase or image to scuttle an ad, regardless of its context. It’s not possible to avoid this trap by consulting a published list of prohibited words, because there isn’t one. Language that’s allowed in one setting (think “election,” “COVID,” or “Black Lives Matter”) may automatically set off alarms in another. And an ad that has run for months without raising an algorithmic eyebrow can suddenly and inexplicably be tossed out. For example, ad copy that reads “How your brand stands up to COVID-19 may have lasting impact” clearly promotes marketing thought leadership without partisan or ideological bias, yet could still be rejected.
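Facebook hasn’t published how its screening works, and we don’t claim to know its internals. But the behavior we’ve seen is consistent with a context-blind keyword match, which the short sketch below illustrates. The blocklist, the flag_ad function and the sample copy are all our own assumptions for the sake of illustration, not Facebook’s actual terms or code.

```python
# Purely illustrative: a context-blind keyword filter.
# The blocklist and flag_ad() are assumptions made for this sketch,
# not Facebook's actual system or published policy.

TRIGGER_TERMS = {"election", "covid", "covid-19", "black lives matter", "tax"}

def flag_ad(copy: str) -> list[str]:
    """Return every trigger term found in the ad copy, ignoring context."""
    text = copy.lower()
    return sorted(term for term in TRIGGER_TERMS if term in text)

neutral_copy = "How your brand stands up to COVID-19 may have lasting impact."
partisan_copy = "The election is your chance to punish COVID-19 failures."

# Both ads trip on overlapping terms; the filter cannot tell
# thought leadership from advocacy.
print(flag_ad(neutral_copy))   # ['covid', 'covid-19']
print(flag_ad(partisan_copy))  # ['covid', 'covid-19', 'election']
```

A filter of this kind flags our neutral thought-leadership line and an overtly political one on exactly the same terms, which matches the pattern we’ve been seeing.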

A case in point: an ad we created for a client to promote the article “Are You Prepared for Tax Law Changes?”, which had run multiple times without incident, was recently rejected by Facebook’s robo-censor without explanation. It wasn’t immediately clear to us whether the culprit was the innocuous word “tax” in the headline or an accompanying image of a federal tax return.

When this sort of thing occurs (and it’s happened to us and our clients with increasing frequency over the last few months), you have the option to appeal and request that a human reviewer take a second look. But whether the reviewer rescinds the ban or upholds it, don’t expect a rationale any more illuminating than “This ad violates Facebook’s policies.” There is no secondary appeal.

In this instance, we made our case to Facebook and the ban was lifted. But the appeal process can be laborious and time-consuming, and it can end up screening out worthwhile, fact-based content without necessarily keeping Facebook off limits to bad actors. If you’re a charitable organization, NGO or other legitimate entity with a time-sensitive ad, the delay may cost you your window of relevance.

To be sure, Facebook’s intent is logical. We get it. After all, the company has rightly faced intense criticism for providing conspiracy-mongers and racists with a forum for their toxic screeds, lies and fulminations. It clearly has a duty to bring its best efforts to bear on shutting such actors out.

But Facebook’s approach is to throw out the peach with the pit, the wine with the cork and, yes, the baby with the bathwater. Surely there’s a better way.