The struggle to ensure brand safety on social media has taken a new turn with the recent advertiser exodus from X, the platform formerly known as Twitter. Paris Hilton’s 11:11 Media has become the latest brand to pull out of a partnership, underscoring growing concerns over antisemitic content on the site.
Paris Hilton’s 11:11 Media had agreed to promote X through a high-profile, two-year campaign in which Hilton would showcase key platform features, including live video, live e-commerce, and X Spaces (live audio). The collaboration, whose terms were confidential, also included a revenue-sharing component. Despite the potentially lucrative agreement, 11:11 Media walked away, with president and COO Bruce Gersh citing the presence of antisemitic and pro-Nazi content on the platform as the reason for ending the deal. CNN first reported the collapse of the partnership, a significant loss for X.
X’s troubles with attracting and retaining advertisers are not isolated. Big names like Apple, Disney, Comcast, IBM, Warner Bros., Paramount, and Lionsgate have similarly paused or completely ceased their advertising spend with X, highlighting widespread concerns about adjacency to hate speech and the potential reputational risk.
Brand safety controls are critical for building and maintaining advertisers’ trust. X CEO Linda Yaccarino has assured advertisers that such controls are in place, yet a report by Media Matters contradicted those assurances, documenting ads running next to hateful content. X claimed that Media Matters manipulated the service to produce the undesirable placements and sued the watchdog for defamation, though it did not dispute the authenticity of the ads or the nature of the content they appeared alongside.
The presence of hate speech on social media platforms is a pressing issue, and X finds itself in a particularly precarious position. Elon Musk’s own endorsement of an antisemitic conspiracy theory shared on the platform has deepened advertisers’ wariness, damaging X’s reputation and financial stability.
The trend bodes ill for X’s ad business, which was already projected to see a 54.4% year-over-year decline in worldwide ad spending from 2022 to 2023. Compounding these difficulties, Musk disclosed a roughly 60% drop in U.S. ad revenue after facing accusations of antisemitism from the Anti-Defamation League, accusations that prompted him to threaten a lawsuit against the organization.
In a bid to restore the platform’s appeal to advertisers and recover lost ad revenue, X has experimented with special features, such as a custom icon created in partnership with Paris Hilton. But given the scale of the advertiser retreat and the brand safety issues plaguing the platform, the future of X’s advertising model remains uncertain.
For advertisers and social media platforms alike, this serves as a sobering reminder that ensuring a safe environment free of hate speech is not merely a moral responsibility but also a practical necessity for business sustainability. As platforms navigate these complex challenges, they must prioritize robust brand safety mechanisms to retain the trust and investment of their advertising partners.
The case of X and its spiraling advertiser crisis is a cautionary tale for every stakeholder in the digital advertising landscape: when content moderation fails, the ripple effects reach the revenue streams and reputations of everyone involved.