How brands can better ensure they’re funding legitimate news on social platforms

Can AI-enabled tools help deliver brand suitability at scale on social platforms?


In times of crisis, factual reporting is more important than ever, and the spread of misinformation can have damaging consequences for society. Health misinformation is a particularly prevalent example that gained traction during the COVID-19 pandemic, with social media users unknowingly sharing inaccurate information relating to vaccines.

While steps are being taken to mitigate misinformation, such as TikTok banning paid political posts on the platform, a robust, far-reaching approach is required to properly eliminate it. So, what role can brands play in the battle and how can they ensure they’re funding legitimate news on social platforms in times of crisis?

Not all content is created equal

With a global audience reach of almost 92%, video content has fast become the preferred media format among internet users.

Much of this video is user-generated content (UGC), which makes identifying misinformation a minefield compared to the open web.

It’s harder for brands to stay on top of false claims within content that is constantly changing, often rapidly shared and involves moving images and sounds. The stakes are also higher during times of crisis; for example, misleading videos and propaganda relating to the conflict in Ukraine have further underlined the necessity of content-level transparency.

Young people are increasingly relying on social media channels for news updates and have been reported to trust influencers more than politicians to tell them the truth. Influencers create a direct and intimate relationship with their followers, which means false information can be spread easily, even if it is done unwittingly.

Brands need to be aware of the power of UGC; while partnerships with influencers can bring new reach opportunities and product engagement, they must choose content creators carefully.

Reputation, ethics and money on the line

Brands must connect with their consumers and also broaden their reach to attract new audiences. For some, this means stretching beyond traditional media plans and exploring what social platforms have to offer.

Luxury brands such as Gucci have tapped into UGC opportunities on TikTok, jumping on the #GucciModelChallenge trend and teaming up with content creator Francis Bourgeois to boost engagement.

When harnessing the power of social media, it’s important that brands accurately monitor their content adjacency so they align themselves with relevant audiences and factual content.

Not only is it damaging for brands’ reputations if their creative is aligned with misinformation, but it’s also unethical to fund such content — particularly in times of crisis.

Research has shown that 85% of consumers would boycott their favourite brands if their ads appeared next to conspiracy theories, for example.

In the current financial crisis, brands can’t afford to lose consumers or waste their budgets. Brands must make every penny count, and if their creative is aligned with misinformation, they are likely to reach irrelevant audiences and misuse precious inventory.

The human and AI ratio

Historically, misinformation has been difficult to address because methods of identifying such content were labour intensive and hard to scale.

Brands have mainly relied on broad keyword blocklists to detect misinformation, but because these lists are often not regularly updated, they can be inaccurate and miss the nuances present in video content.
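The bluntness of keyword blocklists is easy to demonstrate. The sketch below is purely illustrative (not any vendor's actual implementation, and the blocklisted terms are hypothetical): a naive matcher flags any transcript containing a blocked word, so a factual explainer is caught just as readily as the misinformation the list was meant to stop.

```python
# Illustrative sketch of a naive keyword blocklist.
# Terms here are hypothetical examples, not a real brand-safety list.
BLOCKLIST = {"vaccine", "war", "crisis"}

def is_blocked(transcript: str) -> bool:
    """Flag content if any blocklisted keyword appears in the transcript."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return not BLOCKLIST.isdisjoint(words)

# A factual public-health explainer is blocked purely on keywords,
# while context-free misinformation sails through untouched.
print(is_blocked("How the vaccine was tested and approved"))  # True
print(is_blocked("Miracle cure suppressed by doctors"))       # False
```

Keyword matching sees words, not meaning, which is exactly why the overblocking described next occurs.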

This has led to brands being overcautious and blocking even safe content, meaning they miss out on valuable inventory and audience interactions.

For example, YouTube is home to many grime music videos, which are often blocked because of profanity. However, this is legitimate content from music artists who attract high levels of engagement.

As a result, brands may decide to accept bad language, for example, in exchange for targeting the relevant audiences watching this content.

AI-driven technology can effectively help brands monitor the suitability of their content adjacency on social media platforms at scale. These technologies can also take into account a brand's own risk thresholds, using the Global Alliance for Responsible Media's (GARM) standardised framework to classify content as "low", "medium" or "high" risk.
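In practice, this kind of thresholding can be as simple as bucketing a classifier's risk score into GARM-style tiers and comparing the result against the tier a brand is willing to accept. The sketch below is a hypothetical illustration: the tier boundaries and function names are assumptions, not part of the GARM framework or any specific product.

```python
# Hypothetical sketch: mapping a classifier's 0-1 risk score onto
# low/medium/high tiers, then applying a brand's own risk tolerance.
# Tier boundaries (0.3, 0.7) are illustrative assumptions.

def garm_tier(risk_score: float) -> str:
    """Bucket a risk score into a GARM-style tier (boundaries assumed)."""
    if risk_score < 0.3:
        return "low"
    if risk_score < 0.7:
        return "medium"
    return "high"

def suitable_for_brand(risk_score: float, brand_tolerance: str) -> bool:
    """A brand accepts content at or below its chosen risk tier."""
    order = ["low", "medium", "high"]
    return order.index(garm_tier(risk_score)) <= order.index(brand_tolerance)

# A cautious brand accepts only low-risk adjacency; a bolder one,
# like the grime-video example above, tolerates medium-risk content.
print(suitable_for_brand(0.2, "low"))     # True
print(suitable_for_brand(0.5, "low"))     # False
print(suitable_for_brand(0.5, "medium"))  # True
```

The point of the per-brand tolerance parameter is the trade-off discussed above: a single global blocklist forces one risk appetite on everyone, whereas a threshold lets each brand decide what inventory it is willing to appear beside.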

AI can decipher the sentiment of the content, not just the words used, giving a more robust suitability analysis. This data can be used to inform future brand safety strategies and continuously optimise the process.

However, human moderation should still play a large role; while AI is extremely valuable, there are still nuances of meaning and context it can miss.

A powerful combination of both AI and human moderation makes it easier for brands to target campaigns at scale without the risk of adjacency to misinformation, and to keep pace with ever-evolving trends on the world's fastest-growing platforms.

Fake news travels fast in our constantly evolving digital world. To avoid the trap of inadvertently funding illegitimate news sources, brands need robust, AI-enabled tools and to not underestimate the value of human moderation to succeed in suitability at scale on social platforms.

It’s key for brands to balance both going forwards; not only to protect their reputations and reduce wasted spend, but to maintain ethical advertising at all times, especially during times of crisis.

Emma Lacey is SVP EMEA at data and technology company Zefr.
