Big Tech accused of ‘hollow promises’ over disinformation

The European Commission’s revised Code of Practice on Disinformation does not contain strong enough language to be considered anything other than “hollow promises for favorable publicity”, according to misinformation watchdog NewsGuard.

The revised Code, released last week, aims to establish “a broader range of commitments and measures to counter online disinformation” than the original Code, which was established in 2018 to create self-regulatory standards for tackling disinformation and misinformation online.

The European Commission, in partnership with the Code’s signatories, set out to revise the Code beginning in June 2021.

The “strengthened” Code now contains 44 commitments and 128 new measures, covering topics including transparency in political advertising, demonetization practices, and fact-checking.

NewsGuard is a signatory alongside major tech and advertising companies including Adobe, Clubhouse, Google, Meta, Microsoft, Reporters Without Borders (RSF), Twitch, Twitter, TikTok, Vimeo, and the World Federation of Advertisers (WFA). It was nevertheless intensely critical of what it sees as an avoidance of accountability on the part of the major digital companies responsible for the publication and spread of mis- and disinformation around the world.

The watchdog points out that Microsoft was the only large platform to commit to the Commission’s recommendation that signatories provide users with access to “indicators of trustworthiness, focused on the integrity of the source” (an example of one such indicator is NewsGuard’s own “nutrition labels”).

Meanwhile, Meta, Google, Twitter, and TikTok refused to make the same commitment.

“They said the Guidance only stated that they ‘could’ empower their users, without this effective step being made mandatory,” reads NewsGuard’s statement.

Steven Brill, co-founder and CEO of NewsGuard, added: “By playing word games and emphasizing ‘could’ instead of ‘should’, these platforms are finally acknowledging that since the original Code in 2018 they have only been paying lip service to this vital – and readily available – measure […] that would empower users with information about the reliability of sources rather than continue to bombard them with content choices made by secret, unaccountable algorithms intended to empower their eyeballs-at-all-costs business model, rather than empower the people they are supposed to be serving.”

“This key initiative of the Code now appears to be dead and buried – until the Commission acts to end its naïve dependence on the willingness of these platforms to act in the public interest voluntarily.”

Multiple studies have shown that credibility cues, such as those provided by NewsGuard, improve the “news diets” of misinformation consumers.

Jake Dubbins, co-founder and co-chair of the Conscious Advertising Network, agreed with the sentiment expressed by NewsGuard.

“This looks like a very successful lobbying operation by tech platforms to make their responsibilities optional. A textbook example of writing the homework assignment and then marking it, too.”

Dubbins draws a parallel between the failure to stop the spread of misinformation online, despite the availability of tools that demonstrably reduce the problem without resorting to censorship, and the failure to build political and social consensus around climate change policy.

The Intergovernmental Panel on Climate Change (IPCC) published its latest report in February, which clearly stated: “Any further delay in concerted global action will miss a brief and rapidly closing window to secure a liveable future.”

Despite this urgency, Dubbins states that “the whole industry, legislators, and society generally are experiencing massive cognitive dissonance about the scale of the threat of misinformation to democratic institutions, to public health, to social cohesion, and to climate action.”

Mis- and disinformation have proven to be significant issues in political elections and in public health crises such as the Covid-19 pandemic.

The spread of the lie that the 2020 US presidential election was stolen from Donald Trump was, as the January 6 committee in the US is currently describing in detail, a major factor leading to the attempted insurrection against the US government.

The Facebook-Cambridge Analytica scandal, which broke in 2018, revealed how political advertisers for then-candidate Donald Trump and the UK’s Vote Leave campaign used data illegally harvested from Facebook users to target individuals with political ads in 2016.

Since then, Facebook has attempted to change its tune, with spokespeople highlighting the company’s partnerships with more than 80 independent, certified fact-checking organizations and its efforts to reduce or remove content containing false information, particularly content relating to Covid-19 or intended to suppress voting.

GroupM expects political advertising, including both ads for candidates and political issue campaigns, to reach $13bn in US adspend this year thanks to the midterm elections.

Dubbins continues: “This code effectively says that tech platforms ‘could’ choose to act rather than ‘should’. It means that we will be responding reactively to real-world harms of misinformation rather than proactively dealing with what is a huge and unprecedented problem. It makes it more likely that we will miss that closing window.”

It is worth noting that the revised Code also takes positive steps toward demonetizing disinformation, to which the major platforms have committed. The companies will have seven months to submit a first set of reports demonstrating the effectiveness of their revised actions.

But to NewsGuard and Dubbins, the Code represents a clear missed opportunity for much-needed regulation, especially given the rapidly closing window to act on climate change, humanity’s largest collective challenge.

As Dubbins asks: “What does an ‘unliveable’ future look like?”
