ChatGPT does threaten journalism, but not in the way you might think
ChatGPT may not make journalists obsolete, but its generation of inaccurate or false information should make any news outlet nervous.
The world of work has had many a watershed moment. When email was born, for example. That was a big thing. Or that time we had a global pandemic and realised that, thanks to the internet we'd already had at our disposal for over 20 years, we didn't need to be in the office every day of the week.
A more recent one arrived on 30 November 2022, when ChatGPT was introduced to the world. Reactions to the chatbot, ranging from panic that it would steal our jobs to derision at its shortcomings, were polarising, to say the least.
Even so, it has quickly become the fastest-growing web platform ever released, registering a stomach-churning 100 million users within two months of its launch. For context, it took TikTok nine months to reach this number. For Instagram, it was two years. And despite still being very much in its infancy and regularly suffering from what some call "whale fail" (when traffic to a site has grown to such gargantuan proportions that the servers simply can't keep up), it has managed to find its way into the top 50 most visited websites in the world, according to digital-adoption.com. Number 44, to be exact.
Of course, given that it can produce a massive amount of text in mere seconds, the narrative surrounding its threat to journalism was acutely doomsday-ish in the beginning. Cries of “it’s coming for our jobs!” could be heard far and wide, with everyone from The Atlantic to Al Jazeera to The Media Leader’s own Raymond Snoddy predicting this outcome.
Far from the Pulitzer
But I, like many journalists, have come to realise that it isn't as simple as that. Despite claiming that it has "the potential to automate certain tasks traditionally performed by journalists, such as generating news articles" (this was its response when I asked it how it might affect journalists' jobs), it's nowhere near able to do the job of a reporter, or even to craft a story with the nuance and feeling of a person.
Various media organisations have dabbled with the bot, curious to see what it might be capable of, only to find its attempts at articles riddled with gross factual errors. In fact, when The Media Leader's editor, Omar Oakes, asked the bot to write short biographies for two of his reporters, it claimed they had both won the Pulitzer Prize. Whilst I'd never suggest this couldn't possibly be true in the future, it certainly isn't yet.
But even if news organisations have cottoned on to the fact that ChatGPT probably has as much journalistic capability as a 16-year-old on two weeks of work experience, they are looking into its potential to support staff with tasks such as summarising large bodies of text (research papers, for example), transcribing interviews and fact-checking. Using it as a means to cut down on the time-consuming, menial tasks that journalists have to perform could be useful, but we're going to have to fight quite hard to ensure the public knows that using it as a source of factual information is a disaster waiting to happen.
Lack of credibility makes it a credible threat
This, to me, is where the chatbot – and any similar AI iterations launched by the Big 5 – poses a real threat to journalism. Not only because it gets stuff wrong, but because it doesn't link back to or even cite its sources – something that will affect both the bottom line of news organisations (the bot doesn't compensate any of the news sources it pilfers from) and the integrity-driven relationships they have with their audiences.
Journalists are duty-bound to clearly state where they've got their information from, yet AI chatbot technology is under no such constraints. In fact, when asked about its sources, ChatGPT stated that it has been "trained on a large corpus of text data, including news articles from various sources," but that it doesn't have the "ability to verify the accuracy or credibility of the information" it retrieves.
As a journalist, it’s in my nature to ask these kinds of questions, but is a general user going to know to do the same? I don’t know. The idea that our news could be generated by algorithms that are not beholden to facts or verified sources is terrifying. To me, it goes against everything that journalism is supposed to stand for — to seek the truth of a story based on reliable information so that it can keep the public informed.
So whilst I don't believe ChatGPT is about to make journalists obsolete, I do think it poses a threat. Fake news and disinformation are already rife, but with the bot generating stories from a muddy mix of uncited sources, we could be entering an era where news just becomes one giant game of Chinese whispers.
Bianca Barratt is a freelance journalist and editor and writes features across business, lifestyle and culture. A former lifestyle writer and shopping editor for the London Evening Standard, she is now a senior contributor to Forbes Women and has written for titles including The Sunday Times, Independent, Cosmopolitan, BBC Good Food and Refinery29. She writes for The Media Leader each month.