
ChatGPT is here but what does it mean for journalism?

Opinion

The bots have been a long time coming — whether they have been called that or not.

 

A key moment came as long ago as 1997, when IBM’s Deep Blue computer defeated the reigning world chess champion Garry Kasparov.

That, however, was a mathematical, pattern-recognition problem to solve. Computerised words, with all their nuances and concepts, were still a long way in the future.

Then five years ago, Pete Clifton, editor-in-chief of what is now PA Media, revealed he was experimenting with simple computer-generated news and sports stories.

Clifton, perfectly reasonably, was reassuring. There was no need for alarm: these would be very basic, mundane stories where journalistic skills were scarcely necessary, such as formulaic City results stories and the outcomes of very minor football matches.

No journalist jobs would be lost, and indeed this was a very positive development because journalists would be freed from boring, routine work to go out and chase real stories.

Fair enough, but wise heads noted at the time that this would hardly be the final destination of computer-generated articles.

What is ChatGPT and is it after my job?

Then suddenly — or perhaps not so suddenly — up pops ChatGPT from OpenAI, the latest $1bn disruptive venture to emerge from San Francisco and, not for the first time, one co-founded by Elon Musk.

Forget a few sports results, this appears to be the realisation of the long-predicted threat to the employment prospects of professionals everywhere including marketing and advertising executives — and journalists.

This breakthrough is genuinely scary. Want a well-argued opinion piece, a marketing pitch, maybe even an advertising campaign or an elegant academic essay? You can have it as close to instantly as makes no difference.

The implications are horrendous, or as Hugo Rifkind put it in The Times: they represent “a slow motion bomb going off under more professions than I could list.”

Of course Rifkind got the bot to try to write his opinion piece for him and gleefully reported that the results were so dull that even The Guardian probably wouldn’t print them.

When he fed it more detailed, personal information, Rifkind wasn’t quite so smug. Not something he would have wanted to put his name to, he concluded, but not so bad either.

Over at the Times leader-writing department they were also trying out the latest computer wunderkind. Could ChatGPT produce a persuasive, coherent Times leader on the pros and cons of itself?

The answer was indeed it could.

So what about the journalists?

The computer-generated piece warned that if its available data was biased this could lead to the dissemination of false or misleading information, “something we strive to avoid as a reputable newspaper.”

Use in editorial processes could potentially lead to a loss of jobs for human writers. This could have negative consequences for the employment prospects of journalists, the bot wrote.

“Woah! That’s quite enough of that, smartypants, we think we’ll stop you right there. No need for any more of that sort of talk, thanks very much,” concluded the real Times leader.

Alas it may not be so easy to laugh off this version of the future for very much longer.

In the past, lazy journalists who wanted to get into the pub as quickly as possible could phone in a few lead paragraphs and then utter the immortal words “Take in PA” for the rest of the story.

Ironically it is the comment writers such as Rifkind and the Times leader writers who seem to be first in the firing line here.

Original reporting or feature writing with original quotes will be more difficult to replicate.

However, unscrupulous journalists could try to pass off ChatGPT’s output as their own work, or submit instant ChatGPT copy with a little titivation and personalisation.

Academics are already talking about having to make students write essays under supervision to ensure they are their own work.

Will journalists also have to spend more time in the office to stop them relying on bots? And will commissioning editors find themselves inundated with endless computer-generated copy from freelances?

What if the day comes when the bots are not just much faster but much better than humans? Another, more advanced version of ChatGPT is coming next year, and even that will not be the end of the story.

Could it be that before long newspapers and magazines will start carrying guest columns from ChatGPT, although it might need a more appropriate byline?

Where does Elon Musk fit in to this?

Musk has claimed that his aim in buying Twitter was to help humanity, although he doesn’t seem to have made much of a success of such a mission so far.

Could the disruptive Musk, who would clearly like to wield political influence, now see in ChatGPT the opportunity to create a computer-generated news service hoovering up and repackaging digital information from around the globe without the direct involvement of a human brain?

The bot as editor, and not just an instantaneous word generator?

Apart from trying to peer into the not-so-distant future, is it possible that the technology that lies behind ChatGPT could be a powerful tool for investigative reporting, particularly in fields such as financial corruption?

Huge swathes of information could be processed in the search for previously unsuspected patterns.

Perhaps, but there are several obvious problems standing in the way of the march of the bots, one of them highlighted honestly in the beast’s attempt at writing a Times leader.

Where does the data come from and is it reliable? If it is not reliable then all that has happened is that a new machine has been created for the more efficient generation of fake news.

An earlier chatbot of the species developed by Microsoft, Tay, is said to have morphed rapidly into an ardent Holocaust denier.

Other major problems revolve around ownership and copyright. ChatGPT is nothing without the information it sucks up.

Interestingly, Musk began allowing the endless banks of Twitter information to be used as a research tool but has now suspended that access.

Presumably the closer ChatGPT moves to offering commercial paid-for services the more information creators will be looking for their cut.

Downstream there is also the additional problem of who owns the copyright in ChatGPT-generated articles and whether freelance journalists own anything to sell on to publications.

Despite understandable scepticism, there is a sense that with ChatGPT the world of media and information has taken a disturbing lurch forward, with unpredictable consequences.

We can only hope that humans manage to keep control of what could become a surge of ever more frighteningly intelligent bots.

In the meantime ChatGPT should be set the task of coming up with a more user-friendly name.


Raymond Snoddy is a media consultant, national newspaper columnist and former presenter of NewsWatch on BBC News. He writes for The Media Leader on Wednesdays.
