Why AI is different now: we’re the ones being trained, not the machines
Opinion: 100% Media 0% Nonsense
Serious media investment and thinking is going into AI for all sorts of use cases. The tech is not particularly new but, critically, our culture is.
I’m unfortunately old enough to remember a time when no-one did online dating, even though we all had the internet and the same primal urges that plague us today.
“Ew, meet someone from the internet? What if they’re a creep? Or worse, what if you’re a creep?” were the common concerns of a simpler mid-noughties time.
But now you’re a weirdo if you don’t find love through your phone. We basically order romance in the same way that we order light bulbs or chicken chow mein: a magical app which knows what we like (or who we like) and, depending on your phone software and how diligent you are about privacy settings, a whole lot more personal stuff than you may realise…
Yes, online dating became more convenient and intimate as the internet became something we did on the train and in the bathroom, not just on a big desk with a big screen. But what has really changed is the culture. Maybe it was the movie You’ve Got Mail or decades of successful app marketing fuelled by cheap venture-capital money, but somewhere along the line it became okay to swipe right and see what happens.
Culture is now ripe for AI
This all feels familiar as I have countless (and often breathless) conversations with media owners and media agencies about how AI is going to revolutionise everything.
One UK media agency group told me last week that it is actively looking at generative AI to transform how it creates “media insights” for marketing activity. One executive told tales of how planners would be able to use a platform like ChatGPT to mine thousands of academic studies or relevant case studies in order to generate the central drivers of media strategy.
The Media Leader’s new tech correspondent, Ahmed El Kady, reported on Friday that NBC’s streaming platform Peacock is developing AI tools to manipulate old content for product placement purposes. This is essentially building on what was popularly called “deepfakes” in recent years — the technology is now so good that it’s likely possible to make it look like the characters from Friends are wearing clothes made by whatever sporting apparel brand wants to pay for the privilege.
And, of course, AI will replace journalists because all we do is lightly rewrite basic information and any old dimwit machine can do it. (Well, yes, some of us do that far too often. Especially in the advertising trade press. They know who they are.)
What has supercharged this collective awakening over use cases of AI is ChatGPT, the AI chatbot launched by the company OpenAI last November. OpenAI is now valued at $29bn after gaining a huge amount of attention for its ability to produce detailed and articulate responses to a great number of written tasks, from academic essay writing to joke creation.
Once again, the tech is not really groundbreaking. As the ad fraud researcher Dr Augustine Fou has written recently, many of the precursors to the algorithms behind ChatGPT have been used for years in the creation of fake websites.
Incidentally, expect there to be even more trash clogging up the internet thanks to AI. An alarming report from Digiday earlier this month explained how some specialist publishers are trying to essentially game SEO by producing loads of low-value websites full of keyword-laden drivel. Sooner or later, Google and Microsoft should consider whether their algorithms will account for web pages being mass produced by AI. But then I won’t hold my breath, given the world’s biggest developers of AI tech are… Google and Microsoft.
Why AI really matters now
Rather than the tech reaching a landmark moment, it’s our cultural reaction to AI that has changed. We’ve reached a cultural point in media where we’ve not only accepted AI, but we’ve become active participants in training AI.
This is important. You will, at some point, have been shown an example of how ChatGPT has done something rubbish. For example, I asked ChatGPT to write a biography of two of our reporters and it said Ella Sagar and Jack Benjamin had both won Pulitzer Prizes. As talented as they are, they have won no such award (yet).
Share an anecdote like this on LinkedIn and you’re likely to collect a deluge of responses from clever people advising you how to refine what you ask ChatGPT, or that it’s the data set that is the issue, or that Microsoft’s is better or that Meta’s is better.
The point is that I’m writing about it, you’re thinking about it, and serious money is going into developing it across large swaths of advertising and media.
It’s not so much that we’re training AI to become more effective, so it can make us all richer, or smarter, or render us useless, or whatever the actual point of this technology is meant to be.
It’s that we are now being trained by AI to use AI more effectively. We, the media professionals, are the ones being taught how to think about AI philosophically and how to use AI practically.
I’m sure our old friends, the dating apps, are busily building profiles for AI that can solicit humans for “companionships”, now that they’ve cornered the market in human-to-human matchmaking.
Could a human and an artificial intelligence live happily ever after? Even that’s not a new idea…
Omar Oakes is editor of The Media Leader. 100% Media 0% Nonsense is a weekly column about the state of media and advertising. Make sure you sign up to our daily newsletter to get this column in your inbox every Monday.