The content AI wave is coming – what will the ethics behind it look like?
The Media Leader Interview
Nick Duncan, founder of ContentBot, speaks to The Media Leader about the exciting growth of content AI—and its ethical implications.
Handwriting. Printing presses. Typewriters. Computers.
As writing technology evolves, so too does our efficiency.
Content AI, machine-powered, humanlike copywriting tools, has already begun to attract interest in the media industry. Publications, including The Associated Press, have been using AI to generate basic sports and business content over the past several years.
In more recent months, investors have poured $10m of financing into copywriting AI programs like Copysmith and Copy.ai to help expand their usage among marketers. Given that the AI in media and entertainment market is expected to reach a value of $99.48bn by 2030, such investments are likely only the beginning.
But with the rush to utilize this new tool comes the need to be vigilant over its potential ethical implications.
In a wide-ranging interview, The Media Leader spoke to the first company leader to push for ‘ethical’ use of AI content writers, whose approach to content moderation was considered so unpalatable by some that they left to join other, more permissive platforms.
‘You have to edit that thing’
For Nick Duncan, the founder of ContentBot, a content AI that helps startups, marketers, “SEO peeps”, mom-and-pop companies, and bloggers create written content from just a few short inputs, AI is simply the next step in a technological evolution that will make writers’ jobs easier.
Duncan, speaking to The Media Leader over Zoom from Johannesburg, South Africa, described just how much AI technology has progressed.
“We can get it to almost write good content. […] [T]hrow it together with a couple of other tools and we actually got quite a solid product that can do a lot of your creative work for you.”
Duncan, who markets his products to small-to-medium-sized businesses and their leaders, worked for years in digital content marketing and search engine optimization.
A self-described former “jack-of-all-trades in digital marketing”, he knows just how useful AI can be in solving what he calls the “blank page problem” of struggling to generate ideas for good copy.
“It can write your entire article, but that’s not suggested,” said Duncan, noting that AI isn’t a true substitute for a person behind the computer. “We can give you blog topic ideas, we can give you blog intros, we can give you a blog outline. We can create paragraphs for you and lists for you—whatever the case.
“But it’s really a game about you putting it together in a way that makes sense for your readers at the end of the day.”
ContentBot also has a prototype AI press release generator that can draft out 300 words based on basic inputs. “[It] will make up quotes and will say, ‘This product’s been launched’, and whatever context you give it, it’ll actually write it in a very press release-type way.”
“But you have to edit that thing”, Duncan warned.
As with any AI, the higher the quality of the inputs it receives, the better the result. But given that it is an emerging technology, many marketers and creatives have not yet been adequately trained in how best to use AI in their workflow.
“We also have people that sit there and they put ‘casino chips’ into the input and they expect it to talk about something else—they don’t realize you need to kind of lead the AI in the right direction. And they get angry with the AI because it hasn’t done what it should do.
“I think there’s a lot of education that needs to happen in the market still.”
‘We make sure that things are as solid as can be’
As Duncan says, AI can have “terrible effects” despite its incredible usefulness. Spam is a legitimate concern, and one could easily imagine a world where bad actors misuse AI to quickly spread massive amounts of disinformation, causing a further decrease in trust in online media.
ContentBot is attempting to lead on the ethical use of AI. Though it is just a team of six, the company focuses considerably on moderating content.
Users are barred from using the AI to write medical, financial, or legal content, among other topic areas, unless they can prove that they have a degree in the field or are otherwise an expert who can adequately fact-check and edit the content the AI has produced.
“[The AI] will say there’s a cure for diabetes—just eat seven teaspoons of sugar a day. It’ll make stuff up, and that’s a problem,” says Duncan.
He describes how people have “really tried” to misuse their AI, but by developing a responsibility program with OpenAI, they have created “a sort of honor system” on their own content bot. As soon as anyone types anything into the AI’s input field, it automatically gets categorized using a separate AI tool. Of over a thousand different tags, around 200 or so are automatically flagged (e.g., if it discusses violence, death, health topics, etc.). If flagged, a warning email is sent automatically to the user to let them know they are not allowed to use the AI for such content, and if the user continues to try anyway, the account is suspended.
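The flow Duncan describes can be pictured as a simple pipeline: every prompt is auto-tagged by a separate classifier, a subset of tags is treated as flagged, a first offence triggers an automatic warning email, and a repeat offence suspends the account. The sketch below is purely illustrative and is not ContentBot's actual code; every name in it (`categorize`, `FLAGGED_TAGS`, `moderate`) is a hypothetical stand-in, and the keyword matcher stands in for the separate tagging AI.

```python
# Illustrative sketch of the "honor system" moderation flow described
# above. Not ContentBot's actual implementation; all names are invented.

FLAGGED_TAGS = {"violence", "death", "health"}  # stand-in for the ~200 flagged tags


def categorize(prompt: str) -> set[str]:
    """Stand-in for the separate tagging AI: crude keyword matching."""
    keywords = {
        "cure": "health",
        "diabetes": "health",
        "gun": "violence",
        "kill": "death",
    }
    return {tag for word, tag in keywords.items() if word in prompt.lower()}


def moderate(prompt: str, user: dict) -> str:
    """Return the action taken for one prompt from one user account."""
    if user.get("suspended"):
        return "blocked"
    if categorize(prompt) & FLAGGED_TAGS:
        if user.get("warned"):
            user["suspended"] = True  # repeat offence: suspend the account
            return "suspended"
        user["warned"] = True  # first offence: automatic warning email
        return "warned"
    return "allowed"
```

Run against a fresh account, a benign prompt passes, a flagged prompt draws a warning, and repeating it suspends the account, mirroring the escalation Duncan describes.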
ContentBot therefore does collect data on its users, but selling that data is not part of its business plan (it makes money through subscriptions, which it tailors to different global economies in an attempt at price equity), and the only individuals with access to granular user data are Duncan himself and ContentBot’s chief technology officer.
“We try to incorporate GDPR principles throughout the company,” says Duncan. “We had to as a result of our other products, so we kind of just do it naturally. Security is very important for us, we’ve got independent security researchers working with us as well, finding loopholes in the system and all that, so we make sure that things are as solid as can be.”
‘Reactively dealing with these issues as they come up’
Though content moderation is a big job—just ask any major social media company—the small team at ContentBot is currently able to keep up, because requests from its clientele to be cleared for flagged content are relatively rare. But one could imagine that, as AI continues to develop and becomes a more commonly used tool, such content moderation issues will become overwhelmingly salient.
Such concerns have been highlighted in recent weeks after Meta launched its new AI chatbot, BlenderBot 3, which, upon testing by the general public, began spouting far-right and antisemitic conspiracy theories.
AI image generators may also pose considerable risks to media trust. Though DALL-E, OpenAI’s text-to-image AI system, currently disallows users from creating fake images of public figures, among other potentially toxic content, a new startup, Stability AI, is creating a similar AI without any moderation of how it is used.
“I think the technology has progressed so rapidly that we’re reactively dealing with these issues as they come up, because it’s sort of things that you don’t really think about when creating the product,” said Duncan. “You don’t think this AI that I’m creating is going to become racist. But the unfortunate truth is it’s racist because it’s built on the data that it has, which inherently has racist sections in it.”
Duncan advocates for the creation of an ethical standard or broader oversight council to create and enforce industry-wide ethical practices to ensure AI is not misused as it becomes more capable and popular.
Questions it could address include: should all online posts created with the aid of content AI be flagged, similar to sponsored content? At what point should they be flagged? Who should be allowed to use AI to create certain content? What penalties should exist for using AI to spread disinformation or misinformation?
The lack of industry-wide standards is concerning to Duncan.
He notes that while OpenAI is “great at ensuring content policy adherence”, other newly developed AI platforms aren’t.
“There’s quite a few AI platforms like OpenAI coming into existence at this point, but they’re still brand new, they still have to figure out what OpenAI figured out in terms of mass disinformation and misinformation.”
While it’s almost impossible to predict the long-term development of such cutting-edge technology, Duncan believes “there needs to be some focus on it in the sense of, ‘Hey, everyone, let’s start looking at this now so that in a year’s time we’re not sitting with a major issue’. And we’re not sitting with AI content writers that are creating millions of blog posts and destroying the blogging industry overnight”.
“I honestly think we’re the only ones worried about what people are doing in AI, though I can’t speak for other companies,” continues Duncan.
“We were the first that enforced the ethical use of content writing in the AI space, making sure you don’t write about certain topics. And that actually pissed a lot of people off that were using our platform, and they left to go to other competitors. So, whether or not [said competitors] are doing [content moderation] now, I don’t know. […] I’m sure a lot of them are doing it and I’m sure a lot of them aren’t.”
‘Are we using it to augment or are we using it to just take over?’
How people and companies decide to use content AI could have significant effects on the marketing and content creation job markets.
While Duncan does not believe AI will ever completely replace content marketers and that a human will always need to be “in the loop to check the style, the tone, the direction of the article”, much of the work content AI replaces is currently done by interns and junior content marketers. More efficiency, while better for business, can lead to redundancies. The same goes for factory assembly lines and fast food restaurants incorporating machines into their workflow.
Of course, content AI is far from an all-encompassing solution. It is currently limited by the fact that it draws on internet content only up to a certain point in time, meaning that anything newly released, like “some weird cryptocurrency”, as Duncan describes, or other breaking news, would not be reflected in its output and therefore requires human input.
But having proactive conversations today over best practices is important to Duncan.
“[A]re we using it to augment or are we using it to just take over?” he asks. “I think if we’re using it to just take over that’s a problem. That should be monitored and standardized and watched.”
He adds that “if you’re using AI to just write rubbish content for you, you should be penalized” by the likes of Google, and that he thinks within the next year such penalization policies will be developed for those that misuse content AI.
‘Scary and exciting at the same time’
For ContentBot and other content AI companies that are sprouting up to partake in the AI gold rush, the growth potential is immense.
Duncan describes that ContentBot is just one of “thousands” of AI companies out there. “This is the type of product that is easy to create,” he says. “It’s difficult to perfect, but it’s easy to create”.
But according to Duncan, around 90% of those companies aren’t concerned with the ethical use of their inventions.
“They could have people create fake news on celebrities, or fake reviews, or mass disinformation. Who knows what’s going on out there at the moment—it’s quite a scary landscape.”
One of ContentBot’s main goals for the next year: education. Even without organized development of industry-wide oversight and standards, Duncan says: “We have to educate the market on how to use it effectively and ethically”.
“[W]e’re only really just scratching the surface of the market at this point. […] But I think it’s gonna come quicker than I think most people think. And that’s scary and exciting at the same time.”