‘The greatest heist of IP the world has ever seen’: publishers push for AI regulation

“We’re all trying to catch up to the greatest heist of intellectual property the world has ever seen.”

At a panel during the Labour Party Conference in Liverpool on Tuesday, Matt Rogerson, Guardian News and Media’s director of public policy, expressed dismay over how publishers have been treated by generative AI developers, and stressed the need for government and regulators to take action to support publishers and protect audiences from misinformation.

“This technology is not reliable,” Rogerson said. “Generative AI is a misbranding of the technology. It is not intelligent. It is a tool that extracts and crawls lots of information, usually from journalists and without a commercial licence to do so, and then regurgitates that information in response to a query. It is not intelligent. It is not a journalist. It is not a human.”

In recent weeks, a number of publishers, including the BBC, have moved to restrict generative AI companies’ access to their webpages, blocking their crawlers from harvesting content for training.

In a blog post on 5 October, BBC director of nations Rhodri Talfan Davies explained, “We do not believe the current ‘scraping’ of BBC data without our permission in order to train Gen AI models is in the public interest and we want to agree [to] a more structured and sustainable approach with technology companies.”

The BBC joined The New York Times, CNN, Reuters, and others in preventing generative AI web crawlers from accessing its copyrighted material. The decision came as ChatGPT developer OpenAI announced its model would begin browsing up-to-date web pages and articles (ChatGPT’s knowledge was previously limited to training data from before September 2021).
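For context, this kind of blocking is typically implemented through a site’s robots.txt file. The sketch below, which assumes OpenAI’s published crawler user agent “GPTBot” and a placeholder example.com domain, shows in Python how a compliant crawler would honour such a directive; it is an illustration, not any specific publisher’s actual configuration.

    # A minimal sketch of the robots.txt opt-out mechanism, assuming the
    # "GPTBot" user agent that OpenAI has published for its crawler.
    from urllib.robotparser import RobotFileParser

    # Illustrative robots.txt content (example.com is a placeholder domain):
    ROBOTS_TXT = """\
    User-agent: GPTBot
    Disallow: /
    """

    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())

    # A compliant GPTBot may not fetch any page on the site...
    print(parser.can_fetch("GPTBot", "https://example.com/news/article"))    # False
    # ...while crawlers with other user agents are unaffected by this rule.
    print(parser.can_fetch("Googlebot", "https://example.com/news/article"))  # True

The approach depends on crawlers voluntarily respecting the directive, which is part of why publishers quoted here are also pressing for regulation rather than relying on robots.txt alone.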

Rogerson agreed that publishers, as the owners of the IP, should have the right to decide who uses their content. He also urged lawmakers to create stricter copyright laws and called on generative AI companies to provide “full transparency around the downsides of AI”, including clarity that they do not produce journalism and warnings telling users to check the information they receive.

“Everything our journalists have written in the history of the digital publication has been used to train these tools without permission,” added Rogerson. “That is quite a scary phenomenon.”

The Labour Party Conference panel, which was hosted by news media trade body the News Media Association, also featured Daily Mirror political editor John Stevens, Labour MP and Shadow Department for Science, Innovation and Technology Minister Alex Davies-Jones, Tony Blair Institute executive director of policy Sam Sharps, disinformation-combatting tech startup Logically’s head of government affairs Henry Parker, and techUK head of policy, technology and innovation Laura Foster.

Sharps highlighted concern over the level of “convincing fakery” that generative AI models can produce, but warned that “the regulatory tools available to us in addressing these issues are complex and knotty to work through.”

From the standpoint of news outlets, AI thus poses a threat not just to their intellectual property, but also to their capacity to fact-check information quickly.

“Journalists have limited resources,” admitted Rogerson. “A lot of journalistic organisations are constrained in their resources nowadays [for] various reasons – changing business models and media habits, particularly acute at a local level.

“There are 650 constituencies, so 650 battlegrounds for AI to be misused. There aren’t 650 fact-checkers that are going to be able to address those small cases of disinformation.”

Ministers have warned that a “smoke alarm” is needed to head off the range of threats posed by AI, which include concerns over the misuse of AI systems to create bioweapons or engineer cyberattacks. Downing Street is currently finalising the agenda for the UK’s AI safety summit, to be held on 1 and 2 November at Bletchley Park, where Rishi Sunak’s advisers are trying to thrash out an agreement among world leaders on a statement warning about the risks of artificial intelligence.

Davies-Jones suggested that lawmakers should move to protect news publishers not just from AI, but also from social media platforms, through “strong competition laws and regulations and making sure the platforms are held accountable for what is being spread algorithmically on their platforms.”

She added that fair remuneration for trusted news publishers should be prioritised too, especially as AI, in her words, poses a direct threat to global democracy.

“There are 44 elections next year happening around the world,” said Davies-Jones. “It is going to be the biggest year for global democracy that there has been for generations, so this does pose a huge threat, not just in the UK, but around the world.”
