
Digital strategist Nic Newman talks AI and journalism

ChatGPT, Midjourney and co. are poised to transform content production. We spoke to Nic Newman from the Reuters Institute about automation, virtual reporters and the risks associated with artificial intelligence.

An illustration created using Midjourney, an independent AI art generator that turns text-based prompts into images. Image: Midjourney

Nic Newman is no stranger to managing technological disruption at media companies. The digital strategist and former journalist played a key role in shaping the BBC's internet services for more than a decade. For 11 years running, he has also been the lead author of the Digital News Report, widely considered the most comprehensive annual survey of news consumption worldwide.

Currently, the 62-year-old is a Senior Research Associate at the Reuters Institute for the Study of Journalism at Oxford University. In this interview, Newman shares his perspective on what the rise of generative AI products (technologies that generate words, images and other media on their own) means for journalism.

He also highlights how media companies can build trust with their audiences around artificial intelligence, and why digital twins can supercharge news personalization.

Reuters Institute's Nic Newman
Nic Newman believes that journalism on the whole stands to benefit from AI. Image: Reuters Institute

DW: In a recent report at the Reuters Institute, you said that generative AI "raises existential questions but also opens up a range of new possibilities" for journalism. What are your observations on and expectations of the impact of AI-powered chatbots and art generators?

Newman: First of all, all of this technology has been around for a while. Some journalists were experimenting sort of behind the scenes, but it didn't really see the light of day. What's different now is that these tools are suddenly useful and freely available, so you're starting to see real specific use cases for journalism. That has really led to this wow effect.

The technology has gotten a lot better, one example of which is ChatGPT. And it will keep improving: GPT-4, the next version of OpenAI's large language model, is rumored to already underpin Microsoft's revamped Bing search engine and Edge web browser. These interfaces enable chatbots to have genuinely useful applications.

The "existential questions" part really is about the fact that automation is coming. A whole load of things journalists currently do are repetitive processes that can be automated. This means we're going to have to rethink what it means to be a journalist. Because up until now, a part of journalism has been, for instance, transcribing interviews like this one. But now as we talk, Google is transcribing away using artificial intelligence. We might still need new skills to work out where it's going to make mistakes so we can do overrides, but broadly, AI will take care of it.

In terms of possibilities, there are just so many. One very important application addresses the massive challenge of fragmented audiences, with young and old people wanting different things and a plethora of available formats. AI offers journalists the possibility not only to create a story, but also to version it much more cheaply and efficiently than before. Moreover, it can make the content more relevant, personal and engaging for different people. So this breakthrough in artificial intelligence will really make good on the promise of personalized news, which we've been talking about for 20 years.

Journalists and bots working hand-in-hand

You mentioned spotting mistakes made by AIs. One of the widely reported early content experiments with AI chatbots saw US online media outlet CNET have an AI write at least 75 articles that contained all sorts of errors, explaining compound interest incorrectly, for example. What can we learn from this and from other noteworthy AI experiments in journalism?

The CNET example is a very good one because it shows the possibilities and the direction of travel. You can have an AI add context to stories, or – like CNET did – have it create how-tos to generate traffic. ChatGPT is basically able to do that incredibly well, but the difficulty is that its output can be plausibly wrong, which is what happened in the case of CNET. Now, that's going to get better. The other key thing is that you will be able to train these models on reliable content. ChatGPT is currently trained on thousands of sources, some of which are right, some of which are wrong.

Now, there's a lot of hype around ChatGPT, and rightfully so, but it's more like a showcase. In the future, I expect OpenAI, Google and other players to license their chatbots to newsrooms, which would pay for the services and train the AIs with their own data to make sure the chatbots are reliable and fit their own needs.

One of the problems journalists face right now is that a lot of effort and time still go into providing context or background for pieces of journalism. In the future, the system will do that for you, perhaps with the author having a manual override option. Examples are context boxes or summarization tools, such as bullet points at the top of articles. They can be quickly auto-generated and fact-checked by the author.
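
To make that concrete, here is a minimal sketch of how a newsroom might auto-draft the bullet points for such a summary box, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name, prompt wording and function name are illustrative assumptions, not any outlet's actual pipeline:

```python
# A minimal sketch of auto-drafting a bullet-point summary for an article.
# Model name and prompt are placeholders; an editor still fact-checks output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_context_box(article_text: str) -> str:
    """Draft three bullet points for the top of an article.
    A human editor reviews the result before publication."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a news sub-editor. Summarize the article as "
                    "three short, factual bullet points."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```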

How does this lead to personalized content?

AI can help reporters version their stories by working out who they want to show the bullet points or the context boxes to. So people who generally click on articles with context boxes will see more of them, and vice versa.
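
As a toy illustration of that click-based logic, the snippet below routes a reader to the story variant they tend to engage with; the 0.5 threshold and the click-count data model are assumptions for the sketch, not a described system:

```python
# A toy sketch of the click-based versioning heuristic described above.
def pick_story_version(variant_clicks: dict[str, int]) -> str:
    """Return which story variant to show a reader, based on which
    variants they have clicked on in the past."""
    total = sum(variant_clicks.values()) or 1  # avoid division by zero
    context_rate = variant_clicks.get("context_box", 0) / total
    return "context_box" if context_rate > 0.5 else "plain"

# A reader who mostly clicks stories with context boxes gets more of them:
print(pick_story_version({"context_box": 12, "plain": 3}))  # -> context_box
```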

The end game is to produce blocks of content with a bigger range to suit different audience needs. This is where AI picture generators like OpenAI's DALL-E, Midjourney and others will be valuable assets, too.

What are some innovative ways these AI image generators are already being used in journalism, and how will they be used in the future?

US start-up Semafor is a good example. Their video department does a lot of experimenting and innovating. One of their really interesting projects is a series called "Witness," for which they interviewed victims of Russia's invasion of Ukraine. In the absence of real footage, they then illustrated the interviews by using eyewitness accounts as text prompts for an AI image generator, combined with the style of an artist. The result is this really interesting film with quite fuzzy and cartoony images.

Staying on the visual side of generative AI, you're going to see AI-generated illustrations all over the place this year. We're already seeing them replace stock pictures from Unsplash or other providers at the top of articles.

In terms of audio-related applications, turning a text article into an audio article, or translating an audio piece into different languages, is really picking up steam. One use case is cloning journalists' voices: You can train an AI on the voice of your audience's favorite presenter and have the synthetic voice read out text articles, for example. A digital twin takes this one step further: A real correspondent has a virtual version that can answer questions via chatbots or virtual assistants like Alexa. And the real journalists will still do the most important work, such as anchoring TV news.

Another huge area is using artificial intelligence for inspiration. If I've got to do an interview but haven't got time to research it, I can ask my favorite chatbot for a few tips about what would make interesting questions, just like you (used to) do with a human research assistant. Another job AIs can do amazingly well is the work of sub-editors, or copy editors. When you put in a style guide, it will basically spot all the wrong commas, grammar mistakes and bad syntax. In the future, the role of the sub-editor will be less about line editing and more about managing text prompts. Moreover, AIs will do search engine optimization far better than any human SEO expert.
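
As a hedged sketch of that "AI sub-editor" idea, a house style guide can be embedded directly in the prompt sent to a chat model; the rules below are invented examples, not an actual newsroom's style guide:

```python
# Sketch: embed a style guide in a copy-editing prompt for any chat model.
# The rules and the sample sentence are illustrative assumptions.
STYLE_GUIDE = """\
- Use serial commas.
- Spell out numbers one through nine.
- Write headlines in sentence case.
"""

def build_copy_edit_prompt(raw_text: str) -> str:
    """Combine the style guide and raw copy into one instruction."""
    return (
        "You are a copy editor. Fix commas, grammar and syntax in the text "
        "below, strictly following this style guide:\n"
        f"{STYLE_GUIDE}\nText:\n{raw_text}"
    )

print(build_copy_edit_prompt("Tonight 7 Reporters Celebrate there Award"))
```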

Obviously, automation raises the question of how you signal it and how to be transparent about it. What's more, anybody can train a system to sound like a journalist's digital twin when it isn't one, and it can spout rubbish.

Bigger roles for both AI and humans

This brings us to this fear of unverified and manipulative content flooding the internet. What are the implications of massive amounts of synthetic media for fact-checking and for trust in journalism?

It's going to get even harder for journalists to spot deepfakes and all the rest of it. Of course, AI can also be used to spot fakes, so it's this sort of battle where AI is going to be employed both to create and to discover junk content. I think debunking will become much more important in the journalistic mix within newsrooms and news agencies. More money needs to go into it, and it will become much more augmented by AI tools that help fact-checkers do their jobs.

Secondly, I think platforms have got an even bigger problem. They need to bite the bullet and identify and promote more reliable sources, which requires getting much better at spotting and downranking falsehoods with their algorithms, as well as dedicating more staff and funding to debunking.

Thirdly, news organizations need to focus more on building personal relationships. Part of that is making your content more relevant with AI, but a lot of it is actually about helping users build deeper relationships with your brand by making your human talent stand out more. If we can build more direct connections between journalists and audiences, we can restore some of the trust that has been lost over the past decade in many countries.

Journalists are hard to replace

What role does transparency play in restoring and building trust around AI-generated content? Will we engage more in explaining the processes behind content creation?

It's going to genuinely be challenging, because every time a new technology comes along, it can take a long time to become literate in it, both on the production side and the consumer side. And in the meantime, a lot of bad stuff happens.

Recognizing the change and working out what policy you want to use to address it should come first. The next things you need to do are, obviously, transparent labeling and understanding the legal situation.

You also need to train and educate your reporters to get their buy-in. There is that deep worry that AI will replace people's jobs, but journalism is one of those professions AI will have a hard time replacing, partly because AIs don't know what just happened or what is about to happen.

Journalists need to focus more on the things machines cannot do. If there's breaking news in Ukraine, for example, AIs won't be of much help. But they will be great on the background stuff: "ChatGPT, how did we get here? Give me a timeline!"

That's the point: It'll give reporters more time to break news and analyze it in real time. We just need to make sure the chatbots are trained on the appropriate materials with the right oversight.