How True: What does a machine-enabled news world mean for journalists and readers?

Reflections from the 2018 GEN Summit - The challenges and ethics of AI integration into journalism


By Reg Chua, Reuters Executive Editor, Editorial Operations, Data & Innovation

Reg Chua speaking at the GEN Summit

We take a lot of things on faith – that buildings are engineered not to fall down, that planes will stay in the air, or that tomorrow’s weather will likely be what the forecast says it is. (Well, maybe not so much that last one.)

But the point is, we take all these things for granted, partly because we have faith in the people who designed and built them, and partly because they’ve been around long enough for us to get comfortable with their track record – of not falling down, of taking off successfully, or of being accurate about thunderstorms tomorrow.

And so too, there’s been some measure of faith in the news business – less so now, to be sure, but still a core belief that, whatever its flaws, we don’t make things up, that editors have a sense of what matters to readers, and that most mistakes are honest errors made by busy people.

But what happens when we move into a much more machine-enabled news world, where artificial intelligence systems help process data to give journalists insights, where algorithms select which stories you might be interested in, and where a glitch in code can magnify an error across thousands of stories? How do we, as an industry, both police ourselves to prevent systematic machine errors and help the public navigate this brave new world?

That was one of the topics that came up in a wide-ranging discussion on “The Augmented Newsroom: How AI will impact the journalism we know” at the 2018 Global Editors Network Summit in Lisbon. On stage with Lisa Gibbs of the AP and Mar Gonzalez Franco and Ben Rudolph of Microsoft, we batted around the questions surrounding the integration of AI into journalism: the cost of the technology, how it can help deliver more and better news to historically underserved communities, and the likely arms race it will fuel with purveyors of disinformation.

All really interesting topics, but the biggest open area was the ethics of AI in journalism – how much disclosure there should be about the systems that helped power a story or made it more prominent in someone’s news feed; how new forms of corrections should be developed to deal with algorithmic or coding errors; how humans and machines should interact to write and verify stories; and so on. For example, Reuters’ new Lynx Insight system is focused mostly on assisting journalists rather than on expanding the universe of machine-generated content; the idea is that humans working with machines will yield better outcomes than either working alone. This isn’t rocket science, but it’s important that, as an industry, we develop a shared set of best practices that we’ve examined and thought about.

An apt analogy is the shared understanding that photographers and photo editors have about how much color correction and cropping of images is acceptable, and when a line has been crossed into outright manipulation. To be sure, there are disagreements at the boundaries of the discussion, but by and large professional photojournalists agree on the broad principles and practices.

And so too we’ll have to evolve similar rules and practices for using AI in the newsroom. The good news is that a number of smart journalists and academics are already jumping on the topic, and we’ll be aiming to convene discussions about the subject soon.

And then maybe, sooner rather than later, AI in news will fall much more into the buildings-generally-don’t-fall-down category of things that we take for granted.
