WHQR's Sunday Edition is a free weekly newsletter delivered every Sunday morning. You can sign up for Sunday Edition here.
Over the last year, we’ve had a lot of conversations in our newsroom about artificial intelligence — not just as a news story, but as part of the changing landscape of the news industry itself. There’s a lot to say here, much of it said better by experts than by me, but I wanted to give you a sense of the discussions we’ve been having and hopefully hear your thoughts on the issue.
A quick note: I’m not talking about general AI – a true artificial consciousness, which I think is a long way off (but please tell me if you disagree!) – but machine learning software, including large language models like ChatGPT and image-creation programs like DALL-E. These programs typically take simple prompts – like, "write an article explaining inflation," or "create an image of a scuba-diving panda bear" – and produce a written response or image.
A top concern here is that AI will be used to replace the work of human beings. The incentive is clear: recruiting and retaining talented creative people is a major cost in most industries. And that’s the nasty undercurrent to the excitement around AI – while we’re dreaming out loud about the future, we’re also talking about destroying people’s livelihoods. That issue played a major role in last year’s Hollywood strike, but it's a concern I hear from all types of creative people: artists, writers, performers, programmers, and, yes, journalists.
Right now, a lot of what AI produces is glitchy. It confidently presents wrong answers and distorted images (anthropomorphized in the term “hallucinations”). Because of that, much of the worry centers on the future – on what AI might be capable of once those glitches get ironed out.
But others – including me – are worried about how it’s already being used, not just for spammy blog posts and social media bots, but in ostensibly reputable outlets.
Last year, Futurism and The Verge reported on what appeared to be less-than-transparent use of AI by Sports Illustrated and Reviewed (a product-review site reportedly shut down just this week by Gannett, its parent company). In both cases, a third-party content producer was involved.
For many journalists I talk to, this is the worst-case situation: a profit-driven push by media companies to replace journalists with outsourced chatbots.
But there’s another, messier scenario: young reporters – facing crushing deadlines and story quotas, and lacking support and mentorship from editors – turning to AI to help keep up with their employer’s demand for fresh ‘content.’
Many editors will catch AI’s tell-tale glitches and off-kilter delivery, and reporters will probably face consequences on par with those for fabricating and plagiarizing (since AI is, in truth, a bit of both). But some editors – either overworked to distraction, or turning a blind eye in desperation – won’t catch the AI copy before it gets published. In terms of accuracy, accountability, and public trust, the risks are significant.
NPR’s guidance on generative AI, published late last year, touches on both the top-down and bottom-up scenarios.
The guidance leads with a promise: “What audiences see and hear from NPR has always been, and always will be, the product of human beings.”
I believe NPR means this earnestly. But it's also worth noting that SAG-AFTRA, the union representing Hollywood actors, NPR employees, and the staff of numerous local public media stations (including WFAE and WHQR here in North Carolina), has pushed for robust contractual protections to ensure NPR lives up to that promise. All of which is to say, I don’t foresee Scott Simon getting surreptitiously swapped out for a synthetic performance.
NPR’s guidance also puts a lot of personal responsibility on journalists and editors who want to use AI in their work – and holds them accountable for accurate, impartial reporting that’s free of plagiarism.
But it’s worth noting that NPR, which prides itself on being “a leader in finding creative uses for new technology,” leaves the door open to a lot of potential AI uses – including machine learning tools that don’t even exist yet.
Ok. So, what could AI be used for?
Well, at the moment our main AI tool here in the WHQR newsroom is transcription software called Otter.ai. Because one of us always goes behind Otter and double-checks the quotes we use in our written pieces, I can attest that the program has gotten better and better at transcribing over my four years at WHQR (although it still struggles comically with a true southern drawl).
It’s a great tool, and it’s saved our news team collectively hundreds of hours – but it’s not perfect. Its newest feature, launched over the last year, provides a synopsis of a recorded conversation. As an example, I read this piece aloud and fed a recording of it to Otter. Here’s the unedited synopsis it produced:
The conversation centers on the impact of artificial intelligence (AI) on the news industry. Speaker 1 discusses the potential for AI to replace human jobs, citing examples like Sports Illustrated and Gannett's use of AI for content generation. Concerns include AI's current glitches and its potential misuse. NPR's ethical guidelines emphasize human-generated content, but acknowledge the potential for AI tools like transcription software. The speaker balances skepticism with acknowledgment of AI's benefits, such as time-saving transcription tools. They stress the importance of maintaining journalistic integrity and humanity, while acknowledging the financial constraints faced by nonprofit news organizations.
So, I think it does a pretty decent job with a shorter monologue. But give it a longer, round-table discussion and it will hallucinate names, confuse ideas, and erase nuance.
Would I use similar software to generate reader-facing content, as Gannett has done with the AI-generated bullet points it uses to summarize articles? Candidly, I don’t love the idea. But Gannett journalists reportedly review the bullet points before publication, and I admit it could be a handy tool – if journalists are given the time and bandwidth to use it properly. In a rush to push out content, it could also become a weak link in the quality-control chain.
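For the technically curious, here’s a rough, simplified sketch of what that kind of bullet-point workflow might look like under the hood. To be clear, this isn’t Gannett’s actual system – it’s just an illustration, assuming a general-purpose language-model API (OpenAI’s, in this case), with the model name and helper functions invented for the example and the journalist’s review kept as the final, human step.

```python
# A minimal sketch of AI-assisted article summarization with a human review step.
# Assumes the OpenAI Python SDK and an API key in the environment; the model name
# and workflow are illustrative, not any outlet's actual system.
from openai import OpenAI

client = OpenAI()

def draft_bullet_points(article_text: str, max_bullets: int = 3) -> list[str]:
    """Ask a language model for draft bullet points summarizing an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice for this sketch
        messages=[
            {"role": "system",
             "content": f"Summarize the article in at most {max_bullets} short, factual bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    text = response.choices[0].message.content
    # Split the reply into lines and strip any leading bullet characters.
    return [line.lstrip("-• ").strip() for line in text.splitlines() if line.strip()]

def review_and_publish(bullets: list[str]) -> list[str]:
    """The human step: a journalist keeps, rewrites, or drops each draft bullet."""
    approved = []
    for bullet in bullets:
        decision = input(f"Keep this bullet? [y/n/edit] {bullet!r}: ").strip().lower()
        if decision == "y":
            approved.append(bullet)
        elif decision == "edit":
            approved.append(input("Rewrite it: ").strip())
        # "n" drops the bullet entirely
    return approved
```

The point of the sketch is where the human sits: the model only drafts, and nothing reaches readers without a journalist signing off on every line.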
Here’s another example: there’s AI tech being pioneered that could simultaneously watch dozens of local government meetings online, create searchable summaries, and provide them as reports to journalists.
Right now, we cover what we can, but we also rely heavily on the public to send tips to our newsroom. So, ethically, maybe there is a place for an AI algorithm flagging keywords and phrases and sounding the alarm when it finds them – alongside the loyal listener who sends us an email saying “hey, town council did something odd last week, you should check it out.”
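To make that a little more concrete, here’s a bare-bones sketch of what keyword flagging could look like, assuming meeting transcripts already exist as plain text files. The folder name and watchlist are made up for illustration – a real system would need far more care (and far more skepticism).

```python
# A bare-bones sketch of keyword flagging across local-government meeting transcripts.
# File paths and keywords are illustrative; assumes transcripts are plain-text files.
from pathlib import Path

WATCHLIST = ["rezoning", "eminent domain", "budget amendment", "closed session"]

def flag_transcripts(transcript_dir: str, context_chars: int = 120) -> list[dict]:
    """Scan every transcript in a folder and return hits with surrounding context."""
    alerts = []
    for path in sorted(Path(transcript_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8").lower()
        for keyword in WATCHLIST:
            start = text.find(keyword)
            while start != -1:
                # Grab a little context on either side of the match for the reporter.
                snippet = text[max(0, start - context_chars): start + len(keyword) + context_chars]
                alerts.append({"meeting": path.name, "keyword": keyword, "context": snippet.strip()})
                start = text.find(keyword, start + 1)
    return alerts

if __name__ == "__main__":
    for alert in flag_transcripts("transcripts"):
        print(f"[{alert['meeting']}] flagged '{alert['keyword']}': ...{alert['context']}...")
```

Notice that all this does is point a human toward something worth reading – it doesn’t write a word of the story.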
As a tool, AI like this shows potential – and as a substitute for a reporter, it gives me pause. There are probably a lot of tools out there that fall into this category, and I think we’ll see a lot more of them in the coming years.
Now, I have voiced plenty of concerns about AI – in this newsletter, on social media, and elsewhere – but I’m not anti-technology. I love, and rely on, technology as much as anyone, maybe more, trust me. I’m certainly not a Luddite, in the modern sense of the word.
It’s worth remembering, though, that the original Luddites who fought automation in the English textile factories of the early 19th century weren’t necessarily against innovation or progress: they opposed lower wages and reduced quality. Over two centuries later, I think those are still valid concerns.
At the same time, we’re not a publicly traded company. Like NPR, we’re a non-profit news organization. We do have a fiscal responsibility, but it’s not to Wall Street or investors — it’s to you, to make sure your financial support goes as far as it can in pursuit of our journalistic mission.
And the fact is, while I have amazing colleagues who work very hard, and WHQR has incredibly supportive listeners and readers, we don’t have the resources to cover every important story. And that’s not just WHQR. That’s stations all across the country – and NPR itself.
So, that’s where we are. We have to be open to ways to do more with less – but not at the expense of quality (or, frankly, our humanity).
I can’t tell you for sure what comes next. AI might be a bubble that bursts. It might spawn Skynet, and we’ll be running from cybernetic killing machines with Austrian accents, longing for the days when AI still failed the Turing test.
What I suspect is that we’ll see many media outlets incorporate AI. Some will go whole-hog in feckless attempts to pad their bottom lines. Others will do it feeling they have no other choice if they want to stay afloat. Maybe there will be boutique outlets that swear off machine learning completely, producing artisanal, free-range, AI-free reporting. And, hopefully, there will also be a solid number of outlets that use AI – but cautiously, responsibly, and above all transparently.
Like NPR, I’m in favor of “keeping humans in control,” and that includes you, who deserve to have your thoughts, hopes, and concerns about AI heard. Drop me a line. I promise you won’t get a chatbot.