What ONA 2026 Taught Me About AI, Newsrooms, and the Leadership Gap

I went to my first Online News Association conference four years ago, and I came back energized. Conferences can make you feel this way — like the ideas alone are enough to change something.

Then life took over. Deadlines and the pace of a daily newsroom became overwhelming. ONA became something I kept meaning to get back to.

This year I finally did. What I found at ONA in Chicago, not surprisingly, is that the conversation has changed dramatically: from whether to use AI to something more difficult, how to use it without breaking the human systems that make journalism work.

That’s the right question. And it’s overdue.

Four years ago, AI in the newsroom was peripheral: something experimental, easy to ignore. This year, it was the center of gravity. Every room, every hallway, every lunch. The "should we use AI" debate is over. The question now is how.

From Evangelism to Pragmatism

What stood out most wasn’t any single tool or framework, though there were plenty worth bringing home. It was the tone.

There’s less evangelism now and more pragmatism. People are talking about AI the way they talk about any other production tool — what does it actually do well, where does it fall down, and who’s responsible when it gets something wrong?

That shift matters. AI isn’t the hard part anymore. Alignment is.

The 80/20 Reality

The Associated Press put a number on something many newsrooms are starting to feel: 80% of a process can be automated, but at least 20% has to remain human.

Editing. Fact-checking. Judgment.

That ratio isn’t just technical — it’s a policy position. And having something that concrete to hand to a skeptical newsroom is more useful than any demo.

The Most Useful Work Isn’t Flashy

Some of the best ideas came from smaller organizations doing unglamorous work.

City Bureau is using generative AI to synthesize civic meeting notes. Sahan Journal has built custom GPTs to create personalized media kits for sales calls — not an editorial use case, a revenue one, and it works.

And then there was El Vocero de Puerto Rico’s cautionary story about an AI agent they’d named Victor. Over time, Victor started producing bad data and sending emails nobody had asked for.

Victor got fired.

The lesson isn’t that AI fails. It’s that you have to keep managing it long after launch, which is not how most organizations treat a tool once it’s deployed.

Culture Is the Bottleneck

The sessions on culture and leadership hit closest to home.

CNN talked about putting leaders visibly at the front of AI adoption — not as evangelists, but as practitioners. The fastest way to reduce fear in a newsroom is to show that the people above you are figuring it out too, not just mandating it from a distance.

Reuters has reorganized around cross-functional squads: editorial, product, engineering, and data science working together on specific problems instead of handing work off in sequence.

That’s not a workflow tweak. It’s a structural change. And it requires a level of trust most organizations don’t build quickly.

The Gap That’s Already Here

The tools are here, and they’re getting better quickly.

What’s lagging is the culture and leadership needed to use them well — and that’s not a technology problem. It’s a people problem. Which means it’s slower, harder, and ultimately more important than any product release.

One number from a Thomson Reuters Foundation study has stayed with me: 81% of journalists in the Global South already use AI daily or weekly. Only 13% operate under any formal newsroom policy.

That’s not a regional anomaly. It’s a preview of what happens when technology outruns leadership.

And right now, it is.