AI Ethics in Journalism: Beyond Human Baseline

The “human baseline” approach posits that an artificial intelligence succeeds ethically when its decision-making mirrors or marginally improves upon that of a competent human. In the classic “trolley problem,” this implies that if an AI can consistently choose the “lesser of two evils” with more precision than a panicked human, it has cleared the ethical bar.

However, as the media and journalism industry increasingly integrates generative AI and automated editorial systems, it is becoming clear that a “slightly better than human” standard is insufficient. In the context of information dissemination, a human-level baseline for AI is not a gold standard; it is a liability.

While comparing AI to the human baseline in moral dilemmas reveals the machine’s capacity for consistency, it fails to account for the unique accountability required in journalism.

Because audiences in 2026 are caught in a “breaking verification” crisis where trust is the ultimate currency, an AI that is merely “slightly better” than a biased human is ethically insufficient. To be truly ethical, AI in media must move beyond mimicking human choice to provide a level of transparency and evidentiary rigor that transcends a journalist’s capability.

Our newsrooms are facing a speed-versus-verification dilemma. The human baseline for a journalist is the trade-off between breaking the story first and being completely accurate. AI’s logic is fundamentally different: it shifts control from individual journalists to automated systems optimized for engagement and scalability. Therefore, an AI that performs “slightly better” than a journalist at producing content quickly may be ethically inferior if its underlying logic lacks the transparency and evidentiary rigor that define journalistic integrity.

Because so much information is published in so many forms across so many platforms, audiences have a difficult time distinguishing fact from fiction.

“‘Breaking verification’ will replace ‘breaking news’ in 2026, and trust will decide who survives,” according to Vinay Sarawagi, co-founder and CEO of The Media GCC.

Audiences need to see evidence and sources backing up what they encounter online, because seeing is no longer believing. If AI only matches humans at spotting fakes, it is not enough. To solve the trust crisis, AI must be far better at citing its sources.

In 2005, Wallach and Allen argued that the principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. They distinguish between operational morality, in which an AI simply follows pre-programmed human safety rules, and functional morality, in which a system can independently navigate moral dilemmas. In journalism, an AI that merely mirrors an editor’s baseline choices is operating within a limited framework. If the media is to serve the public’s best interests, a journalist AI must move toward a functional morality that transcends basic human instinct and provides the transparency and accountability the public expects.

From a strategic standpoint, “slightly better” is a recipe for disaster. If AI-generated content results in a libel suit or negatively impacts a company’s stock price, the defense that the AI was slightly more accurate than an average human is a losing argument. As the media shifts into what is being termed the “Answer Economy,” the traditional value proposition of a newsroom is being disrupted. When AI models synthesize reports into a single summary, the value of a news organization is no longer just the “answer” or the scoop itself, but the auditable trail of evidence that allows that answer to be verified (Seo Ai Club, 2026). If an AI only meets the human baseline for producing a plausible-sounding summary without providing this rigorous, machine-readable proof of its sources, it fails to meet the ethical demands of a 2026 audience.

Note: This is an essay originally written for a course on AI and business strategy at Johns Hopkins University.

References

Wallach, Wendell, and Colin Allen. “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches.” Ethics and Information Technology 7, no. 3 (September 2005): 149–155. https://link.springer.com/article/10.1007/s10676-006-0004-4.

Li, Haoran, et al. “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis.” Journalism and Media 6, no. 3 (August 2025): 105. https://www.mdpi.com/2673-5172/6/3/105.

Mee, S., et al. “Moral judgments of human vs. AI agents in moral dilemmas.” Scientific Reports 13, no. 1 (February 2023). https://pmc.ncbi.nlm.nih.gov/articles/PMC9951994/.

Simon, Felix. “How AI reshapes editorial authority in journalism.” Digital Content Next (June 2025).

Reuters Institute. “How will AI reshape the news in 2026? Forecasts by 17 experts around the world.” Reuters Institute for the Study of Journalism (January 2025).

Seo Ai Club. “The Answer Economy: A Comprehensive Analysis of Answer Engine Optimization Tracking Software and Strategic Market Leadership.” Seo Ai Club (January 2025).

Character vs Reputation: The True Measure of Success

I recently listened to an episode of Freakonomics Radio titled “If You’re Not Cheating, You’re Not Trying,” featuring an interview with disgraced cyclist Floyd Landis. The conversation eventually turned to John Wooden’s famous maxim: “Be more concerned with your character than your reputation, because your character is what you really are, while your reputation is merely what others think you are.”

Landis, perhaps unsurprisingly for a man whose career was defined by a massive deception, rejected Wooden’s idealism. He argued that in the “real world,” reputation is the only thing that functions. It’s the currency that buys you the contract, the sponsorship, and the adoration. To Landis, character is just a consolation prize you cling to once your reputation has been torched.

I understand his cynicism. But I fundamentally disagree with it.

Landis views reputation and character as two separate assets you can trade, like stocks. But Wooden’s point was deeper: Reputation is merely the shadow cast by character. You can manipulate the shadow for a while – stand in the right light, distort the angle, make yourself look larger than you are – but eventually, the sun moves. The shadow always snaps back to the reality of the object casting it.

In my media career, I’ve seen this physics play out repeatedly. We live in an industry obsessed with the “shadow” – the ratings, the viral potential, the race to be first. I’m certainly not perfect; I’ve made mistakes in my career. But I’ve learned that a news organization’s reputation isn’t built on its speed; it’s built on its credibility. It’s built on the boring, invisible machinery of character: fact-checking, sourcing, and the refusal to cut corners when no one is watching.

A journalist can fake their way to a scoop once. They can build a reputation for being “first.” But if that reputation isn’t grounded in the character trait of accuracy, the fall is inevitable. When the correction comes – and it always does – the reputation doesn’t just dip; it evaporates.

Consider the case of Janet Cooke, a Washington Post writer whose heartbreaking profile of an 8-year-old heroin addict won a Pulitzer Prize. The unraveling of her reputation began, ironically, with a celebration of it.

Her former employer, the Toledo Blade, initially rushed to publish a tribute to their former staffer. But the tone shifted when editors compared the Associated Press biography—based on Cooke’s own resume—against their internal personnel files. While Cooke claimed to be a magna cum laude Vassar graduate with a master’s degree, the Blade’s records told the truth: she had only attended Vassar for a year and held a standard bachelor’s degree. Because the character didn’t match the reputation, the entire structure collapsed. Her prize-winning article, “Jimmy’s World,” was exposed as a complete lie, and the Pulitzer was returned.

I’m seeing a similar tension now as I study the business strategy and ethics of Artificial Intelligence. The temptation in the AI space is to let the “reputation” of the technology—the hype, the valuation, the promise of an AI future—outpace the “character” of the build (safety, bias, alignment).

Landis would argue that we should ride the hype wave because “that’s how the world treats you.” But history suggests that tech bubbles built on reputation without underlying substance always burst. The companies that last are the ones where the internal reality matches the external promise.

Warren Buffett famously said, “It takes 20 years to build a reputation and five minutes to ruin it.” Unlike Landis, Buffett doesn’t see reputation as a mask to wear; he sees it as a fragile byproduct of integrity.

Bob Iger, the retiring CEO of Disney, reinforces this in his memoir, The Ride of a Lifetime: “True authority and true leadership come from knowing who you are and not pretending to be anything else.” For Iger, character and decency are not merely “soft skills,” but strategic advantages that define a company’s success.

Floyd Landis believes he was punished for playing the game. I would argue he was punished for mistaking the shadow for the man. Ultimately, the spotlight always falls on a person’s character, not their reputation.