I Just Got My AI Certificate. Here’s What I Actually Learned.

To borrow a phrase from Howard Stern, I am something of a king of all media. Radio, television, digital — I’ve worked in all three, and I’ve had to learn the rules of each one from scratch.

It started in college, when I was fortunate to land a part-time gig as a DJ at Laser 104.1 in Allentown, Pennsylvania. I had no idea what I was doing. I figured it out. An internship at FOX 29 in Philadelphia got me into television, which turned into a career. Eight years in, I went back to school at Temple University to earn my MBA, while working overnight shifts producing the morning news at 6abc. I wasn’t the most rested student, but I graduated.

Then came the internet. Nobody handed me a manual for that either. I taught myself: reading voraciously, experimenting constantly, and talking to anyone who knew more than I did. That’s how I’ve always operated.

So when AI started reshaping the media business — not gradually, but all at once — I decided I wasn’t going to learn it from the sidelines. I enrolled in Johns Hopkins University’s AI for Business Strategy course. This week, I received my certificate of completion.

Before anyone rolls their eyes — yes, I know what “AI certificate” sounds like. It sounds like a LinkedIn flex. It isn’t that. It’s the result of months of real coursework: essays, lectures, readings from the World Economic Forum and MIT and McKinsey, and a final project where I built a full AI proposal for a school district from the ground up. Twelve weeks. Real work.

Here’s what I actually walked away with.


The thing nobody tells you about AI

The biggest surprise wasn’t the technology. It was realizing how little most business leaders — including me — understand about what AI actually does inside an organization.

We talk about AI as if it were a feature you add. A button you push. It’s not. AI doesn’t just automate tasks. It reorganizes how work gets done. The McKinsey framing that stuck with me: companies are moving toward “minimum viable organizations” — lean structures in which AI handles structured, repeatable work, and humans focus on oversight, judgment, and context.

That changes everything.


What the course actually covered

The curriculum was broader than I expected. We started with the AI landscape — the history, the current state, who the major players are and why. Then it got practical fast: how businesses are actually deploying AI, how to optimize it, and, critically, what can go wrong.

The week on AI bias and risk was the one that hit me hardest. In journalism, we already live inside the trust crisis. Audiences can’t tell what’s real anymore. An AI that performs “slightly better than a human” at spotting misinformation isn’t good enough — that was the core of an essay I wrote for the course. The bar for AI in media has to be higher than the human baseline, because the stakes of getting it wrong are higher.

We also covered generative AI in depth — not just what it is, but how to use it responsibly for actual business purposes. And the final weeks got into scaling AI projects and managing them at the enterprise level. What does it look like when you’re not just piloting something, but running it at scale across an organization?

The final project brought it all together. I built a full vendor proposal — a fictional AI company called EduAI Solutions — pitching an AI-powered learning platform to a real school district. Every section had to hold up: the executive summary, the implementation strategy, the data privacy compliance, the cost structure. It was the most useful assignment I’ve done in any course, because it forced me to think like someone responsible for the outcome, not just someone writing about it.


What this means for journalism

I came in thinking AI was something I needed to manage in my newsroom. I left understanding it’s something I need to lead through.

Two-thirds of U.S. newsrooms have already integrated AI into at least one workflow. The roles being created — AI Ethics Editors, Automated Content Managers, Data Journalists — are no longer niche. They’re becoming core. And the journalists who thrive won’t just be good storytellers. They’ll need data literacy, an understanding of how large language models work and where they fail, and the judgment to know when to trust the machine and when to override it.

That’s a different journalist from the one I trained to be. It’s the one I’m working to become.


ABL: Always Be Learning

Here’s the thing I’ve told younger journalists for years: the moment you think you’ve figured it out, you’re done. The industry moves too fast. The audience moves too fast. You have to stay a student.

Every transition in my career has required me to start over as a learner. Radio to TV. TV to digital. The people who get left behind in this business aren’t the ones who admit they don’t know something. They’re the ones who pretend they do.

I chose Johns Hopkins specifically because the course focuses on the big picture. Not “here’s how to prompt ChatGPT.” It’s about strategy — how AI changes the structure of organizations, how leaders need to think about deploying it, and what the risks look like at scale.

The next frontier I’m focused on is agentic AI — systems that don’t just answer questions but take actions, make decisions, and complete multi-step tasks on their own. That’s where this technology is heading fast, and it has enormous implications for media organizations. I’m already working to understand it.
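
To make “agentic” concrete, here is a toy sketch of the loop that separates an agent from a chatbot. This is my own illustration, not anything from the coursework, and every name in it is an invented placeholder. A chatbot answers once; an agent plans a step, acts, looks at the result, and repeats until the job is done or a safety limit stops it.

    # A toy agent loop in Python. Purely illustrative: the "planner" below is
    # a hard-coded stub standing in for a real model call.

    def plan(goal: str, observations: list[str]) -> str:
        """Pick the next action toward the goal (a real agent would ask a model)."""
        if not observations:
            return "search_archive"   # step 1: gather material
        if len(observations) < 2:
            return "draft_summary"    # step 2: produce a draft
        return "done"

    def act(action: str) -> str:
        """Stand-in for tool use: search, drafting, sending email, and so on."""
        tools = {
            "search_archive": "found 3 relevant clips",
            "draft_summary": "wrote a 200-word summary",
        }
        return tools[action]

    def run_agent(goal: str, max_steps: int = 5) -> list[str]:
        observations: list[str] = []
        for _ in range(max_steps):            # hard cap, because agents need limits
            action = plan(goal, observations)
            if action == "done":
                break
            observations.append(act(action))  # observe the result, then loop again
        return observations

    print(run_agent("brief me on our AI coverage"))

Even in a toy like this, the governance question jumps out: the loop keeps acting until it decides it is done, which is exactly why oversight and hard limits matter.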

Getting this certificate at this stage of my career wasn’t about proving something to anyone else. It was about staying useful — to my team, to my company, to myself. The executives and media leaders who will matter in the next five years aren’t the ones who handed AI questions off to someone else. They’re the ones who got in the room, got their hands dirty, and figured out what they were looking at.

I don’t have all the answers. But I know which questions to ask now. And I know where to go next.


Bob Monek is a veteran broadcast journalist and media executive who has worked in radio, television, and digital media. He completed the AI for Business Strategy certificate program at Johns Hopkins University in April 2026.

What ONA 2026 Taught Me About AI, Newsrooms, and the Leadership Gap

I went to my first Online News Association conference four years ago, and I came back energized. Conferences can make you feel this way — like the ideas alone are enough to change something.

Then life took over. Deadlines and the pace of a daily newsroom became overwhelming. ONA became something I kept meaning to get back to.

This year I finally did. What I discovered at ONA in Chicago, not surprisingly, is that the conversation has changed dramatically — from whether to use AI to something more difficult: how to use it without breaking the human systems that make journalism work.

That’s the right question. And it’s overdue.

Four years ago, AI in the newsroom was peripheral — something experimental, easy to ignore. This year, it was the center of gravity: every room, every hallway, every lunch. The “should we use AI” debate is over.

From Evangelism to Pragmatism

What stood out most wasn’t any single tool or framework, though there were plenty worth bringing home. It was the tone.

There’s less evangelism now and more pragmatism. People are talking about AI the way they talk about any other production tool — what does it actually do well, where does it fall down, and who’s responsible when it gets something wrong?

That shift matters. AI isn’t the hard part anymore. Alignment is.

The 80/20 Reality

The Associated Press put a number on something many newsrooms are starting to feel: 80% of a process can be automated, but at least 20% has to remain human.

Editing. Fact-checking. Judgment.

That ratio isn’t just technical — it’s a policy position. And having something that concrete to hand to a skeptical newsroom is more useful than any demo.

The Most Useful Work Isn’t Flashy

Some of the best ideas came from smaller organizations doing unglamorous work.

City Bureau is using generative AI to synthesize civic meeting notes. Sahan Journal has built custom GPTs to create personalized media kits for sales calls — not an editorial use case, a revenue one, and it works.

And then there was El Vocero de Puerto Rico’s cautionary story about an AI agent they’d named Victor. Over time, Victor started producing bad data and sending emails nobody had asked for.

Victor got fired.

The lesson wasn’t that AI fails. It’s that you have to keep managing it long after launch — which is not how most organizations treat a tool once it’s deployed.
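
What does “keep managing it” look like in practice? Here is one minimal illustration, mine rather than El Vocero’s: sample the agent’s output on a schedule, have a human flag errors, and pull the plug when the error rate crosses a threshold. The threshold and data below are invented for the example.

    # A post-launch tripwire sketch: sample outputs, track human review flags,
    # and disable the agent when quality drifts. All numbers here are made up.

    def error_rate(sample: list[dict]) -> float:
        """Share of sampled outputs that a human reviewer flagged as wrong."""
        return sum(1 for item in sample if item["flagged"]) / len(sample)

    def should_disable(sample: list[dict], threshold: float = 0.05) -> bool:
        """Fire the agent, Victor-style, once errors cross the threshold."""
        return error_rate(sample) > threshold

    todays_sample = [
        {"id": 1, "flagged": False},
        {"id": 2, "flagged": True},   # a reviewer caught bad data
        {"id": 3, "flagged": False},
    ]
    print(should_disable(todays_sample))  # True: one bad item in three

The code is trivial on purpose. The hard part is organizational: someone has to own that review sample every day, long after the launch excitement fades.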

Culture Is the Bottleneck

The sessions on culture and leadership hit closest to home.

CNN talked about putting leaders visibly at the front of AI adoption — not as evangelists, but as practitioners. The fastest way to reduce fear in a newsroom is to show that the people above you are figuring it out too, not just mandating it from a distance.

Reuters has reorganized around cross-functional squads: editorial, product, engineering, and data science working together on specific problems instead of handing work off in sequence.

That’s not a workflow tweak. It’s a structural change. And it requires a level of trust most organizations don’t build quickly.

The Gap That’s Already Here

The tools are here. They’re getting better, quickly.

What’s lagging is the culture and leadership needed to use them well — and that’s not a technology problem. It’s a people problem. Which means it’s slower, harder, and ultimately more important than any product release.

One number from a Thomson Reuters Foundation study has stayed with me: 81% of journalists in the Global South already use AI daily or weekly. Only 13% operate under any formal newsroom policy.

That’s not a regional anomaly. It’s a preview of what happens when technology outruns leadership.

And right now, it is.

AI Ethics in Journalism: Beyond Human Baseline

The “human baseline” approach posits that the ethical success of artificial intelligence is achieved when its decision-making mirrors or marginally improves upon that of a competent human. In the classic “trolley problem,” this implies that if an AI can consistently choose the “lesser of two evils” with more precision than a panicked human, it has cleared the ethical bar.

However, as the media and journalism industry increasingly integrates generative AI and automated editorial systems, it is becoming clear that a “slightly better than human” standard is insufficient. In the context of information dissemination, a human-level baseline for AI is not a gold standard; it is a liability.

While comparing AI to the human baseline in moral dilemmas reveals the machine’s capacity for consistency, it fails to account for the unique accountability required in journalism.

Because audiences in 2026 are caught in a “breaking verification” crisis where trust is the ultimate currency, an AI that is merely “slightly better” than a biased human is ethically insufficient. To be truly ethical, AI in media must move beyond mimicking human choice to provide a level of transparency and evidentiary rigor that transcends a journalist’s capability.

Our newsrooms are facing a speed-versus-verification dilemma. For a human journalist, the baseline is a constant trade-off between breaking the story first and getting it completely right. AI’s logic is fundamentally different: it shifts control from individual journalists to automated systems optimized for engagement and scalability. An AI that performs “slightly better” than a journalist at producing content quickly may therefore be ethically inferior if its underlying logic lacks the transparency and evidentiary rigor that define journalistic integrity.

With so much information published in so many forms across so many platforms, audiences have a difficult time distinguishing fact from fiction.

“‘Breaking verification’ will replace ‘breaking news’ in 2026, and trust will decide who survives,” according to Vinay Sarawagi, co-founder and CEO of The Media GCC.

Audiences need to see evidence and sources to back up what they encounter online, because seeing is no longer believing. If AI only does as well as humans at spotting fakes, it’s not enough. To solve the trust crisis, AI must be dramatically better at citing sources.

In 2005, Allen, Smit, and Wallach argued that the principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. They distinguish between operational morality, in which an AI simply follows pre-programmed human safety rules, and functional morality, in which a system can independently navigate moral dilemmas. In journalism, an AI that merely mirrors an editor’s baseline choices is operating within a limited framework. If the media is to serve the public’s best interests, a journalist AI must move toward a functional morality that transcends basic human instinct and provides the transparency and accountability the public expects.

From a strategic standpoint, “slightly better” is a recipe for disaster. If AI-generated content results in a libel suit or drives down a company’s stock price, the defense that the AI is slightly more accurate than an average human is a losing argument. As the media shifts into what is being termed the “Answer Economy,” the traditional value proposition of a newsroom is being disrupted. When AI models synthesize reports into a single summary, the value of a news organization is no longer just the “answer” or the scoop itself, but the auditable trail of evidence that allows that answer to be verified (Seo Ai Club, 2026). If an AI only meets the human baseline for producing a plausible-sounding summary, without providing rigorous, machine-readable proof of its sources, it fails to meet the ethical demands of a 2026 audience.
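
What might that machine-readable proof look like? One possible shape, sketched here purely as illustration rather than as any existing standard, is a provenance record attached to every published claim, so that a human or a machine can audit the evidence behind it. The structure, field names, and sample data are all hypothetical:

    # A provenance sketch: each claim carries its own auditable evidence trail.
    # The schema and every value below are invented, not an industry standard.
    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        source_url: str   # where the supporting material lives
        quote: str        # the exact passage relied on
        retrieved: str    # retrieval date, ISO 8601

    @dataclass
    class Claim:
        text: str
        evidence: list[Evidence] = field(default_factory=list)

        def is_verifiable(self) -> bool:
            """The bar argued for above: no evidence trail, no publication."""
            return len(self.evidence) > 0

    claim = Claim(
        text="Most newsrooms now use AI in at least one workflow.",
        evidence=[Evidence(
            source_url="https://example.org/ai-newsroom-survey",  # placeholder
            quote="a majority of responding newsrooms report AI use",
            retrieved="2026-04-01",
        )],
    )
    print(claim.is_verifiable())  # True: the trail exists and can be checked

A summary that cannot produce a record like this, whatever its fluency, would not clear the bar this essay proposes.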

Note: This is an essay originally written for a course on AI and business strategy at Johns Hopkins University.

References

Allen, Colin, Iva Smit, and Wendell Wallach. “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches.” Ethics and Information Technology 7, no. 3 (September 2005): 149–155. https://link.springer.com/article/10.1007/s10676-006-0004-4.

Li, Haoran, et al. “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis.” Journalism and Media 6, no. 3 (August 2025): 105. https://www.mdpi.com/2673-5172/6/3/105.

Mee, S., et al. “Moral judgments of human vs. AI agents in moral dilemmas.” Scientific Reports 13, no. 1 (February 2023). https://pmc.ncbi.nlm.nih.gov/articles/PMC9951994/.

Simon, Felix. “How AI reshapes editorial authority in journalism.” Digital Content Next (June 2025).

Reuters Institute. “How will AI reshape the news in 2026? Forecasts by 17 experts around the world.” Reuters Institute for the Study of Journalism (January 2025).

Seo Ai Club. “The Answer Economy: A Comprehensive Analysis of Answer Engine Optimization Tracking Software and Strategic Market Leadership.” Seo Ai Club (January 2026).