J-Notes #1: How I Used AI to Audit CBS News’s Record
When Bari Weiss tells a room full of CBS News journalists that their work isn’t “producing a product enough people want,” she’s not just critiquing strategy; she’s implicitly trashing a decade of actual reporting. If you’re going to make that kind of claim about a legacy newsroom, the least a grown-up reporter can do is check the receipts.
So I did. With AI. And I labeled it.
Step 1: Ask a concrete question
The prompt wasn’t mystical. It was blunt:
- If CBS News is supposedly broken, what does their track record actually look like over the last ten years?
- Where are the Emmys, Peabodys, Murrows, duPonts, Scripps, and other serious journalism awards, and who at CBS earned them?
This is the kind of question AI is useful for: not opinion, but finding and organizing sprawling factual trails.
Step 2: Use AI for the grunt work
I tasked an AI research assistant with a specific, bounded job: compile a list of CBS News awards from roughly 2016–2026, grouped by year and program (60 Minutes, CBS Evening News, Sunday Morning, Face the Nation, 48 Hours, etc.). It pulled from award bodies' own listings, press coverage, and network press releases: Emmys, Peabodys, Edward R. Murrow Awards, duPont-Columbia Awards, Scripps Howard, National Press Foundation, and more.
That’s what AI is good at: chewing through volume. The judgment comes later.
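For what it's worth, a bounded job like that stays honest only if you force the output into a structure you can audit. Here's a minimal Python sketch of how I think about the compiled data; the schema, field names, and helper function are hypothetical illustrations, not the actual format of the PDF:

```python
from dataclasses import dataclass, field

@dataclass
class AwardEntry:
    """One award claim from the AI compilation. Schema is hypothetical."""
    year: int                  # year the award was given
    program: str               # e.g. "60 Minutes", "CBS Evening News"
    body: str                  # e.g. "Peabody", "duPont-Columbia", "Murrow"
    category: str              # award category or citation
    source_urls: list[str] = field(default_factory=list)  # where the AI found it
    verified: bool = False     # flipped only after a human check (Step 3)

def group_by_year_and_program(entries: list[AwardEntry]) -> dict:
    """Group entries by year, then by program, the way the PDF report does."""
    grouped: dict[int, dict[str, list[AwardEntry]]] = {}
    for e in entries:
        grouped.setdefault(e.year, {}).setdefault(e.program, []).append(e)
    return grouped
```

The `verified` flag is the whole game: nothing the machine surfaces counts until a human flips it.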
Step 3: Verify like an old-school reporter
Then I did the part AI can’t do for me:
- Spot-checked big-ticket claims (for example, 60 Minutes’ Peabody institutional award and CBS leading the 2025 News & Documentary Emmys) against Peabody listings, the Emmys’ own PDFs, and third-party write-ups.
- Confirmed duPont-Columbia counts and specific CBS investigations (such as Norah O’Donnell’s reporting on sexual assault in the U.S. military) via Columbia and the National Press Foundation.
- Cross-referenced Murrow wins for overall excellence, continuing coverage, and writing with RTDNA releases and CBS’s own announcements.
AI did the legwork; I did the skepticism, claim by claim (roughly as sketched below).
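To make that skepticism auditable rather than just asserted, it helps to log each check. Another hedged sketch, continuing the hypothetical structure above; the example strings mirror sources I actually consulted, but the code itself is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Check:
    """One human verification pass over one AI-surfaced claim. Hypothetical schema."""
    claim: str            # e.g. "CBS led the 2025 News & Documentary Emmys"
    primary_source: str   # e.g. "Emmys' own PDFs", "RTDNA release", "Peabody listings"
    result: str           # "confirmed", "corrected", or "dropped"

def surviving_claims(checks: list[Check]) -> list[str]:
    # Only claims confirmed against a primary source make it into the PDF.
    return [c.claim for c in checks if c.result == "confirmed"]
```

Three outcomes, no middle ground: a claim is confirmed, corrected, or it doesn't ship.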
Step 4: Separate evidence from commentary
The result is two different artifacts:
- The blog post: my analysis of the Bari Weiss “all hands” performance—tone, power plays, the absurdity of lecturing a newsroom with that record via PowerPoint.
- The PDF report: a dry, sourced list of CBS News awards over the past decade, compiled with AI assistance and clearly labeled as such.
That separation matters. One is argument; the other is an evidence pack anyone can audit.
Step 5: Disclose AI use on purpose, not as a confession
The PDF explicitly notes that the award compilation was generated with Perplexity. I’m not hiding the tool; I’m making it part of the method.
As I said in the previous post: because this is 2026 and not 1996, I did what any competent reporter should do. I used an AI research assistant to compile a decade of CBS News awards, and I'm labeling that clearly so you can check the receipts yourself.
Ethics bodies and AI-in-journalism groups keep telling newsrooms to do exactly this: be upfront about what AI did, why it was used, and how humans verified the output. This is that, in practice.
Step 6: Turn transparency into a counter-argument
Weiss’s pitch to staff leans heavily on the idea that AI has turned “basic information into a commodity,” and that only “revelatory journalism” can save them. My workflow takes that premise and flips it:
- Yes, AI has commodified basic lookup, but that’s exactly why a serious journalist uses it aggressively for the boring, exhaustive parts.
- The value is in what you do with the information: how you contextualize it, how you challenge management narratives, how you show your work.
So when she arrives with a PowerPoint to tell Emmy-, Peabody-, Murrow-, and duPont-decorated journalists they’re doing it wrong, I respond with something else: a transparent, AI-assisted audit of the record they actually built.
That’s the point of this little experiment. It’s not just about CBS; it’s about what future-forward reporting actually looks like.
You don’t just “have takes.” You run the numbers, you document the method, you use the machines, and you sign your name to both the argument and the process.
What I’ll do differently next time
- I’ll keep using AI this way: as a fast, obsessive researcher, especially for timelines, award histories, and court or regulatory records, while making the verification layer more explicit for readers each time.
- Future J-Notes will likely include screenshots or linked snippets from primary sources (where appropriate) so people can see the paper trail behind the AI-assisted compilation.
- I will also standardize a brief AI-use disclosure line inside major posts themselves, so readers start to recognize a consistent pattern: what the machine did, what I did, and how they can audit both. A rough template is sketched below.
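To make that disclosure line concrete, here is one hypothetical way to standardize it so the same three facts always appear in the same order. The wording and function are a sketch, not a settled house style:

```python
def disclosure_line(tool: str, machine_did: str, human_did: str, audit_link: str) -> str:
    """Build a one-line AI-use disclosure. All wording is hypothetical."""
    return (
        f"AI disclosure: {machine_did} was compiled with {tool}; "
        f"{human_did}. Audit it yourself: {audit_link}"
    )

print(disclosure_line(
    tool="Perplexity",
    machine_did="the decade of CBS News awards",
    human_did="I spot-checked every major claim against primary listings",
    audit_link="[link to the PDF report]",
))
```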
