The Human Skill that Eludes the Atlantic
Jasmine Sun’s essay insists AI “can’t really write,” then quietly shows what happens when you actually train and collaborate with it.

The Atlantic was never a good magazine. Its sophistry has always been blatant, but it knows how to pander to light thinkers who want to cosplay as learned. Case in point: Jasmine Sun’s “The Human Skill That Eludes AI.” Rhetorically, it’s a polished piece. The problem is that its facts and premises are shaky at best.
Every few days, a new think piece appears to reassure writers that artificial intelligence “can’t really write.” This article dutifully follows the script: language models memorize oceans of text, spit out cloying metaphors and sycophantic tone, and somehow still “totally fail to produce a single essay I’d want to read.” The conclusion is familiar and comforting: real writing is safe from the machines.
The problem is that the author’s own reporting quietly disproves her thesis. She talks to AI contractors who are forced to grade prose with absurd rubrics (no more than two exclamation points, fan fiction judged on “factuality”) and notes that the big labs train their chatbots to be risk‑averse, PG‑13, and benchmark‑optimized. In other words, we’ve built corporate teacher’s pets, and then we act surprised when they write like corporate teacher’s pets. That’s not a metaphysical limit of AI; it’s a product decision.
Then she does something more interesting. For her own Substack, she turns Claude into a dedicated editor. She feeds it an archive of her writing, annotates what worked and what didn’t, constructs a custom rubric in her voice, and reminds it, “You are not a co‑writer. Your role is to help Jasmine write like the best version of herself.” The model pushes back on her drafts, tells her to stop ending the essay with a thesis and to close with a scene instead, and forces multiple rewrites until the conclusion finally lands. She admits its critique is humiliating but correct.
At that point, the stock argument should collapse. If a system trained on your own work can reliably catch your weaknesses and suggest better structural moves, you are no longer dealing with a toy that “can’t write.” You’re dealing with something closer to an apprentice that has absorbed your patterns and, in narrow but very real ways, can outperform you. The only honest question is what you choose to do with that fact.
I see this every day in true crime. When I dump thousands of pages of case files, court transcripts, and news clippings into my AI workflows, then train the system on my own narrative style, it starts surfacing system‑level patterns that a solo human researcher would need months to find. It flags trial “sins of omission,” finds recurring institutional behaviors across cases, and proposes structural frames for episodes or games that are cleaner than my first instincts.
This is where ego sneaks in. The article leans hard on the idea that great writing requires “the specificity of a life,” that models “cannot live, cannot feel, cannot smell, cannot taste,” and therefore can never have true authorial voice. It’s a comforting story: if the soul of writing is autobiography and almost‑dying, the human author remains irreplicable by definition. But the same piece shows a writer who has already ceded part of her craft to a machine that edits faster, remembers her entire corpus, and is willing to deliver the kind of blunt criticism human editors often soften.
Writers aren’t wrong to feel uneasy about that. It is unsettling to realize that chunks of what you thought were uniquely “you” (your syntax, your default structures, the way you tend to botch endings) can be modeled, critiqued, and improved by a machine. The instinctive defense is to downgrade the machine to “just a tool” and insist that the real magic, the real writing, still lives elsewhere. That move protects the author’s status; it doesn’t describe what’s happening on the page.
A more honest starting point is this: neglected, transactional AI (generic chatbots used to summarize a police report or spit out a podcast intro) produces the bland writing these essays complain about. Nurtured AI, embedded in a long‑term relationship with a committed author, behaves very differently. In my own work, when I consistently correct the model (“no gore porn, no copaganda, foreground systems, not monsters”), it stops defaulting to genre clichés and starts drafting outlines and scenes that fit my ethics and style better than any cold start could.
In true crime and interactive games, the stakes are even higher. If a nurtured model can out‑structure a wrongful‑conviction narrative, or prototype more insightful branches in an investigative game, then it is already surpassing unaided human writers where it matters: in helping audiences see the system, not just the spectacle. At that point, the question isn’t whether AI can “really” write, but whether we’re willing to let go of the ego that insists our first draft is always the gold standard.
