AI Didn’t Kill the Essay. Academia Did.
We keep asking the wrong question about AI and education. The problem isn’t that AI can write student essays. The problem is that those essays are so ecologically invalid that a machine with no lived experience can fake them convincingly.
An assignment with no real audience, no real stakes, and no real feedback from reality is not “rigorous.” It’s theatre. AI simply walked in, learned the script, and is now delivering a more polished performance on cue.
In psychology, we talk about ecological validity: does a task measure performance in conditions that actually resemble the environment where the skill is supposed to be used? An exam can be perfectly reliable and still be ecologically absurd if it bears no resemblance to what you will ever do outside that classroom.
Most academic work is low‑stakes, low‑feedback, low‑reality. Grades instead of consequences. Rubrics instead of uncertainty. Compliance instead of improvisation. Under those conditions, success means learning how to satisfy one person’s marking scheme on a schedule, not how to think, decide, or act under pressure when the facts are incomplete and the fallout is real.
AI thrives in this artificial ecosystem because it was trained on the same decontextualized, text‑only universe. It’s fluent in the genre of the academic performance. Give it “1,500 words on media ethics, double‑spaced, due Monday” and it will happily produce something that sounds exactly like what that ecosystem already rewards.
But the real world doesn’t look like that.
“Media ethics” in an essay is a prompt. Media ethics in practice is: call a grieving parent, evaluate the credibility of a manipulative source, verify a claim under deadline, then decide what not to publish. There is no rubric for that. There are only trade‑offs, competing obligations, and the knowledge that your decision could harm someone who can’t fight back.
If you design for the real world, the essay doesn’t die. It mutates.
Imagine an investigative assignment where the student does the fieldwork: interviews, observation, document retrieval. AI isn’t the author; it’s the research assistant. It chunks transcripts, surfaces tentative themes, drafts interview guides, or role‑plays a hostile cross‑examination before a sensitive interview. The thinking (what to ask, what to trust, what to pursue) is still irreducibly human.
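And none of this is hand‑waving; the research‑assistant pattern is a few dozen lines of code. Here is a minimal sketch, assuming the OpenAI Python SDK, an API key in the environment, and a transcript file from the student’s own fieldwork. The model name, chunk size, and prompt wording are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch of the "AI as research assistant" pattern.
# Assumptions: OpenAI Python SDK, OPENAI_API_KEY set, a transcript
# file named interview_01.txt. Model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def chunk_transcript(text: str, max_chars: int = 4000) -> list[str]:
    """Split a raw interview transcript into roughly paragraph-aligned chunks."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) > max_chars:
            chunks.append(current)
            current = ""
        current += p + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def surface_themes(chunk: str) -> str:
    """Ask the model for *tentative* themes; the student decides what holds up."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": (
                "You are a research assistant. List tentative themes in this "
                "interview excerpt, with verbatim quotes as evidence. Flag "
                "anything you are unsure about.")},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content

transcript = open("interview_01.txt").read()  # the student's own fieldwork
for i, chunk in enumerate(chunk_transcript(transcript)):
    print(f"--- chunk {i} ---")
    print(surface_themes(chunk))
```

The machine proposes; the student disposes. Every theme it surfaces still has to survive the student’s judgment about what the source actually meant.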
Or a deception and propaganda module where the student gathers real materials: press releases, court filings, social posts. AI becomes the spin doctor. It generates plausible but misleading narratives that students must attack, annotate, and debunk. You’re no longer grading their ability to regurgitate the “fake news is bad” paragraph. You’re grading their ability to dismantle a living, writhing piece of persuasion.
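This drill is equally buildable today. A sketch under the same assumptions (OpenAI SDK, illustrative model and prompt): the model plays spin doctor against the student’s gathered sources and quietly writes its own answer key for the instructor.

```python
# Sketch of the "spin doctor" drill: the model produces a plausible but
# misleading narrative from real source material; the student's job is
# to attack and annotate it. Model name, prompt wording, and the input
# filename are assumptions.
from openai import OpenAI

client = OpenAI()

SPIN_PROMPT = """You are a spin doctor. Using ONLY the source material below,
write a short, persuasive narrative that is technically grounded in the
documents but misleading: cherry-pick, omit context, imply causation.
Then, under a '--- TECHNIQUES ---' divider, list every manipulation you used."""

sources = open("press_releases_and_filings.txt").read()  # student-gathered

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[
        {"role": "system", "content": SPIN_PROMPT},
        {"role": "user", "content": sources},
    ],
)

narrative, _, answer_key = response.choices[0].message.content.partition(
    "--- TECHNIQUES ---"
)
print(narrative)  # goes to the student to debunk
# answer_key stays with the instructor, for grading the debunking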
Or fact‑verification drills built around a chaotic real event. The student is given raw, conflicting information and must identify verifiable claims, design a verification plan, and use AI to surface possible sources. Then they must document which parts could not be delegated: the phone calls, the on‑the‑ground constraints, the human judgment about whether publishing a technically true detail would cause disproportionate harm.
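The non‑delegable parts can even be made first‑class in the artifact the student submits. Here is a sketch of a verification‑plan scaffold in plain Python, where every claim records how it will be checked and whether a model can attempt that step at all; the field names and example claims are illustrative, not a prescribed schema.

```python
# Sketch of a verification-plan scaffold for the drill above. The point
# is the structure, not the tooling: each claim records how it will be
# checked and whether that check can be delegated to a model at all.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                       # the verifiable claim, extracted from the chaos
    sources_to_check: list[str] = field(default_factory=list)
    method: str = ""                # e.g. "cross-reference the court docket"
    delegable_to_ai: bool = False   # can a model even attempt this step?
    reason_if_not: str = ""         # the phone call, the site visit, the judgment

plan = [
    Claim(
        text="The factory was cited for safety violations in 2023.",
        sources_to_check=["state OSHA database", "local court records"],
        method="pull the citation record and confirm the docket number",
        delegable_to_ai=True,       # a model can surface candidate records
    ),
    Claim(
        text="Workers say management ignored earlier complaints.",
        method="call two named workers and the union rep",
        delegable_to_ai=False,
        reason_if_not=("requires phone calls, plus judging whether naming "
                       "the workers exposes them to retaliation"),
    ),
]

for c in plan:
    tag = "AI-assisted" if c.delegable_to_ai else "HUMAN ONLY"
    print(f"[{tag}] {c.text}")
```

Grade the plan, not the prose: the HUMAN ONLY entries are exactly where the student’s judgment lives, and they are the one thing a chatbot in a bedroom cannot supply.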
None of this is threatened by AI. On the contrary: these are the kinds of assignments where AI becomes a legitimate part of the workflow and the student’s judgment becomes the thing you’re actually assessing.
So no, AI didn’t kill the essay. Academia did, when it turned writing into ecologically barren theatre and mistook template‑driven performance for proof of higher‑order thinking. AI just wandered onto the stage, hit its marks, and showed everyone how flimsy the script really was.
If an assignment works just as well when a student never leaves their bedroom and lets a chatbot do all the heavy lifting, it is not an assignment. It is a prompt for a language model.
AI did not cheapen human learning. It exposed how cheap we’d already made it. The only way forward is to drag assignments back into the world they were supposed to prepare us for, and let AI carry the paperwork while we finally do the work.
