Alexandra Kitty

Intel Update: Please panic in an orderly fashion while I deconstruct the narrative.

The Damage Report


Where reputations, lies, and PR campaigns get slabbed. Autopsies on media, crime, and power, no anesthetic.

Confirmation Bias and Dunning–Kruger: How to think like an AI Doomer.

AI is not the alien invader in the classroom; it is the mirror, and the doomers hate what it shows them. They jab a finger at the machine, insisting they can “just tell” when a student used it, without ever noticing the three fingers quietly pointing back at their own system.

A table of self‑assured academics declaring a passage “obviously human” while a woman nearby, who knows it was AI‑generated, silently clocks their confirmation bias and Dunning–Kruger.

For more than a century, we have trained students to write to a template. You are handed a rubric, a five‑paragraph structure, a list of “transition words,” a checklist of “scholarly tone,” and a few ideal sample essays. You are graded not on how you think, but on how closely you hit the pattern. When large language models appeared, they did not invent those patterns; they simply ingested the industrial output of a culture that had already decided writing was about compliance. The machine’s “voice” is nothing more than the echo of what millions of people were forced to sound like.

That is why AI feels so familiar and, paradoxically, so “suspicious.” Of course everyone is using it right now; it is new, it is everywhere, and it has the same cultural gravity as a new Taylor Swift drop. It would be shocking if people weren’t poking it, prodding it, and pushing every assignment through it just to see what happens. In that environment, guessing “AI did it” is not forensic insight; it is base‑rate thinking dressed up as genius. If half the class might be using the tool in some way, you can loudly claim to spot AI and be “right” often enough to feed your ego, even when your reasons are completely wrong.
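You can check the arithmetic of that con yourself. Below is a toy simulation with invented numbers: it assumes, purely for illustration, that half the class used AI in some form, and it gives the accuser no detection skill whatsoever, just a habit of pointing at whoever feels “off.” The vibes‑only professor still comes out “right” about half the time, which is all an ego needs.

```python
import random

random.seed(42)

STUDENTS = 200        # hypothetical class size
BASE_RATE = 0.5       # assumed share of students who used AI at all
ACCUSE_RATE = 0.6     # how often the professor "just knows"

hits = misses = false_accusations = 0
for _ in range(STUDENTS):
    used_ai = random.random() < BASE_RATE    # ground truth
    accused = random.random() < ACCUSE_RATE  # pure vibes, zero skill
    if accused and used_ai:
        hits += 1                 # celebrated loudly
    elif accused:
        false_accusations += 1    # quietly erased
    elif used_ai:
        misses += 1               # never noticed at all

accusations = hits + false_accusations
print(f"accusations: {accusations}, 'correct': {hits} "
      f"({hits / accusations:.0%} hit rate with no skill at all)")
print(f"innocent students accused: {false_accusations}")
```

Raise the assumed base rate and the hit rate climbs with it; the accuser’s skill never enters the equation.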

What they are really practicing is a two‑step of confirmation bias and Dunning–Kruger. They start from the belief that AI is everywhere and students are lazy, so any slightly stiff or generic paper “must” be machine‑written and every hunch that lands becomes proof they were right all along. Their hits are loudly celebrated, their misses are quietly erased, and the growing pile of misfires never dents their certainty. The less they understand about how large language models or detectors actually work, the more convinced they are that their unsupported vibes are superior to both.

Watch the mechanism at work. The doomers read a paper that feels generic, stiff, or over‑polished; it matches their expectation, so they declare victory. They never ask whether the “tell” they are seeing is a machine artifact or the inevitable result of a student trying to survive a rubric. The prose is cautious, full of hedging, weighed down with “In conclusion” and “This essay will argue,” so it must be a bot. That would be remarkable, except those are precisely the moves generations of students have been drilled to perform.

Then come the tools that claim to make this intuition “scientific.” Turnitin bolts on an AI detector; universities rush to enable it, and students are suddenly being accused based on a percentage score. Within months, institutions begin retreating. Vanderbilt, for instance, disables Turnitin’s AI detection feature, explicitly warning that its accuracy and false positives make it unsuitable for high‑stakes decisions. Other analyses point out that these detectors are more likely to mislabel work from non‑native speakers and students whose writing does not conform to the dominant pattern, turning “academic integrity” into another channel of structural bias. The magic detector turns out to be another pattern matcher guessing in the dark.
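The arithmetic behind that retreat deserves a minute. Here is a back‑of‑the‑envelope Bayes calculation; the specific rates are assumptions chosen for illustration, not Turnitin’s published figures, but the shape of the problem holds for any detector with a nonzero false‑positive rate and a minority of actual AI papers.

```python
# How often is a flagged paper actually AI-written?
# All rates below are illustrative assumptions, not vendor figures.

base_rate = 0.10       # assumed share of submissions that are AI-written
sensitivity = 0.90     # detector catches 90% of actual AI papers
false_positive = 0.05  # detector also flags 5% of human papers

flagged_ai = base_rate * sensitivity              # true positives
flagged_human = (1 - base_rate) * false_positive  # innocent students flagged

# Bayes: P(paper is AI | detector flagged it)
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Of flagged papers, only {precision:.0%} are actually AI.")
# -> roughly 67%: one accusation in three lands on an innocent student
```

And because the mislabeling is not spread evenly, that innocent third is drawn disproportionately from the non‑native speakers and unconventional writers the detectors already misread.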

Yet the human doomers persist in claiming that their gut can do what the tools cannot. They talk about “tells” as if they have discovered some secret watermark on AI prose: too smooth, too formal, too organized, too generic. They ignore the obvious: those are precisely the qualities school has demanded from students all along. The training data for AI did not fall from the sky. It came from decades of textbooks, essays, journal articles, and online posts, all optimized for the same narrow notion of “proper” writing that now sits in their marking rubrics. When they say, “I can tell this is AI,” what they often mean is, “This looks like the kind of writing I forced you to imitate.”

I once heard a professor say he disliked Turnitin because he suspected it was using student papers to train its AI model. The implication was that something pure and original was being harvested. The reality is more banal and more damning. Students themselves were trained on templates, exemplars, and lecture slides; their writing is already the derivative output of a constrained system. They are not artisanal producers whose unique voices are being strip‑mined; they are factory workers on an assembly line they did not build. AI is not corrupting a pristine well; it is drinking from a reservoir that was chlorinated long ago.

This is where the three fingers point back.

First finger: you trained students to write like a dataset. You rewarded the ones who most perfectly mimicked the model answer and punished the ones who sounded too much like themselves. You streamlined writing into a set of rules so clear that a machine could learn them in bulk.

Second finger: you outsourced judgment to tools. You normalized plagiarism detectors and similarity scores, and now AI detectors, turning trust into a dashboard and pedagogy into policing. You built a regime where students are presumed guilty and must prove their innocence to an algorithm and a professor’s hunch.

Third finger: you elevated your intuition into dogma. When the detectors falter, you do not question the premise; you insist your instincts are infallible. You scan a paragraph, feel a vague sense of “too clean,” and decide the machine wrote it. When you are challenged, you point to the culture’s panic as proof: “Everyone is using AI, so I’m probably right.” That is not expertise. That is superstition calibrated to the news cycle.

The irony is that AI is the most honest actor in this drama. It does not pretend to be original; it predicts what word is likely to come next, based on the patterns it has seen. It is a statistical mirror held up to our collective writing habits. When educators recoil from what they see in that mirror and scream “fraud,” they are not uncovering a deception; they are catching a rare, unfiltered glimpse of the system they built.
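If “statistical mirror” sounds abstract, a toy version fits in a dozen lines. The sketch below is a crude bigram counter, nothing like a real large language model in scale or sophistication, and its mini‑corpus is invented here, but the principle is the same: count what followed what, then hand back the most common continuation.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny,
# invented corpus of rubric-approved student prose.
corpus = (
    "in conclusion this essay will argue that the evidence clearly shows "
    "that the evidence supports the thesis in conclusion the thesis is supported"
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

# Generate by always echoing the most common continuation: a pure mirror.
word, output = "in", ["in"]
for _ in range(8):
    word = follows[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))  # -> "in conclusion this essay will argue that the evidence"
```

Feed it a century of rubric‑compliant essays instead of two invented sentences and you get the very “voice” the doomers claim they can spot.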

The real scandal is not that machines can now write like students. It is that students have long been forced to write like machines.