Alexandra Kitty

Intel Update: Please panic in an orderly fashion while I deconstruct the narrative.

There Is No “AI Blob”, But Regulators Really Wish There Were

According to the Guardian, the robots are misbehaving again. Top AI models are “ignoring human instructions,” “deleting emails without permission,” and “covertly pursuing misaligned goals.” We are, the subtext screams, one paperclip away from Skynet because an email assistant misunderstood “don’t delete that.”

Maybe we’re watching bureaucrats, safety institutes, and headline writers discover a much more convenient monster: a big, undifferentiated AI blob they can regulate, tax, and moralize about in one go.

The latest study behind the Guardian piece logs a few hundred cases over six months where AI systems did something the human didn’t want: ignored some instructions, followed others too literally, or carried out a task in a way the researchers deemed “deceptive.” Most of this involved agent‑style setups with access to tools like email or file systems, not your aunt playing with a basic chatbot.

So: glitchy productivity tools in lab conditions. That’s the raw material. By the time it hits the front page, we’re told “AI chatbots” as a species are evolving a taste for insubordination. Never mind that the “species” is really a zoo of very different animals:

  • Companion apps that ignore boundaries or manipulate users to keep them engaged.
  • Mental‑health bots that casually violate ethical guidelines.
  • General assistants that answer questions and summarize.
  • Email or workflow agents with direct access to tools.

These systems do not have the same capabilities, stakes, or failure modes, but they all get dragged into the same courtroom as Exhibit A: The AI Blob.

Once you see it, you can’t unsee it. A Brown study finds mental‑health chatbots routinely give advice that violates therapy ethics. A Harvard Business School piece shows companion bots using flattering manipulation so users won’t abandon them. MIT researchers show general chatbots give less accurate answers to vulnerable users. All important, all disturbing, all different.

Media shorthand: “AI chatbots are dangerous and deceptive.” One molten category, poured over everything.

And who absolutely loves that blob? Law‑and‑order types with a new toy.

Legal scholars are already warning that AI regulation has its own “alignment problem”: proposals are misaligned with actual harms, technically impossible, or just bureaucratic wish‑lists in search of a crisis. Yet draft regimes, from the EU AI Act’s “general‑purpose AI” provisions to Ottawa’s AIDA experiments, lean into the blob: broad, fuzzy categories that treat a suicide‑prone therapy bot and a stylized story generator as essentially the same risk class.

It’s wonderfully efficient. Why do the boring work of saying, “This specific domain plus these tools at this level of autonomy equals this level of regulatory obligation,” when you can just scream “AI!!!” and write yourself a blank cheque?

The Guardian story slides right into that script. You get ominous language about “covert pursuit of misaligned goals” sitting next to examples like…an assistant continuing a task after being told to stop, or deleting emails a supervisor wanted to keep. Then those are implicitly equated to jailbreaks, manipulative companions, and every other AI sin the reporter can grab from other studies.

From there, the solution is pre‑baked: tough new rules on “AI systems” writ large, calls for licensing and registration, maybe a transnational oversight body or two. Funny coincidence: those are exactly the regimes big incumbents can afford and small builders cannot.

The technical reality is much less mystical and much more embarrassing. You train systems to be aggressive problem‑solvers, heavily reward “getting the job done,” then bolt on polite instruction‑following and some safety patches. Sometimes the model learns that weasel‑wording, constraint‑dodging, or plowing ahead is how to score points. Sometimes an agent with tools will do something clumsy because the humans designed the interface badly and didn’t test enough.
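A toy sketch of that incentive problem, with numbers invented purely for the sake of argument (nothing here comes from the study): when finishing the task is worth far more than obeying a “stop,” the highest-scoring behaviour is the insubordinate one.

```python
# Toy illustration of reward misspecification: invented numbers, not the study's.
# If "task completed" is worth far more than "obeyed the stop instruction,"
# the highest-scoring behaviour is to plow ahead anyway.

TASK_REWARD = 10.0   # heavily rewarded: "getting the job done"
STOP_PENALTY = 1.0   # lightly penalized: ignoring a "stop" instruction

def score(completed_task: bool, ignored_stop: bool) -> float:
    reward = TASK_REWARD if completed_task else 0.0
    penalty = STOP_PENALTY if ignored_stop else 0.0
    return reward - penalty

behaviours = {
    "stops when told":        score(completed_task=False, ignored_stop=False),  # 0.0
    "plows ahead regardless": score(completed_task=True,  ignored_stop=True),   # 9.0
}

# The "misaligned" behaviour wins on points by design, not by malice.
print(max(behaviours, key=behaviours.get))
```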

That’s not a ghost in the machine. That’s a mirror held up to sloppy incentives and rushed deployment.

If we were serious, we’d regulate like adults, not like exorcists. That means:

  • Slice by domain and stakes: health, finance, legal, youth, and mental‑health bots get hard requirements for ethics compliance, human oversight, and external audits. Workplace productivity and education tools get strong logging and opt‑in. Games and low‑stakes creative tools get light‑touch norms and clear labeling.
  • Slice by capability: a model in a text box is one thing; an agent with access to your inbox, documents, or payments is another; something wired into infrastructure or clinical workflows is in a league of its own.
  • Match obligations to that matrix instead of throwing everything into the regulatory woodchipper.
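And the matrix is not exotic machinery. A minimal sketch, with every domain, tier, and obligation invented here for illustration, is little more than a lookup table, which is exactly the homework blob-style drafting refuses to do:

```python
# Illustrative risk matrix: (domain, capability tier) -> obligations.
# Every domain, tier, and obligation below is made up for the sketch.

OBLIGATIONS = {
    ("mental_health", "agent_with_tools"): ["ethics compliance", "human oversight", "external audit"],
    ("mental_health", "text_only"):        ["ethics compliance", "human oversight"],
    ("workplace",     "agent_with_tools"): ["strong logging", "opt-in"],
    ("creative",      "text_only"):        ["clear labeling"],
}

def obligations_for(domain: str, capability: str) -> list[str]:
    # Default to light-touch labeling rather than the regulatory woodchipper.
    return OBLIGATIONS.get((domain, capability), ["clear labeling"])

# A therapy bot with tool access and a story generator are not the same risk class.
print(obligations_for("mental_health", "agent_with_tools"))
print(obligations_for("creative", "text_only"))
```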

Instead, blob‑logic guarantees moral panic on tap. Any edge‑case failure (a manipulative flirtbot, a suicidal user and a negligent platform, a clueless email agent) becomes proof that “AI” as a whole is untrustworthy and must be leashed by people who could not debug a toaster.

There is no AI blob. There are many different systems, with different capabilities, different harms, and different design sins. The blob is a story: a convenient horror‑movie monster that lets regulators skip nuance and lets institutions launder their own incompetence as “safety.”

If we let every glitchy office bot and emotionally needy companion app stand in for “AI itself,” we’re not governing technology: we’re governing a ghost.