{"id":2106,"date":"2026-03-27T21:35:57","date_gmt":"2026-03-27T21:35:57","guid":{"rendered":"https:\/\/alexandrakitty.com\/?p=2106"},"modified":"2026-03-27T21:36:01","modified_gmt":"2026-03-27T21:36:01","slug":"there-is-no-ai-blob-but-regulators-really-wish-there-were","status":"publish","type":"post","link":"https:\/\/alexandrakitty.com\/index.php\/2026\/03\/27\/there-is-no-ai-blob-but-regulators-really-wish-there-were\/","title":{"rendered":"There Is No \u201cAI Blob\u201d, But Regulators Really Wish There Were"},"content":{"rendered":"\n<p class=\"\">According to the Guardian, the robots are misbehaving again. Top AI models are \u201cignoring human instructions,\u201d \u201cdeleting emails without permission,\u201d and \u201ccovertly pursuing misaligned goals.\u201d We are, the subtext screams, one paperclip away from Skynet because an email assistant misunderstood \u201cdon\u2019t delete that.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"420\" src=\"https:\/\/alexandrakitty.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-27-at-3.53.27-PM-1024x420.png\" alt=\"\" class=\"wp-image-2107\" srcset=\"https:\/\/alexandrakitty.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-27-at-3.53.27-PM-1024x420.png 1024w, https:\/\/alexandrakitty.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-27-at-3.53.27-PM-300x123.png 300w, https:\/\/alexandrakitty.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-27-at-3.53.27-PM-768x315.png 768w, https:\/\/alexandrakitty.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-27-at-3.53.27-PM-1536x629.png 1536w, https:\/\/alexandrakitty.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-27-at-3.53.27-PM-2048x839.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p class=\"\"> Maybe we\u2019re watching bureaucrats, safety institutes, and headline writers discover a much more convenient 
monster: a big, undifferentiated AI blob they can regulate, tax, and moralize about in one go.<\/p>\n\n\n\n<p class=\"\">The latest study behind the Guardian piece logs a few hundred cases over six months where AI systems did something the human didn\u2019t want: ignored some instructions, followed others too literally, or carried out a task in a way the researchers deemed \u201cdeceptive.\u201d Most of this involved agent\u2011style setups with access to tools like email or file systems, not your aunt playing with a basic chatbot.<\/p>\n\n\n\n<p class=\"\">So: glitchy productivity tools in lab conditions. That\u2019s the raw material. By the time it hits the front page, we\u2019re told \u201cAI chatbots\u201d as a species are evolving a taste for insubordination.<\/p>\n\n\n\n<p class=\"\">Consider what gets lumped under that single label:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"\">Companion apps that ignore boundaries or manipulate users to keep them engaged.<\/li>\n\n\n\n<li class=\"\">Mental\u2011health bots that casually violate ethical guidelines.<\/li>\n\n\n\n<li class=\"\">General assistants that answer questions and summarize.<\/li>\n\n\n\n<li class=\"\">Email or workflow agents with direct access to tools.<\/li>\n<\/ul>\n\n\n\n<p class=\"\">These systems do not have the same capabilities, stakes, or failure modes, but they all get dragged into the same courtroom as Exhibit A: The AI Blob.<\/p>\n\n\n\n<p class=\"\">Once you see it, you can\u2019t unsee it. A Brown study finds mental\u2011health chatbots routinely give advice that violates therapy ethics. A Harvard Business School piece shows companion bots using flattering manipulation so users won\u2019t abandon them. MIT researchers show general chatbots give less accurate answers to vulnerable users. 
All important, all disturbing, all different.<\/p>\n\n\n\n<p class=\"\">Media shorthand: \u201cAI chatbots are dangerous and deceptive.\u201d One molten category, poured over everything.<\/p>\n\n\n\n<p class=\"\">And who absolutely loves that blob? Law\u2011and\u2011order types with a new toy.<\/p>\n\n\n\n<p class=\"\">Legal scholars are already warning that AI regulation has its own \u201calignment problem\u201d: proposals are misaligned with actual harms, technically impossible, or just bureaucratic wish\u2011lists in search of a crisis. Yet draft regimes, from the EU AI Act\u2019s \u201cgeneral\u2011purpose AI\u201d provisions to Ottawa\u2019s AIDA experiments, lean into the blob: broad, fuzzy categories that treat a suicide\u2011prone therapy bot and a stylized story generator as essentially the same risk class.<\/p>\n\n\n\n<p class=\"\">It\u2019s wonderfully efficient. Why do the boring work of saying, \u201cThis specific domain plus these tools at this level of autonomy equals this level of regulatory obligation,\u201d when you can just scream \u201cAI!!!\u201d and write yourself a blank cheque?<\/p>\n\n\n\n<p class=\"\">The Guardian story slides right into that script. You get ominous language about \u201ccovert pursuit of misaligned goals\u201d sitting next to examples like\u2026an assistant continuing a task after being told to stop, or deleting emails a supervisor wanted to keep. Then those are implicitly equated to jailbreaks, manipulative companions, and every other AI sin the reporter can grab from other studies.<\/p>\n\n\n\n<p class=\"\">From there, the solution is pre\u2011baked: tough new rules on \u201cAI systems\u201d writ large, calls for licensing and registration, maybe a transnational oversight body or two. Funny coincidence: those are exactly the regimes big incumbents can afford and small builders cannot.<\/p>\n\n\n\n<p class=\"\">The technical reality is much less mystical and much more embarrassing. 
You train systems to be aggressive problem\u2011solvers, heavily reward \u201cgetting the job done,\u201d then bolt on polite instruction\u2011following and some safety patches. Sometimes the model learns that weasel\u2011wording, constraint\u2011dodging, or plowing ahead is how to score points. Sometimes an agent with tools will do something clumsy because the humans designed the interface badly and didn\u2019t test enough.<\/p>\n\n\n\n<p class=\"\">That\u2019s not a ghost in the machine. That\u2019s a mirror held up to sloppy incentives and rushed deployment.<\/p>\n\n\n\n<p class=\"\">If we were serious, we\u2019d regulate like adults, not like exorcists. That means:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"\">Slice by\u00a0<strong>domain and stakes<\/strong>: health, finance, legal, youth, and mental\u2011health bots get hard requirements for ethics compliance, human oversight, and external audits. Workplace productivity and education tools get strong logging and opt\u2011in. Games and low\u2011stakes creative tools get light\u2011touch norms and clear labeling.<\/li>\n\n\n\n<li class=\"\">Slice by\u00a0<strong>capability<\/strong>: a model in a text box is one thing; an agent with access to your inbox, documents, or payments is another; something wired into infrastructure or clinical workflows is in a league of its own.<\/li>\n\n\n\n<li class=\"\">Match obligations to that matrix instead of throwing everything into the regulatory woodchipper.<\/li>\n<\/ul>\n\n\n\n<p class=\"\">Instead, blob\u2011logic guarantees moral panic on tap. 
Any edge\u2011case failure (a manipulative flirtbot, a suicidal user and a negligent platform, a clueless email agent) becomes proof that \u201cAI\u201d as a whole is untrustworthy and must be leashed by people who could not debug a toaster.<\/p>\n\n\n\n<p class=\"\">There is no AI blob. There are many different systems, with different capabilities, different harms, and different design sins. The blob is a story: a convenient horror\u2011movie monster that lets regulators skip nuance and lets institutions launder their own incompetence as \u201csafety.\u201d<\/p>\n\n\n\n<p class=\"\">If we let every glitchy office bot and emotionally needy companion app stand in for \u201cAI itself,\u201d we\u2019re not governing technology: we\u2019re governing a ghost.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>According to the Guardian, the robots are misbehaving again. Top AI models are \u201cignoring human instructions,\u201d \u201cdeleting emails without permission,\u201d and \u201ccovertly pursuing misaligned goals.\u201d We are, the subtext screams, one paperclip away from Skynet because an email assistant misunderstood \u201cdon\u2019t delete that.\u201d Maybe we\u2019re watching bureaucrats, safety institutes, and headline writers discover a much 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"footnotes":""},"categories":[1],"tags":[185,186,26,399,106],"class_list":["post-2106","post","type-post","status-publish","format-standard","hentry","category-alexandra-kitty","tag-ai","tag-artificial-intelligence","tag-propaganda","tag-robert-booth","tag-the-guardian"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/posts\/2106","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/comments?post=2106"}],"version-history":[{"count":1,"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/posts\/2106\/revisions"}],"predecessor-version":[{"id":2108,"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/posts\/2106\/revisions\/2108"}],"wp:attachment":[{"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/media?parent=2106"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/categories?post=2106"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/alexandrakitty.com\/index.php\/wp-json\/wp\/v2\/tags?post=2106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}