Manipulators Gonna Manipulate: The Scaffolding of Anti‑AI Rhetoric (Fear, Virtue Signalling, and Weaponized Competence)

I chose the headline precisely because it has been done to death. There have been countless permutations of it, yet no one who uses it ever seems to give it a second thought.

Anti‑AI propaganda on LinkedIn and X has mostly recycled the same arguments from the start: no attribution of where the original idea came from, no deviation in tactics, and no evolution in the script. That is a red flag that many of the people arguing against AI have no idea what it actually is. Arguments from 2022 are indistinguishable from the ones in 2026, while AI itself has progressed significantly in the meantime.

In fact, AI has moved from early generative image tools in 2022 to agentic, multimodal systems and edge models in 2026. So the arguments of 2022 have not caught up to the actual technology, but you would never know it listening to the anti‑AI brigade. They memorized someone else’s complaints, appropriated them, and have been stuck in limbo ever since.

When your arguments stay fixed while reality evolves and progresses, you become trapped in your own perceptions, which drift further and further out of alignment with that reality. The anti‑AI brigade seems blithely unaware of this. Their arguments persuade fewer and fewer people, but instead of updating their criticisms, they double down on them. We see them constantly hoping for a bursting AI “bubble,” treating any obstacle as a sign of its imminent death, and otherwise praying for its utter and complete destruction, as if their angry decrees on social media could alter the outcome.

Of course, that is not happening. They are still recycling “stochastic parrot” talking points and 2022 plagiarism panics while we are already working with tools that can orchestrate multi‑step workflows, interpret images and audio, and run locally on edge devices.

Try to keep up, please.

But they will not. They cannot. Why?

Because people who cannot adapt to change have no feel for fluidity. They memorize a fixed set of rules, then devise arguments to justify a rigged ecosystem where they are, or at least feel, superior to everyone else, and they refuse to let go of that version of reality.

Creative people can take a rock and do countless constructive things with it: they can be inspired to write a fable or a philosophy, turn it into a tool (from a hammer to sandpaper), turn it into art, build a home with it, use it as a clue to unlock mysteries of the environment and the past, solve a murder, and even make technology with it.

And it is “just a rock.”

People who are stuck at a fixed point in time cannot take a rock and construct anything with it. At most, they can throw it at other people, trying to destroy those who want to evolve, or at least try another way.

People who believe in progress are fundamentally different from people who want to stay stagnant and never change. The stagnant do not keep people in their lives who push them to expand, and they refuse to practice, fail, modify, and improve because they are stuck in rote routines. To them, mastery is climbing the same mountain repeatedly and then strutting around as if they have conquered the entire range.

They will not suddenly swim to the other shore. They will not dig a tunnel. They will not even build a new mountain. They take their single set of memorized actions and construct a narrative of their infinite, eternal superiority.

This stagnant thinking has been reinforced through social media, where narcissistic scaffoldings have been cemented for an entire generation.

However, trying to keep people stuck in a rut is not easy because reality has many paths, most heading away from both the past and the present.

So how do people who are incapable of learning new methods, yet want the entire planet frozen at a single point in time, try to rope in their pigeons and get them to agree that flying free is inferior to staying locked in a cage?

It takes manipulation, of course, and propaganda.

If you spend time reading social media posts, it becomes clear that these are not grassroots complaints. You start to see specific patterns, and those patterns are data points:

1. Pro‑AI posters are not the ones trying to force anyone into using AI. The loudest calls for “assimilation” actually come from the people who rail against AI, demanding that everyone conform to their rejection of the technology. They are not resisting assimilation; they are insisting that everyone else assimilate to their fear.

2. Pro‑AI users are marvelling at what they can do with it. They share experiments, openly note weaknesses, and jump on Discord servers to tell AI companies what broke under stress tests and what is on their wish lists. There is clear evidence of direct interaction with the technology, which means they can talk about AI intelligibly.

Those against AI are, by contrast, proud of their ignorance. There is no “I tried using this technology, and these are my results.” There is only, “I refuse to touch it, and that refusal is my credential.”

So those are tells, but they are not the only ones:

3. Anti‑AI non‑users do not rely on logical arguments; they lean almost entirely on moral arguments, but without actual morals. It is a form of shaming in which they cast themselves as the superior ones and anyone with a different take as immoral.

4. They construct an us‑versus‑them divide with no middle ground. They present themselves as heroes without proof and frame those who embrace AI as defective, inferior villains who lack talent, even though anti‑AI believers do not provide any evidence of their own capability or competence.

5. They use fear, the biggest tell of propaganda. The script is simple: everyone will lose their jobs, the end. There is no room for the possibility that new jobs will arise; only the claim that people will be replaced. The scaffolding echoes the same latent logic as genocidal rhetoric: some people are “obsolete” and must be cleared out.

6. There is no intelligent texture to their discussion of AI’s uses, the different types of AI, or even the history of AI. None. This void signals that this group has no real understanding of the technology. It is just folksy logic, meaning their comprehension of the topic is not based on data.

7. Because these individuals are reactionary and unable to adapt, they do not look good next to AI advocates who are actually accomplished. So what happens? They try to weaponize competence: accusing others of everything from arrogance to elitism, even though their own posture requires them to be both arrogant (trying to impose a future and a set of beliefs on others) and elitist (assuming they are superior enough to pass sweeping judgment on strangers without evidence or expertise).

8. Their arguments are not based in fact, but in logical fallacies: sink‑or‑swim framing, confirmation bias, and an appeal to authority in which they and their in‑group appoint themselves as the moral authority.

9. Finally, there is a sense of entitlement: these people argue that they have the absolute right to stop progress and to dictate what counts as art, writing, moral behaviour, and what is acceptable for society. When a data centre is cancelled, the anti‑AI crowd cheers as if they have “saved jobs,” even though what they have actually done is block thousands of construction and trades jobs and a stream of permanent roles: the very thing they claim AI is stealing. The jobs exist; what they object to is that the jobs are tied to a technology they refuse to understand.

More interesting is that people who were once effectively prohibited from starting their own companies now have AI democratizing entrepreneurship. The share of new U.S. startups founded by solo founders has climbed sharply (roughly from about one‑fifth to over one‑third in the last decade), with analyses explicitly tying that shift to AI’s ability to automate and accelerate workflows.

Surveys of “elite” freelancers and independent product workers find that 80–90% say generative AI has increased their productivity and earning potential, and a majority say it makes them more likely to go independent rather than stay in a traditional job. In other words, AI is not just something big firms use to cut payroll; it is also the infrastructure that lets individuals spin up micro‑businesses, one‑person agencies, and new creative practices that would have been structurally impossible or prohibitively expensive a few years ago.

What we are seeing is the rise of independent businesses: with large companies imposing return‑to‑office (RTO) mandates, many of their best employees now have a way to work remotely and build a viable living with AI instead of returning to an office they no longer need.

If we only listened to those opposed to AI, we would never realize what new and exciting possibilities await us if we break away from rote routines and habits anchored in yesterday’s technological limitations. That is the entire point of their rhetoric. They do not want to show you what creative or competent people can do with AI; they want to weaponize their own achievements so that no one else even tries, and so that the ones who do succeed are isolated and, if possible, cancelled.

That is not progress; it is emotional abuse and sabotage. And there is no excuse or justification for it.