AI Fangirl with Footnotes: A Manifesto for Loving the Tools Without Joining the Cult
I like AI.
I like it the way I like a good research library: because it gives me access to more than I could reach alone, faster than I could reach it, and because when it fails me I can usually figure out why if I look carefully enough.
I am not afraid of it. I am not in awe of it. I am not performing neutrality about it to seem serious. I have been working with AI since 1994, when I was an undergraduate psychology student. My roots run deep.
I am, genuinely, a fangirl.
And because I am, I refuse to pretend it is magic.
What fangirl actually means
Fangirl gets used as a dismissal. It conjures someone who screams at a concert and doesn’t think critically about the music.
That’s not what it means here.
Being a fangirl means you pay attention. You notice things. You track the details. You care enough to know the difference between what the thing actually does and what the mythology says it does.
The most meticulous critics of any artist, athlete, or institution are usually the fans. They can’t be fooled by surface‑level hype because they know the real thing too well.
That’s the stance worth having on AI: close enough to know it well, rigorous enough not to lie about what you see.
What the cult looks like from outside it
There is a Church of AI, and it has two denominations that hate each other and are secretly the same.
The Believers say the tools will solve everything, reshape everything, disrupt everything. They speak in revelations. They get very angry when you point out failure modes.
The Heretics say the tools are hollow, dangerous, extractive, catastrophic. They speak in warnings. They get very angry when you point out genuine capability.
Both are running on the same emotional fuel: the need to feel certain in a fast‑moving environment where certainty isn’t actually available.
The Believer needs AI to be infinite so their early adoption feels like genius. The Heretic needs AI to be fraud so their resistance feels like integrity. Neither is actually looking at the tools. They are looking at themselves, reflected in the tools.
Footnotes as method
The “with footnotes” part is not decorative.
It means when I say something about AI, I have a reason for it that I can trace. Not a vibe. Not a talking point I absorbed from a podcast. A reason, with a chain of evidence behind it.
Where did this model come from? Who trained it, on what data, under what commercial pressures?
What does it actually do well, versus what does it merely feel like it does well? The two are not always the same.
Where does the narrative around it diverge from the mechanics underneath it?
Those questions are not obstacles to enthusiasm. They are what makes enthusiasm earned rather than borrowed.
When I enjoy using a tool, I enjoy it more for understanding roughly how it works, not less. When it fails me, I learn something rather than just feeling betrayed. When someone is bluffing about it, I can tell, because I have the footnotes and they don’t.
The history matters
AI did not arrive from nowhere.
It arrived from a long, contested history of decisions about what to optimize for, whose data to use, which problems counted as worth solving, and who got to define what “intelligence” even means in this context.
Understanding that history doesn’t make you cynical. It makes you accurate.
The people who think AI is brand new are easy to dazzle and easy to disappoint. They get the apocalyptic headlines about job replacement and the breathless headlines about curing cancer, and they lurch between panic and rapture depending on the news cycle.
If you know the actual history, the funding cycles, the winters, the corporate pivots, the research debates, the labour politics of annotation and training, you are harder to manipulate. You already know the story has never been simple.
On not performing neutrality
I want to be clear about one thing: I am not neutral about AI and I am not pretending to be.
I think it is genuinely interesting. I think it produces real value when used carefully and honestly. I think it is going to change the conditions of creative and intellectual work in ways most people are not yet taking seriously enough.
I also think the culture around it is saturated with bad faith: investors who hype it to inflate valuations, critics who denounce it to build personal brand, journalists who have found a reliable way to generate outrage traffic, executives who use it as a fog machine for cost cuts.
Performing neutrality about all of that would not be balance. It would be cowardice dressed as objectivity.
The byline, as we’ve now established, exists so someone can be held responsible for their claims. This is mine: I think the tools are worth taking seriously, I think the cult is worth resisting, and I think the most useful position is neither cheerleader nor doomsayer but rigorous enthusiast.
What rigorous enthusiasm actually looks like
It looks like:
- Using the tools constantly, in real workflows, for real problems, and paying attention to where they help and where they fail.
- Changing your mind when you encounter new evidence, publicly, without treating it as a humiliation.
- Asking “how does this work?” before “what does this mean for society?” because the second question is unanswerable without the first.
- Calling out hype when you see it, including from people you otherwise agree with.
- Admitting what you don’t understand yet, because you know the list of what you don’t know keeps growing as you learn more, not shrinking.
That last one is the real tell. The Dunning–Kruger peak is where you feel most certain. The slope of actual competence is where you realize how much you still can't see.
Every time the tools surprise me, and they still do, regularly, I count it as information, not a betrayal and not a miracle. Just information.
The thing about loving something you don’t romanticize
People sometimes find this position confusing. “If you see all these problems clearly, how can you still be enthusiastic?”
Because clarity and enthusiasm are not opposites. Only romantics think love requires blindness.
You can love a city and still be clear‑eyed about its traffic and its politics. You can love a writer and still notice when they are coasting. You can love a set of tools and still insist on knowing what they’re actually doing.
In fact, that’s the only kind of love that lasts: the kind that doesn’t need the object of affection to be perfect in order to remain engaged with it.
The people who need AI to be magic will be devastated when it isn’t. The people who need it to be fraud will be wrong more and more often as the systems improve. The people in the middle, curious, rigorous, genuinely fond of the tools but not enchanted, will still be here, still working, still learning, long after both camps have moved on to the next moral panic.
That’s where I live.
AI fangirl. With footnotes.
