When a label shows up before the evidence, the label is the tell.
I keep seeing the same move. Somebody posts a technical observation about working with AI. Not a manifesto. Not AGI fan fiction. Just a plain engineering report from the field. What worked. What broke. What still looks shaky. And the reply is two words.
AI psychosis.
No counterexample. No benchmark. No failure analysis. No “show me the eval.” Just a little diagnostic drive-by and on to the next quote tweet.
Here’s the thing. Psychosis is a real clinical word. It means something. It is not just a fancier way to say “this guy online annoys me.” The phrase already has enough baggage on its own. Using it as a casual insult for engineers who are actually touching the tools is sloppy before the conversation even starts.
And I think that sloppiness points at something real.
I don’t know if Relevance Anxiety is the perfect name for it. I might hate the phrase in six months. Fine. Name it something else. But I know the posture when I see it.
It is not skepticism.
It is a status reaction.
Relevance Anxiety is what happens when a person stops arguing with claims and starts pathologizing the people making them. The conversation never reaches the object. It never gets to the model, the failure mode, the benchmark, the workflow, the output, the code review, the security risk, the hallucination pattern, any of it. It stays pinned to the social surface. Who said it. What kind of person would say it. What it must mean about them that they are even engaging with the thing at all.
That is a very different activity from criticism.
Good skepticism engages specifics. Bad skepticism engages claimants.
Good skepticism says the model falls apart here, the eval is weak there, the security story is fake, the benchmark is contaminated, the demo doesn’t survive contact with production. Good skepticism is useful. We need it. Serious people are doing that work right now and thank God for them because a lot of this stuff is still messy as hell.
Relevance Anxiety is something else.
It is what you get when the performance of dismissal becomes more important than the substance of the argument. The point is not to find out what is true. The point is to make contact with the tool itself look discrediting. To turn engagement into evidence of contamination. To make the person who bothered to test the thing look unserious so you never have to meet the result on the merits.
That is why the label comes first.
It saves you from the harder, more boring, more adult move, which is to look at the claim and say, OK, what exactly happened here. What model. What task. What failed. What held up. What should I update, if anything.
I’m not saying every critic of AI is doing this. That would be dumb. A lot of criticism is correct. A lot of it is overdue. A lot of it is more rigorous than the booster nonsense it is pushing back on.
I’m saying there is a specific anti-AI posture that has the same fixation as the overclaimers, just pointed the other way.
Same fixation, opposite valence.
One side thinks the machine is magic. The other side acts like even touching it is a moral and intellectual stain. Both have organized too much of their identity around the technology. But only one of those groups gets to pose as the adult in the room by default, and that is where this gets interesting.
Because this era is not just a tool story. It is a prestige story.
AI is a prestige compressor.
Not because it makes expertise worthless. It doesn’t. If anything, real expertise matters more once the output volume goes vertical. Taste matters more. Judgment matters more. The ability to spot a subtle failure quickly matters more. Senior engineers are not being erased. They are being forced upward, away from credential theater and back toward the hard part, which is seeing clearly.
But the old exclusivity is getting hit. Hard.
Things that used to require a very specific badge now sometimes require a good model, a decent workflow, and a stubborn person with taste. That does not flatten the whole mountain. It does scramble the trail map. And a lot of people are white-knuckling the old ladder while the building is being remodeled around them.
That feeling is real. I don’t even think it is irrational.
If you built your professional identity around being one of the few people who could do a thing, and then a weird stochastic machine shows up and lets a lot more people get into the neighborhood, of course that is going to do something to your head. Of course it is going to make you defensive. Of course some people are going to protect standards. Good. They should.
But some people are not defending standards.
They are defending the social order that made them feel secure.
That is the part nobody wants to say out loud. What looks like principled criticism can turn into prestige protection very quickly. Not because the person is evil. Not because they are clinically broken. Just because human beings do this. We defend the map that gave us status long after the territory has changed.
Which is why the tell matters so much.
When the first move is a label instead of a question, pay attention. When the response to a concrete report from a builder is not a competing measurement but a little burst of social diagnosis, pay attention. When somebody seems much more interested in disqualifying the practitioner than examining the practice, pay attention.
The label is doing work.
It is buying distance.
And distance feels good when the object itself is threatening. AI does not care about your tenure. It does not care how hard it was to earn your authority. It does not care what the hierarchy used to look like. It just sits there being useful in some places, useless in others, dangerous in still others, and it forces the humiliating adult question back onto the table.
What is this thing actually good for.
You do not get to answer that question by sneering at the people who touched it.
I could be wrong about the name. I might be wrong about some of the psychology too. Human motives are messy. Mine included. But I am not wrong that there is a discourse pattern here, and I am definitely not wrong that it makes everyone dumber.
The healthier response is boring. Test the thing. Name the failure mode. Keep your standards. Update your view. If the tool is bad, say where it breaks. If it is useful, say where it helps. If it threatens your sense of who gets to matter now, at least have the decency to admit that part to yourself before you start calling other people insane.
Go build something.
The anxiety usually gets quieter once the work starts.