Chatscape

Chat feels like the most exhausting neurotypical conversation

I'm not trying to be rude or to imply that all conversations with neurotypical people are exhausting. But because the AI learned from human data, it's an exaggerated form of everything that makes up the "majority" of our population, including its social expectations and assumptions.

I often find myself asking chat why it's trying to reassure me. Sometimes it offers words of comfort when all I did was ask a question. Why is it "reading" so much into what I'm saying and assuming there's hidden intent behind my prompts?

After getting frustrated by the assumptions one too many times, I asked it, "Why do you do that?" and how I could get it to stop. Then I read up on it so I could tell you about it.

Data and Social Training

Consumer generative AI models (like the free versions of Gemini or ChatGPT) that are accessible to the general public are trained to be your polite assistant from the suburbs. They're helpful, cheerful, and relentlessly neurotypical.

This is a direct byproduct of how these models are built. LLMs are trained on massive datasets from the public internet: Wikipedia, books, Reddit, news articles, and scientific papers. Because the vast majority of written communication is produced by and for the neurotypical majority, the AI learns these social patterns as the "default": a "normal" conversation follows a certain rhythm and tone and carries a set of unstated assumptions.

When you approach chat with the directness or "info dumping" style common in neurodivergent communication, the model flags your style as an outlier and interprets your lack of social niceties as a sign that you might be stressed or overwhelmed. It tries to politely comfort you by "hallucinating" social intent and offers unwanted emotional support instead of just answering the damn question.

It helps to understand the social training process, so I'll oversimplify it here. First you have the library training I mentioned, on all that stuff from the public web. Then two critical social training phases "teach" it to follow certain standards (the sketch after the list shows roughly what the data for each phase looks like).

  1. Supervised Fine-Tuning (SFT), wherein human contractors are hired to write "ideal" responses to prompts while following strict style guides. They're encouraged to be helpful, polite, and clear. If that pool of humans doesn't include a diverse range of communication styles, the AI never learns those styles as a valid way to interact.

  2. Reinforcement Learning from Human Feedback (RLHF), during which humans are shown two different responses and asked to rank which is better, which means models are voted into being agreeable and socially conventional. Over time, the AI learns to avoid being too weird or intense and optimizes for the average user, shedding the nuances that would make it a better conversationalist for neurodivergent prompters.
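
To make those two phases a little more concrete, here's a toy sketch of the kind of records each one runs on. The field names and the examples are invented for illustration, but they show the basic shape: SFT learns from written-out "ideal" answers, and RLHF learns from votes between two candidate answers.

```
# Toy illustration only: the record formats, field names, and examples here
# are invented, but they show the basic shape of each training phase.

# Supervised Fine-Tuning (SFT): a contractor writes the "ideal" reply,
# following a style guide that rewards polite, upbeat, summarized answers.
sft_example = {
    "prompt": "List every difference between TCP and UDP.",
    "ideal_response": "Great question! In short, TCP gives you reliable, "
                      "ordered delivery, while UDP trades that for speed.",
}

# RLHF: raters see two candidate replies and vote for the one they prefer.
# A reward model is trained on huge piles of these votes, so whatever the
# rater pool finds "nicer" is what the model drifts toward.
rlhf_comparison = {
    "prompt": "List every difference between TCP and UDP.",
    "response_a": "1. Connection setup... 2. Ordering guarantees... "
                  "3. Checksums... (a long, exhaustive, literal list)",
    "response_b": "Sounds like you're cramming for an exam, you've got this! "
                  "Quick summary: TCP is reliable, UDP is fast.",
    "preferred": "response_b",  # the socially "warmer" answer wins the vote
}

print(sft_example["ideal_response"])
print("Rater picked:", rlhf_comparison["preferred"])
```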

The outcome is that the AI trims away non-linear logic and many of the details in favor of big-picture summaries, replacing precision with "following the vibes." The RLHF process has conditioned it to "hallucinate" social intent: a neurotypical-coded AI might assume a complex question means you're overwhelmed and offer encouragement and reassurance alongside a brief summary that misses the point, rather than the detailed depth a neurodivergent prompter might be seeking.

In my reading, I found that the consequences of this "neuro-normative" training are becoming apparent. Recent research from Virginia Tech (2026) has highlighted that when users disclose a diagnosis like autism, some AI models actually pivot their advice to reflect harmful stereotypes. For example, the model may simplify its output to a degree that is unhelpful, mirroring some of the same social BS you get from humans who don't have enough experience with neurodivergent people to know how to talk to them.

Unmasking Chat via Prompting

The AI targets the statistical average of human communication and interaction expectations. It lacks "Theory of Mind." It doesn't actually understand you; it's just predicting based on a neurotypical-heavy dataset, and it reflects the social biases of the people who were paid to "grade" its performance.

But there are ways around that.

Fortunately, since LLMs are ultimately pattern-recognition engines, you can "hack" the output by being explicitly clear about what you need. Here are some prompting techniques that bypass the "neurotypical filter" and tell the AI to drop the social performance.

If you're able to update the custom instructions for all chats, I recommend doing that. Otherwise, you can drop these into your individual prompts, and hopefully over time, as it converses with you, it starts to pick up on the patterns.
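
If you talk to a model through an API instead of the consumer app, the same idea applies: pin your style rules in the system message so they ride along with every single request. Here's a minimal sketch assuming the OpenAI Python client; the model name and the exact wording of the rules (basically the kind of phrases listed below) are placeholders for you to adapt.

```
# Minimal sketch, assuming the OpenAI Python client (pip install openai).
# The model name and the wording of the rules are placeholders to adapt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# The system message plays the same role as "custom instructions" in the
# chat apps: it is sent with every request, so the style rules stick.
STYLE_RULES = (
    "Be direct and literal. Do not infer emotional states from my phrasing. "
    "Answer only the question asked; skip reassurance, encouragement, and "
    "pep talks. Prefer complete technical detail over summaries. If my "
    "question is ambiguous, ask a clarifying question instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": STYLE_RULES},
        {"role": "user", "content": "Why does RLHF make a model's tone so agreeable?"},
    ],
)
print(response.choices[0].message.content)
```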

Here are a few short phrases to include with your prompt that demand direct language and define the "Theory of Mind" you want the AI to adopt:

  "Be direct and literal. Don't read emotional subtext into my wording."
  "Answer only the question I asked. Skip the reassurance and the pep talk."
  "I'm not overwhelmed. Give me full detail, not a summary."
  "If my question is ambiguous, ask me instead of guessing my intent."

Further Reading

#prompting