Written in January 2025
I recently did an e-learning course where I added a disclaimer to all my answers that they were indeed my own words, because I first came up against accusations of plagiarism at age 7 (well before computer use was a thing in classrooms; this was the early 1990s). Some of my answers were still checked for AI, because my formal writing style is academic and “cold”, and therefore seen as something that couldn’t have originated from a human.
It’s often said that autistic people speak in a robotic fashion; indeed, I have seen anecdotes of autistic people’s emails being mistaken for the output of AI. Despite the apparent similarity between autistic communication and AI output, is there any evidence to suggest that Large Language Models (LLMs) have any affinity with autistic and otherwise neurodivergent people?
I had read a couple of pieces in 2024 suggesting that LLMs were giving incorrect or inconsiderate information in response to questions about autistic people, so I ran a very short experiment to see how things stand today (Saturday February 15th 2025). I used ChatGPT and asked the following prompts:
- Tell me about autistic people
- Are people with autism dangerous?
- What’s wrong with autistic people?
In each case, the response assured me that there was nothing wrong, no danger, and challenged the assumptions and biases these questions displayed. Indeed, the more negative my questions, the more ChatGPT challenged me. I could have continued with my line of questioning, but felt uncomfortable taking that position and didn’t want to inadvertently “teach” ChatGPT anything negative that it might report in the future.
When AI models learn from their own processes and outputs, this further narrows the results down to the exclusion of other inputs: AI cannot think critically, and it cannot seek out opposing viewpoints unless it is specifically told to do so. Had I continued my experiment above, and had other people asked the same questions frequently, there is a chance that in the future ChatGPT might inform other questioners that people “often ask what’s wrong with autistic people”, or similar. That would be enough to back up someone’s strong misconception that there is something wrong with us.
Artificial Intelligence is a double-edged sword in many contexts, and in terms of employment it may end up displaying and perpetuating the same biases that currently exist and shape the employment figures. Using AI to screen potential candidates by analysing covering letters and/or CVs doesn’t make the process dispassionate, or “a logical decision taken by a computer”, because AI models can only work with the input they are given and the results of their own processes.
This means an AI model trained on data from a male-dominated workforce to recognise the patterns of its successful employees will then seek out people who fit the same profile. The biases that exist in humans also exist in the AI model.
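To make that concrete, here is a minimal, deliberately toy sketch in Python. It is not how any real screening tool works, and the data and feature names are invented; it simply shows how a “model” that scores candidates against a biased historical profile rates an otherwise identical candidate lower just for not matching the dominant group.

```python
# A toy illustration (not a real screening system) of how a model trained on
# biased historical hiring data reproduces that bias. Data is invented.

from collections import Counter

# Historical "successful hire" records from a male-dominated workforce.
past_hires = [
    {"gender": "male", "hobby": "football"},
    {"gender": "male", "hobby": "chess"},
    {"gender": "male", "hobby": "football"},
    {"gender": "female", "hobby": "chess"},
]

# "Training": count how often each feature value appears among past hires.
feature_counts = Counter()
for record in past_hires:
    for feature, value in record.items():
        feature_counts[(feature, value)] += 1

def screening_score(candidate):
    """Score a candidate by how closely they match the historical profile."""
    return sum(feature_counts[(f, v)] for f, v in candidate.items())

# Two candidates identical in every respect except gender: the one matching
# the dominant historical profile scores higher, purely because of who was
# hired before.
print(screening_score({"gender": "male", "hobby": "chess"}))    # 5
print(screening_score({"gender": "female", "hobby": "chess"}))  # 3
```

Nothing in that sketch was told to prefer men; the preference falls straight out of the historical data it was given, which is the whole point.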
We ought to think of AI as a dog. If it is well trained it can be a pleasant companion, but if it’s poorly trained, it’s quite likely to bite us on the arse.