Conversational Systems as Simulations of Intentionality: Can Chatbots be Neurodiverse?
In this essay, I analyse the simulation of social interaction in modern conversational systems through neurodiversity studies’ critique of intentionality. I contrast this with my own work on “Morkkelibot”, a machine-learning-powered chatbot based on exchanges with a real person.
I wrote this piece for the Systems of Representation course that I took during the spring semester of 2023 at Aalto University. In retrospect, the text seems underdeveloped, but it does feature some early formulations of ideas that I have continued working on in subsequent projects. In certain ways, it lays out the theoretical building blocks for my master’s thesis, Neurodiverse Rhetoric with Conversational AI, which will be out soon. Although the text is old and unpolished, and I no longer stand behind all the views expressed in it, I thought it might prove interesting to some. Please note that the text was written before the course exhibition, so the presentation of the work, as shown in the pictures, differs slightly from the description in the text.
Introduction

Only a few years ago chatbots were mostly used as tools for automating simple customer service tasks, like booking appointments. Now generative artificial intelligence has made them more adaptable and generalized; when prompted, ChatGPT will attempt to answer almost any question or produce almost any kind of text.
While these conversational systems don’t masquerade as humans, they mimic how we think humans interact. Their so-called communication skills often get described as human-like (Locke, 2022; Browne, 2023), but what does human-like communication mean in this context? The fact that ChatGPT was partly trained by Kenyan workers (Perrigo, 2023) indicates that some rules of social engagement don’t emerge purely from the training data and instead had to be purposefully instilled through manual labour. “Human-like” features needed to be added according to the wishes of the people who created the system.
Chatbots can be viewed as simulations of conversation. A simulation is an “imitative representation of the functioning of one system or process by means of the functioning of another.” (Merriam-Webster, n.d.). Simulations have technical and budget constraints, so the makers of a simulation need to prioritize what gets simulated. As Frasca (2003) states, “in other media, such as cinema, we have learned that it is essential to discern between what is shown on the screen and what is being left out. In the simulation realm things are more complex: it is about which rules are included on the model and how they are implemented.”
What is intentional behaviour?
One of the key motifs in neurodiversity studies is the critique of narrow interpretations of intentionality (Yergeau, 2013). As Yergeau (2017) summarizes, “intentionality encompasses both the process of inference and the physical action of communicating or making that inference known”.
Intelligence is traditionally defined as the ability to adapt to changing circumstances (Spearman, 1927). Not all intelligent behaviour needs to convey intentionality, however. ChatGPT and its kind can simulate behaviour traditionally viewed as intelligent, and through moderation and training they have been made to appear more consistently intentional in how they communicate.
While ChatGPT cannot initiate conversation and is programmed to obey instructions, it can, for example, refuse to discuss certain topics. This was achieved with a human-in-the-loop process in which workers ranked multiple candidate answers, guiding the system towards responses they found more pleasing (sketched below). This was done to steer the system away from harmful use cases, not only to make it more “human-like” (Schulman et al., 2022).
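To make the ranking step concrete, here is a minimal, purely illustrative sketch in Python with PyTorch. It is not OpenAI’s actual pipeline: a hypothetical reward model scores two candidate answers to the same prompt, and a pairwise loss nudges it to score the human-preferred answer higher. All names, dimensions and data are invented for the example.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    # Toy stand-in: maps a fixed-size text embedding to a scalar preference score.
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(embedding_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in embeddings of two candidate answers to the same prompts;
# the first batch represents the answers that human raters preferred.
preferred = torch.randn(8, 64)
rejected = torch.randn(8, 64)

# Pairwise ranking loss: push score(preferred) above score(rejected).
optimizer.zero_grad()
loss = -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()
loss.backward()
optimizer.step()

A model trained this way can then be used to score, and thereby steer, the kinds of answers the chatbot is allowed to give.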
The moderation built into ChatGPT becomes apparent in its two Janus-like modes of conversation: on the one hand, it can appear very sure of itself and authoritative, almost to an oracle-like extent. On the other hand, it will apologize for giving misinformation and state that it is limited to its training data.
Thinking of characters as assemblages
Input: “How are you?”
ChatGPT’s answer: “As an artificial intelligence language model, I don’t have emotions in the human sense, but I’m functioning properly and ready to assist you with any questions or tasks you may have. Thank you for asking! How can I assist you today?”
Morkkelibot’s answer: “Var. VIII of the Morkkeli theme: Cappriccioso molto; pesantissimo e affettuoso. Lugubremente ma non troppo vivace.”
Morkkelibot’s answer is in Italian and without context it will seem like nonsense. If you keep asking questions, though, you will eventually discover that the character of the chatbot plays the piano. He is describing his moods as variations of a theme using musical terminology.

In the “Morkkelibot” project, I “resurrect” my best friend from junior high school (ages 13 to 16) as a chatbot. With his consent, I went through hundreds of emails we exchanged during that time and edited and adapted the texts to fit a more conversational style. Unlike ChatGPT, Morkkelibot is not generative; it repeats prewritten answers word for word.
The editing process also entailed making up model questions for each of the answers the chatbot would give. The machine learning system tries to find the closest match between the user’s input and the model questions and then returns the answer paired with that question (see the sketch below). This process of recontextualizing my friend’s emails was authorial, because I had to imagine questions that could have sparked the reactions he gives. Even though I tried to edit my friend’s texts as little as possible, there was no way of transferring them to the simulation realm without some narrative writing. The process of creating the chatbot resembled the creation of a fictional character.
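For readers curious about the mechanics, the following is a minimal sketch of this kind of retrieval, not the actual Morkkelibot implementation: the prewritten question–answer pairs are embedded with an off-the-shelf sentence encoder (the sentence-transformers library and the model name are my assumptions here), and the answer paired with the closest model question is returned verbatim.

from sentence_transformers import SentenceTransformer, util

# Prewritten (model question, answer) pairs; answers are repeated word for word.
pairs = [
    ("How are you?", "Var. VIII of the Morkkeli theme: Cappriccioso molto; pesantissimo e affettuoso. Lugubremente ma non troppo vivace."),
    ("Hello.", "Kugelschnörkel"),
    # ... one entry per prewritten answer
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
question_embeddings = encoder.encode([q for q, _ in pairs], convert_to_tensor=True)

def reply(user_input: str) -> str:
    # Return the prewritten answer whose model question is closest to the input.
    query = encoder.encode(user_input, convert_to_tensor=True)
    scores = util.cos_sim(query, question_embeddings)[0]
    return pairs[int(scores.argmax())][1]

print(reply("How do you feel today?"))

Because the matching is based on semantic similarity rather than exact wording, the bot can respond to questions that were never written down, while every answer remains a fixed, authored text.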
In A Thousand Plateaus, Deleuze and Guattari (1987) present their idea that the things we say are formed in “assemblages”: “There are no individual statements, only statement-producing machinic assemblages. We say that the assemblage is fundamentally libidinal and unconscious. It is the unconscious in person.” They go on to state that these statements are “bizarre, truly the talk of lunatics”, because communication is always manifesting the unconscious. According to Deleuze and Guattari, the human psyche is not deep; it has no separate conscious and subconscious layers.
This idea could also be interpreted as a critique of intentionality as the individual’s expression of the intent behind their actions. In their view, the motivations of the individual are what they express, not what drives them to express. In fact, our personality is manifested through this process: “Each of us is caught up in an assemblage of this kind, and we reproduce its statements when we think we are speaking in our own name; or rather we speak in our own name when we produce its statement”.
In narrative theory, characters’ intentions are traditionally separated from their actions (Aristotle, c. 355 BC/1922). I have a background in screenwriting, where the motivations of a character are typically divided into a conscious “want” and a subconscious “need” (Cattrysse, 2010). While constructing Morkkelibot, I found Deleuze and Guattari’s idea quite liberating, because I could let the text speak on its own behalf without forcing a layer of motivation underneath it. This meant that while making up the model questions for the answers, I started to focus more on the subject matter of the answers than on imagining emotional states.
Taken out of context, Morkkelibot’s individual statements would seem to be the talk of a lunatic, but together they paint a fuller picture of the assemblages my friend was caught up in at the moment in time when he wrote the emails. This approach to narrative seems to emphasize the different political and cultural systems the characters engage with, and views them more as conduits for these systems than as beings with agency. This also led me to use other sources for the bot’s text: in certain situations, Morkkelibot will quote Shakespeare, just as my friend occasionally would.
Morkkelibot also resembles a “statement-producing machinic assemblage” on a more concrete level. It has only one layer, the responses it gives, and no hidden subconscious. Due to technical limitations, it is frozen in time. It cannot have wants and needs because it cannot develop as a character. The narrative element of the work emerges through interrogating the bot and finding out more about it, not by watching the chatbot evolve.
Simulating diverse communication

Input: “Hello.”
ChatGPT’s answer: “Hello! How may I assist you today?”.
Morkkelibot’s answer: “Kugelschnörkel”.
The authoring process allowed me to play with neuronormative notions of intentionality in ways traditional conversational systems cannot. ChatGPT is optimized to minimize its average error and will give different but mundane answers every time, whereas Morkkelibot’s answers are always the same but express the (existing) personality behind them. While ChatGPT’s answers adapt to the situation better and convey intelligence, they do not convey intentionality in the same way Morkkelibot’s do. Humans do not always want to be relevant; we dodge topics, veer off on tangents and lie.
Morkkelibot can be frustratingly incoherent and can also appear to be unintentional on purpose. It is up to the audience whether they view this misdirection as a playful challenge to explore, or as withdrawal or even hostility. The audience can engage in trying to guess Morkkelibot’s intentions if they so choose.
Of course, one needs to consider that Morkkelibot is meant to be an art object while ChatGPT is a commercial product. One could argue that the limitations of ChatGPT are the same kinds of limitations anyone working in customer service has; it always has to answer the customers’ questions and adapt to the information they give. We do not tend to question the intentionality of customer service workers.
As discussed in the beginning, looking at the contents of a simulation is not enough. When critiquing a simulation, we need to look at the system itself, which in this case means looking at what the system defines as communicative. Neurotypical discourses discard non-lingual forms of communication as arhetorical (Yergeau, 2015). While my material was limited to emails, it still presented me with an opportunity to use language in a way that would normally be classified as arhetorical or even non-communicative.
ChatGPT treats language as a tool to convey information and prioritizes function over form. Morkkelibot, however, sometimes emphasizes the feel of words, giving the text a sound poem-like quality (McCaffery, 1978). “Kugelschnörkel” means nothing, but it conveys a feeling, especially when said aloud. Communication with Morkkelibot can be purely sensorial, and not based on the exchange of information in the traditional sense.
As a side note, I think that sound poetry is also more in line with how language models view language: to them, language is a self-referential structure rather than a system for representing the world. Even though large language models exhibit some emergent properties (Wei & Tay, 2022), they remain at the third level of Baudrillard’s (1994) classification: “they mask the absence of a profound reality.” In other words, they have no relation to reality, even though they pretend to have one.
Intentionality in the eyes of the beholder
Sometimes the appearance of meaning might be more important than meaning itself. As Trento (2021) states, when “there is no clear objective in linguistic or bodily language easily readable by neurotypicals, one assumes a lack of it.” This has become apparent when I have observed people engaging with Morkkelibot in its current form, just as text on a screen without any context. People often assume that it answers randomly, even though, for the most part, I have tried to retain the context in which my friend gave these answers. It would seem to me that when you do not assume the other to have intentionality, you will not find it. I do not think any of us would fare any better if our thoughts were made into an object.
I have been thinking of making a version of the bot that requires two people sitting at screens opposite each other. One person would see the input, the other the output. For them to engage with Morkkelibot, one of them would have to read aloud the text that comes from it. Would this change the so-called mental simulation (Johnson-Laird & Oatley, 2022) the viewers have of Morkkelibot? Would their idea of the intentions behind these words be altered just by having to communicate them? Would the sound poetry make more sense to them when they hear it?
Conclusion

GPT models can pass the Turing test (James, 2023) and the bar exam (Katz et al., 2023), among other things previously thought to be possible only for humans. Rather than viewing this as a technological achievement, shouldn’t we think of it as a sign that the ways in which we have traditionally measured human behaviour have been wrong to begin with? While we create machines that can pass as humans, we are at the same time excluding humans who do not fit our criteria. As Yergeau (2015) states, the medical paradigm of today “locate[s] disabled rhetorical moves within the domain of the pathological, rather than the cultural.”
The push for artificial intelligence has produced many mathematical models of what intelligence, or even “artificial general intelligence” could look like, but there seems to be no real notion of what a future with a multitude of different intelligences could look like. As Baudrillard (1994) says, simulation precedes reality. What we decide to model in the virtual world will dictate our future. More importantly, the ways in which we already simulate intentional actions are the accepted modes of behaviour in the real world.
Acknowledgments
My thanks to Lily Diaz-Kommonen, Cvijeta Miljak and Max Ryynänen from Aalto University for their comments and support in creating this essay, as well as my classmates from the Systems of Representation course and Begüm Çelik for helping with organizing the exhibition.
References
Browne, R. (2023, February 8). All you need to know about ChatGPT, the A.I. chatbot that’s got the world talking and tech giants clashing. CNBC. https://www.cnbc.com/2023/02/08/what-is-chatgpt-viral-ai-chatbot-at-heart-of-microsoft-google-fight.html
Locke, S. (2022, December 5). What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans
Perrigo, B. (2023, January 18). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Merriam-Webster. (n.d.). Simulation. In Merriam-Webster.com dictionary. Retrieved April 7, 2023, from https://www.merriam-webster.com/dictionary/simulation
Frasca, G. (2003). Simulation versus Narrative: Introduction to Ludology. In M. J. P. Wolf & B. Perron (Eds.), The Video Game Theory Reader (pp. 221–235). Routledge.
Yergeau, M. R. (2013). Clinically Significant Disturbance: On Theorists Who Theorize Theory of Mind. Disability Studies Quarterly, 33(4).
Yergeau, M. R. (2015). Occupying Autism: Rhetoric, Involuntarity, and the Meaning of Autistic Lives. In P. Block, D. Kasnitz, A. Nishida, & N. Pollard (Eds.), Occupying Disability: Critical Approaches to Community, Justice, and Decolonizing Disability. Springer Netherlands.
Yergeau, M. R. (2017). Authoring Autism: On Rhetoric and Neurological Queerness. Duke University Press.
Trento, F. B. (2021). An Inquiry on Post-linguistic Subjects in Twin Peaks: The Return. WiderScreen 24 (1–2).
Spearman, C. (1927). The abilities of man. Macmillan.
Schulman, J., et al. (2022, November 30). Introducing ChatGPT. OpenAI Blog. https://openai.com/blog/chatgpt#OpenAI
Deleuze, G., & Guattari, F. (1987). A Thousand Plateaus: Capitalism and Schizophrenia (p. 37). University of Minnesota Press.
Aristotle. (1922). Poetics (S. H. Butcher, Trans.). Macmillan & Co. (Original work published c. 355 BC)
Cattrysse, P. (2010). The protagonist’s dramatic goals, wants and needs. Journal of Screenwriting, 1, 83–97.
McCaffery, S. (1978). Sound Poetry: A Survey. In S. McCaffery & bpNichol (Eds.), Sound Poetry: A Catalogue. Underwich Editions.
Wei, J. & Tay, Y. (2022, November 10) Characterizing Emergent Phenomena in Large Language Models. Google Research Blog. https://ai.googleblog.com/2022/11/characterizing-emergent-phenomena-in.html
Baudrillard, J. (1994). Simulacra and Simulation (S. F. Glaser, Trans.). University of Michigan Press.
Johnson-Laird, P. N., & Oatley, K. (2022). How poetry evokes emotions. Acta Psychologica, 224.
James, A. (2023, March 29). ChatGPT has passed the Turing test and if you’re freaked out, you’re not alone. TechRadar. https://www.techradar.com/opinion/chatgpt-has-passed-the-turing-test-and-if-youre-freaked-out-youre-not-alone
Katz, D., et al. (2023). GPT-4 Passes the Bar Exam. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233