when our minds are in the machine,
is there room for lived experience?

arbitrary units


The hottest new field in psychiatry is probably one you’ve never heard of: computational psychiatry. Computational psychiatry (CP), like critical psychiatry, began as a response to the failures of psychiatry thus far – but the similarities stop there. According to CP, the reason for these failures is a lack of ‘objective’ measures. Therefore, CP argues that psychiatry needs more objective measures in the form of machine learning and computational models. CP stems from the premise that data is objective – the same ideology as the Research Domain Criteria (RDoC), a priority shift in 2008 by the National Institute of Mental Health away from the Diagnostic and Statistical Manual of Mental Disorders (DSM) and towards a data-driven understanding of ‘psychiatric disorders’. Since the first papers on CP in the early 2010s, the field has received tremendous investment and resources, resulting in new research centers, new conferences, and researchers flocking to this work. Late last year, for example, a research team at Duke University was awarded 12 million dollars to ‘use artificial intelligence to detect autism’.

Critical psychiatry, of course, shares the urgency to dismantle the DSM. However, while mad scholars condemn the DSM, and psychiatry more generally, for pathologizing lived experience, CP averages and abstracts away lived experience entirely, all in the name of objectivity. On this basis, I argue that CP research is both inaccurate and unjust. Writing as an autistic, neurodivergent researcher with lived experience of depression, anxiety, C-PTSD, childhood adversity, and more, I will critique CP on epistemic grounds: these models are at best inaccurate and thus will not advance understanding. More importantly, I will critique CP on ethical grounds: by failing to consider the lived experience of mad/neurodivergent individuals, CP research is invalidating and therefore unjust. I will describe the ethical issues of the research and data practices used in CP, and why objectivity is a myth. Looking forward, I will outline how we might shift away from CP toward a more liberatory, user-led research agenda that centers lived experience.

What is computational psychiatry?

Computational psychiatry (CP) largely follows one of two forms: ‘data-driven’ machine learning or ‘theory-driven’ computational simulation. The goals of CP include detection (of disorders, compared against ‘healthy controls’), prediction (of future diagnosis, prognosis, relapse, ‘risk’, etc.), and disambiguation (between clusters of disorders that may show comorbidities).

Data-driven computational psychiatry

Data-driven CP can take many forms, but generally uses ‘machine learning’, ‘artificial intelligence’ or ‘big data’ – fancy ways of describing the same concept. Machine learning models are, at their core, elaborations on regression: they take in large amounts of data as input and try to ‘learn’ a relationship between that input and outcome measures (such as symptom onset). A researcher might try to ‘detect autism’ by analyzing brain scans, video data, voice recordings, or all three. Another researcher might try to ‘predict suicide risk’ using people’s social media posts, depression scores, sleep quality, or all three. Examples of input data include self-report questionnaires (like the Beck Depression Inventory), ‘user-generated content’ (like social media posts), neural data (like fMRI, EEG, PET), genetic data, other biological measures (like heart rate, accelerometer data, eye-tracking data, face and voice data, blood tests), and behavioral responses on tasks developed by researchers. Each of these data sources comes with its own ethical issues, which I will not detail here for brevity.
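To make this concrete, here is a minimal sketch of what a data-driven CP ‘detection’ pipeline boils down to. Everything in it is synthetic and hypothetical – the feature matrix stands in for questionnaire scores or brain measures, the labels for diagnoses – but the recipe (features in, diagnostic labels out, a score reported) is the basic shape of this work. It assumes Python with scikit-learn.

```python
# A toy sketch of data-driven CP 'detection': features in, diagnostic
# labels out. Everything below is synthetic noise, so accuracy will
# hover near chance; in a real study X might hold questionnaire scores,
# fMRI features, or voice measures, and y would be diagnosis labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200                                  # number of participants
X = rng.normal(size=(n, 5))              # stand-in for measured features
y = rng.integers(0, 2, size=n)           # 0 = 'healthy control', 1 = 'case'

# The standard recipe: hold out a test set, fit a classifier, report a score.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```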

Theory-driven computational psychiatry

Theory-driven approaches in CP, on the other hand, follow a paradigm of computational cognitive modeling: simulating cognitive processes and behaviors using math, algorithms, and code. These models are typically, if not always, determined by researchers without consultation with, or even reference to, lived experience. In one version, researchers build entirely theoretical computational models based on what they think mad/neurodivergent experience or behavior is like. As a result, CP simply reinforces existing biases – for instance, stereotypes that autistic individuals are ‘bad at theory of mind’, or that hallucinations are only visual or auditory in nature. In another version, researchers construct data-driven computational models by fitting theoretical models to behavior on tasks, then comparing which models fit best in participants with a ‘psychiatric disorder’, perhaps relative to ‘healthy controls’. Differences in these parameters or models are then used to draw conclusions about differences – usually perceived deficits – in the mad/neurodivergent group.
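As an illustration of what this model fitting looks like in practice, here is a minimal, hypothetical sketch of one standard theory-driven setup: fitting a Rescorla-Wagner learning model to choices on a two-armed bandit task by maximum likelihood. The behavioral data is synthetic, and the model is just one common example, not a claim about any particular study.

```python
# A toy sketch of theory-driven CP: fit a Rescorla-Wagner learning model
# to one participant's choices on a two-armed bandit task by maximum
# likelihood. The behavioral data below is synthetic random noise.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, choices, rewards):
    """Negative log-likelihood of choices under a Rescorla-Wagner model
    with learning rate alpha and softmax inverse temperature beta."""
    alpha, beta = params
    q = np.zeros(2)                      # value estimates for the two options
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice rule
        nll -= np.log(p[c])
        q[c] += alpha * (r - q[c])       # prediction-error update
    return nll

rng = np.random.default_rng(0)
choices = rng.integers(0, 2, size=100)   # which option was chosen (0 or 1)
rewards = rng.integers(0, 2, size=100)   # whether it paid off (0 or 1)

fit = minimize(neg_log_likelihood, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(0.01, 1.0), (0.01, 20.0)])
alpha_hat, beta_hat = fit.x
print(f"fitted learning rate: {alpha_hat:.2f}")
# A CP study would fit this per participant, then test whether alpha (or
# the best-fitting model) differs between a diagnosed group and 'healthy
# controls' - and interpret any difference, often as a deficit.
```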

What’s wrong with computational psychiatry?

Misguided goals

As a branch of psychiatry, CP succumbs to the problematic assumptions of psychiatry in general: that madness/neurodivergence is a ‘defect’ that should be cured or treated, that mad/neurodivergent people want to be cured or treated, and that anyway, their subjective experience and desires are less valid than what a clinician or researcher thinks is true or desirable. CP researchers need to realize that their goals are usually not shared by the mad/neurodivergent community – and so we should interrogate whether the project of CP is worthwhile at all. CP research should not be taken lightly, not least because we operate in a society that involuntarily incarcerates or calls the police on those perceived as ‘mentally ill’. By brushing past lived experience, CP is not only invalidating, but can be deadly.

Garbage in, garbage out

Another major problem with CP is a problem with machine learning in general: “garbage in, garbage out”. That is, if the input data used to create these models are biased, the models are biased – and in CP, the data are definitely biased. For one, data-driven CP models often take diagnosis as ‘ground truth’, ignoring the biases in diagnosis and misdiagnosis that are especially acute for Black people and gender minorities. CP also inflates the self-selection bias of research participants: the subsets of mad/neurodivergent communities who have the time, energy, desire, and trust to participate in research. As a consequence, a dataset of autistic participants used to develop CP models might actually consist mostly of cis white men, leaving out others who (rightly) might not trust research or researchers. Even more strikingly, as mentioned above, sometimes CP researchers don’t even use real data to develop their models, relying instead on their impressions of mad/neurodivergent experience. This approach is bound to be wildly inaccurate, reflecting caricatures of lived experience rather than considering, for instance, that autism presents differently across genders. The upshot: CP is at best inaccurate and unlikely to generalize. Indeed, researchers have reported poor reliability of CP model parameters – and no wonder.
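A toy simulation (entirely invented, not drawn from any real dataset) shows how this plays out: if one group is over-diagnosed in the training labels, a model trained on those labels learns and reproduces exactly that disparity.

```python
# A toy simulation of 'garbage in, garbage out': the minority group is
# over-diagnosed in the training labels, and the model learns that bias.
# All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)           # 0 = majority, 1 = minority
trait = rng.normal(size=n)                   # the underlying trait itself

true_label = trait > 1.0                     # who actually meets criteria
over_diag = (group == 1) & (rng.random(n) < 0.3)   # spurious extra diagnoses
observed = (true_label | over_diag).astype(int)    # the biased 'ground truth'

model = LogisticRegression().fit(np.column_stack([trait, group]), observed)
pred = model.predict(np.column_stack([trait, group]))

for g in (0, 1):
    print(f"group {g}: predicted diagnosis rate = {pred[group == g].mean():.2f}")
# The model predicts diagnosis noticeably more often for group 1: it has
# learned the bias baked into the labels, not a truth about the people.
```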

Compensation, consent and consequences

Further, the research and data practices used in many areas of CP are cause for concern. These issues are not unique to CP; they apply to psychology and machine learning research in general. On Amazon Mechanical Turk, a popular website for recruiting participants, people take part in research for little more than cents. If participants need to work for next to nothing to survive, research participation is essentially coerced. Even on well-paid recruitment platforms, it is debatable whether informed consent is really informed, particularly when researchers collect GPS, movement, heart rate, and other such data. Worse still, many people participate in research for free, hoping that this research, and their data, might be valuable. This eagerness is unfortunate if, as I believe, the promises of CP are oversold and the risks of data misuse are high.

Although participants may superficially consent to the large-scale data collection common to data-driven CP, the vast potential of this data means that even researchers don’t yet know exactly how data might be used or whether personal information can be de-anonymized. The top conference in machine learning now mandates that researchers submit a research impact statement. This is a good step, but the future uses or misuses of research often stretch far beyond the researchers’ imagination. Participants cannot be expected to truly consent to what they do not know, yet they also have no transparency into the research process after they finish an experiment, nor any power to stop or correct its course.

Lastly, some CP research ignores consent altogether, ‘mining’ so-called ‘user-generated content’ (e.g. tweets, Facebook posts, Reddit comments) without users’ knowledge. This research is permitted because the data is considered ‘public’, but when researchers trawl for suicidal tweets, it just feels wrong not to ask for consent. CP researchers need to stop and think: if users knew what researchers were doing with their posts, would they want them to do it? These unsettling practices are all in the name of ‘objectivity’, where ‘real world’ data covertly analyzed by researchers is assumed to be more valid than someone telling them what their experience is like. This misconception, I believe, is the crux of the issue.

The myth of objectivity

Finally, a fundamental flaw of CP is its premise of ‘objectivity’ as a guiding principle. In fact, objectivity is a myth. The promise of objective measurement originated in sciences that studied the physical realities of the universe, but extending this paradigm to the study of people is problematic. Given the gender and racial biases in diagnosis mentioned above, a psychiatrist’s evaluation of someone could reach different outcomes depending on that person’s perceived gender or race. Measurement, therefore, is not objective – so how could data be? As MC Hammer said, “when you measure, include the measurer.”

CP researchers might argue that this example demonstrates the subjectivity of old-school psychiatry, the very issue they are trying to address by proposing more objective measures. However, there are just as many ways in which CP research is also in the eye of the beholder. In theory-driven CP, researchers make subjective decisions about the algorithms and parameters that define their computational models. Even when data is used to fit and decide among models, the models under consideration are still limited by the researchers’ imagination. Finally, as mentioned earlier, sampling and self-selection biases mean that the vast majority of CP research relies on skewed samples. To this extent, CP does not produce objectivity – it only stamps in existing biases. A long line of feminist science scholars has argued that there is no ‘view from nowhere’; not only, then, is research not objective, it is subjective according to a dominant viewpoint of power and privilege – the gaze of a white, cis, neurotypical, ‘sane’ man. We should stop assuming that objectivity, even if it were possible, holds some lauded status over subjectivity. In fact, feminist standpoint theory argues that a marginalized or outsider standpoint is itself privileged: it can provide more insight and accuracy than the dominant one. Once we acknowledge that all research is subjective, we can be liberated.

Nothing about us, without us

Freed from the myth of objectivity, I argue we should center lived experience. The issues above exist only because no one consulted mad/neurodivergent individuals on whether the research agenda of CP was valuable or desired, whether people wanted their data used in this way, or whether people’s lived experience aligned with researchers’ assumptions. There are reasons for this omission. First, researchers who exalt objectivity consider subjective experience unscientific. Second, research is perceived as ‘biased’ if it comes from the mouths of people who have a stake in research about them (as if that were a bad thing). And third, mad/neurodivergent lived experience in particular is often considered unreliable, since it includes beliefs or sensory experiences that others consider unusual. Much has been written already about the importance of lived experience in research, and I refer the reader to existing work for the full argument. In the context of CP, I argue that lived experience matters not only for reasons of justice, but also because, without it, research will forever be flawed – and thus, in my opinion, a waste of resources.

Towards a more liberatory research agenda

So, what’s next? I argue we should look to participatory research, an umbrella term for research that in some way includes affected communities. There are many excellent resources on this topic, including Decolonizing Methodologies by Indigenous research scholar Dr Linda Tuhiwai Smith. One way to summarize the diverse forms of participatory research comes from Participatory Neuroscience: Something to Strive For?, which describes a five-tier scale of community involvement in research (described below).

On the lowest rung, the research agenda is entirely dictated by researchers, with all power and decision-making concentrated in their hands. CP research is currently at this stage. One rung up, we find citizen science, where communities participate in the research process, for example by collecting data in their local communities. On the third rung live community consultation projects, where researchers actively solicit feedback from communities but ultimately retain the final say over whether to incorporate it. Fourth, we find community-based participatory research (CBPR), where researchers lead projects but communities share power – for instance, in deciding research directions or terms of data ownership. Finally, the highest rung is my vision for the future: user-led research that is entirely directed by communities, as in the case of survivor research.

Paltry participation is no better than tokenism, and my suggestion is not to superficially poll a sample of mad/neurodivergent individuals who are likely to be privileged white men, whose experiences may differ vastly from those of racial and gender minorities. We can do much better, and we should.

Ultimately, user-led research might mean that CP should not exist at all. That is up to the people to decide; it is not my place to assume what the community wants – that would be no better than the research I have been critiquing. For now, I hope I have described the ways in which CP research thus far has been misguided, unjust, and destined to be inaccurate. Although the field is young, even CP researchers acknowledge that it has not met its promise. Time will tell whether CP can be rescued, or whether it is doomed to fail.


Arbitrary Units is a psychology researcher.

