“If you don’t embrace AI, someone more evil than you already has.”

Mark Brown
Jun 20, 2022
[Image: someone holds a dot-matrix-style screen reading ‘Hideous Face of Crumpled Linen’ in a darkened room]

The following is the text of an opening remark made by Mark Brown via Zoom to the Royal College of Psychiatrists International Congress 2022 fringe debate: ‘0930 clinic appointment with Skynet? This house believes that the RCPsych should embrace Artificial Intelligence and Big Data in guiding clinical decision making and service development’

My first point is ‘the machines aren’t going to replace you, so get over yourselves’. Depending on what you feel the purpose of psychiatry is, the chances of you being replaced any time soon are marginal. What you do is complex, and it’s embedded in complex legal, ethical and practical frameworks. Imagine the legal hassle involved in replacing psychiatrists with AI.

But I think that you might need to up your game a bit. Psychiatry and mental wellness are not synonymous. Yours is a specialism, focused on disorder and its prevention, treatment, mitigation and cure. Or something like that. There are already areas of your discipline that look like science fiction, and that already use varieties of machine learning as a way of processing and making decisions based on big data.

So you have already embraced AI. And you did it decades ago. And the general population might well prefer AI to you, depending on what it expected you to do.

It’s always instructive to me to see who considers themselves at risk from AI, and therefore important in this debate, and who doesn’t. Knowledge workers fear their own obsolescence. I don’t see any of you crying for the demise of the switchboard operator or the canal bargee.

As a special elite, you fear falling prey to the forces that are reshaping so many other people’s working lives. You fear becoming the fleshy interface with the world at the end of a long chain of cold, artificial decisions, like someone who picks products in an Amazon warehouse. You fear that you will be demeaned in your specialness. And you fear you will be replaced.

But why?

The area of AI application that most disturbs people is the idea of an artificial intelligence mimicking human decision making. But why? Our world is full of experiences that use technology to mimic other things. Your lossless FLAC file of a sad trombone noise is not an actual trombone being played sadly. Reading a novel might feel like gossip about imaginary people, but it is not, in fact, someone telling you a story in person; it is a collection of symbols organised in such a way as to make it feel like it is. A lightbulb isn’t the sun.

When we think of AI that mimics human decision making, what we are afraid of is AI that overrules human decision making. We’re afraid of the failsafe that fails. The automatic setting that can’t be overridden when an unexpected event happens. We are afraid that any interaction that falls outside of set parameters will end up being like one of the automated phone systems where our problem doesn’t fit any of the options. We assume that AI will have less capacity to improvise based on novel conditions.

But that’s the point of AI and machine learning, so what we’re actually more afraid of is AI improvising, us not noticing, and then finding out that someone has been harmed by a deeply flawed decision. A real-world example: seven years ago, Google’s own image identification software, used in applications like Google Photos, turned out to be massively racist because the model couldn’t tell the difference between people of African descent and gorillas. The AI did not know the history of ‘scientific racism’ or the real-world impact of its mistake. It was destructive, hurtful, racist, and it didn’t even know. AI privilege, anyone?

Machine learning is based on feedback loops: learning from data, spotting patterns, and making a guess as to what the next step is. In the training of AI, the feedback loop is what shapes the pattern of the decisions being made, the system dynamically learning through the process of making mistakes.
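To make that concrete, here is a minimal sketch of the kind of feedback loop I mean, in Python. It is not anyone’s production system; the toy data, the linear model and the learning rate are all illustrative.

```python
# A toy feedback loop: make a guess, measure the error, feed the mistake
# back into the parameters, repeat. All values here are illustrative.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed outcome)
weight, bias = 0.0, 0.0
learning_rate = 0.01

for epoch in range(1000):
    for x, y in data:
        guess = weight * x + bias            # make a guess
        error = guess - y                    # compare the guess with reality
        weight -= learning_rate * error * x  # the feedback: adjust by the mistake
        bias -= learning_rate * error

print(f"learned: y is roughly {weight:.2f}x + {bias:.2f}")  # roughly y = 2x
```

The model is never handed the answer; it is only told how wrong its last guess was, thousands of times over.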

Where is the feedback loop in psychiatry? And how tight is it? How often does anyone come back to you and tell you that you made the wrong call? AI doesn’t get the hump when it makes a mistake, and AI doesn’t get to ignore feedback just because it doesn’t like the person who gave it. AI doesn’t have ego.

But what we are also afraid of is that a human-machine interaction will be less virtuous, less healing, less meaningful than a real one.

ELIZA was a computer programme created by Joseph Weizenbaum at the Massachusetts Institute of Technology Artificial Intelligence Laboratory in the mid-1960s. Weizenbaum set out to prove that human and machine interactions were superficial. Participants interacted with a program modelled on a Rogerian therapist, which simply reframed written statements and reflected them back as questions: ‘I hate my sister’ would become ‘You hate your sister?’ or ‘Would you care to elaborate?’ Weizenbaum was shocked by the extent to which participants gave ‘non-superficial’ responses and actually said that these artificial interactions were helpful, enjoyable and meaningful. The participants knew they were conversing with a rudimentary program, but the interactions were nevertheless meaningful to them.
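The trick is almost embarrassingly simple. Here is a toy sketch of that reflect-and-reframe move in Python; it is not Weizenbaum’s program (the original used pattern-matching scripts written in MAD-SLIP), and the pronoun table and stock replies are made up for illustration.

```python
import random
import re

# A toy ELIZA-style reflection, illustrative only: swap first-person words
# for second-person ones and hand the statement back as a question.
PRONOUN_SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}
STOCK_REPLIES = ["Would you care to elaborate?", "How does that make you feel?"]

def reflect(statement: str) -> str:
    """Turn 'I hate my sister' into 'You hate your sister?'"""
    words = [PRONOUN_SWAPS.get(w.lower(), w) for w in re.findall(r"[\w']+", statement)]
    return " ".join(words).capitalize() + "?"

def respond(statement: str) -> str:
    # Half the time reflect the statement back; otherwise use a stock prompt.
    return reflect(statement) if random.random() < 0.5 else random.choice(STOCK_REPLIES)

print(reflect("I hate my sister"))  # -> "You hate your sister?"
```

A few lines of string substitution, and people told Weizenbaum the conversation was meaningful.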

AI doesn’t argue its biases and mistakes are justified. AI doesn’t make excuses. Humans do though. So my question is: what is it about psychiatrists that makes them assume AI will be used to replace them? Yes, AI could do things that psychiatrists do, but at scale. But why would you assume AI wants to do away with your profession? At present, there aren’t enough of you to fully meet the demand for your services that your own profession created.

I say: what decisions do you cock up the most, get wrong most often, find most arduous to make? What do you want to automate so you can get on with the important stuff? AI might be your friend there, if you could only wrestle it from the hands of Randian pillocks, people who would toast marshmallows on a burning building if it could make them a few quid, and messianic weirdos. But that’s another story entirely.

Thanks

@markoneinfour



Mark Brown edited One in Four, a mental health magazine, 2007–14. Does mental health/tech stuff for cash (or not). Writes for money. Loves speaking. Get in touch.