Is it all right to be wrong?
Why the greatest use case of generative AI is not knowing, but feeling.
I’ve struggled with depression a lot over the years, and I’ve seen quite a few clinical psychologists in my time. I can’t remember which one said this, but it stuck with me:
“Instead of trying to figure out what’s true, start asking yourself what’s useful.”
They didn’t mean that truth doesn’t matter. They meant that sometimes, when we’re mentally struggling, the truth doesn’t help. We can put all our emotional energy into trying to combat negative assumptions and distorted thinking, only to be left with a sense of helpless unhappiness despite apparently “knowing” we have no good reason for it.
Because feelings are not the same as thoughts.
Because we can’t think our way out of sadness, out of loneliness, out of hurt.
I’ve been reflecting a lot lately on the emerging data that suggests the number one use of generative AI is therapy/companionship. This includes conversation, friendship, romantic and sexual relationships, mental health counselling and frontline medical advice. These people aren't looking for information or truth — they’re looking for connection.
I’m not judging. I know what loneliness feels like. I know that people aren’t always around. And when they are, I know they don’t always help.
Large language models are terrible sources of information. That’s just… not in question. I will continue to tell you that LLMs hallucinate 100% of the time, every time, because they can’t see.
One of the greatest mistakes we ever made with generative AI was assuming that “producing information” was ever a use case. Yes, these models have ill-gotten access to an unfathomable quantity of digitised human data, but the probabilistic extrusion they perform on it is not interpretation, not reasoning, not knowing.
But you know what they do do — really well?
They make us feel good.
An LLM can make me feel like my questions aren’t stupid. An LLM can make me feel like I’m not alone. An LLM can make me feel like I’m making progress. An LLM is never too wrapped up in its own problems to hear about mine, never too busy with its own projects to help me out.
It’s not true, but it’s useful.
It’s wrong, but it feels right.
Please understand me. I’m not condemning this use case, and I’m not advocating for it. I’m just saying I get it. We’ve gone all the way back to the beginning, to ELIZA the Rogerian Therapist chatbot who we wanted to believe was real even when we were told point blank she was a bot. Because, you know, the way she made us feel was real. She made us feel heard — and we needed that.
I’ve written about this before, but my focus was on the anthropomorphisation of the apps, not on our reasons for being attracted to it. I want to focus less on the technology (i.e., on its developers), and more on our own behaviour and desires. We’re the ones creating this world.
What we need to do about this
As many of you know, I think “AI literacy” is utter garbage. It’s a buzzword invoked for political purposes, to cajole people into believing they have to get with The Program (which is whatever the person talking wants it to be).
But we do need to live in the world, and the (white-collar) world is having a prolonged anxiety attack right now about generative AI. You know what that requires?
Not AI literacy.
Emotional intelligence.
We desperately need to start paying attention to how all of this is making us feel — because that’s what’s driving our actions. All of them. Everyone’s.
We are lonely. We are scared. We are ashamed. We are financially precarious. We are hurting each other, and that pain is making our pain worse. We’re also, most of us, pretty digitally illiterate (we can send email but we don’t know the first thing about SMTP) and we still think digital technology is magic. And let’s be honest, that feels pretty good. A little bit of sparkle and wonder in a world that seems to grow dimmer by the day.
Pay attention to how generative AI makes you feel. And then ask yourself this.
How is it that we’ve co-created a world in which we need to take solace in companionship from bots?
Could we… not?
I am going to post again as I re-read this post and then read the comments. What is so meaningful to me is that your post has caused several other people to relate to your words and write their own words in response, leading to (hopefully) feelings related to connection: compassion, belonging, excitement, etc. Dialogue about Gen AI provokes BIG feelings, many that we don't like, but regardless, they are there, and when we talk about these feelings openly and with vulnerability, as you have done so beautifully Miriam, we cannot help but feel more connected.
With respect to the need for emotional intelligence, I could not agree more. I see many 16-year-olds in my house as it has become the "hub" for my daughter and her friends to hang out. I often question them about these tools and how they feel about them. To paraphrase one of them the other day: "Why would you tell your feelings to a bot? You can see it's programmed to make you happy."
People can be hard work. Real connection takes effort and, in my own experience, a LOT of practice. For this reason the social aspect of chatbot and LLM use has always given me a bit of the ick.
I understand that there are use cases where conversing with a bot has been, and can be, therapeutic for some people. However, we live in a media saturated world, in the midst of a “loneliness epidemic” with endless opportunities for nutritionally deficient parasocial interactions. And so it boggles my brain that the solution for some is increasingly chummy bots:
https://futurism.com/zuckerberg-lonely-friends-create-ai
It’s like seeing someone needs water and instead you sell them cola.
I agree the solutions lie in education: improving emotional intelligence, social skills, and refocusing on physical communities. We need the tools to help ourselves and to support our youth, not bandaids from billionaires for societal ulcers that will only continue to fester.