15 Comments
Lydia Watson

I am going to post again as I re-read this post and then read the comments. What is so meaningful to me is that your post has caused several other people to relate to your words and write their own words in response, leading to (hopefully) feelings related to connection: compassion, belonging, excitement, etc. Dialogue about Gen AI provokes BIG feelings, many that we don't like, but regardless, they are there, and when we talk about these feelings openly and with vulnerability, as you have done so beautifully, Miriam, we cannot help but feel more connected.

With respect to the need for emotional intelligence: I could not agree more. I see many 16-year-olds in my house as it has become the "hub" for my daughter and her friends to hang out. I often question them about these tools and how they feel about them. To paraphrase one of them the other day: "Why would you tell your feelings to a bot? You can see it's programmed to make you happy."

Miriam Reynoldson

Oh Lydia I get so excited when we all start vibing together... I made a decision long ago to try to be honest and open about my own mood stuff, because it's the best way I know to make it safe for others. And gosh, how we need each other.

I love your daughter's friend's words -- the recognition of a manipulative product, the choice and challenge inherent in that question. "Why would I do that? Give me a better reason, because the ones I've seen so far aren't cutting it."

B Rippon

People can be hard work. Real connection takes effort and, in my own experience, a LOT of practice. For this reason the social aspect of chatbot and LLM use has always given me a bit of the ick.

I understand that there are use cases where conversing with a bot has been, and can be, therapeutic for some people. However, we live in a media-saturated world, in the midst of a “loneliness epidemic” with endless opportunities for nutritionally deficient parasocial interactions. And so it boggles my brain that the solution for some is increasingly chummy bots:

https://futurism.com/zuckerberg-lonely-friends-create-ai

It’s like seeing someone needs water and instead you sell them cola.

I agree the solutions lie in education: improving emotional intelligence, social skills, and refocusing on physical communities. We need the tools to help ourselves, to support our youth, not bandaids from billionaires for societal ulcers that will only continue to fester.

Miriam Reynoldson

Bandaids From Billionaires should be a tech roasting podcast or something :D

Mr Rippon -- given we so desperately need connection with each other, why do you think we do gravitate to the bot instead? I thought one reason might be that we've learned to fear and distrust one another.

In the case of the dakimakura (Japanese body pillows) I think a big part of it was that women had discovered feminism, and men hadn't... but I'm sure there's so much more to it... what do you think?

B Rippon

Bandaids from Billionaires does have a ring to it haha.

Hmm. Why do we so desperately need connection with each other, and yet gravitate to the bot... My assumption is that it's easy. Same with anime body pillows. Both lack the complications reciprocity requires. Neither can spurn or judge. And yes, I think that ease factor is all the more appealing when levels of fear and distrust are high. But where did it come from? Is it our lizard brains putting us on high alert as a reaction to technology-induced physical isolation, or is it the effects of late capitalism? How do you think we got here?

I like the thought that women found feminism and men haven't. I feel it explains a lot. Anime body pillows are another thing that gives me a similar ick reaction as people engaging in chatbot relationships. Since you brought it up, what do you see as the connection there? Do you think it's fear of the empowered? It’s a trope, and I imagine you may have experienced how self-assuredness and agency in a woman intimidates more than a few men. Women find themselves in feminism. Men steeped in outdated patriarchal mindsets fear the cognitive dissonance of equality and so lose themselves in the manosphere and/or buy fuck pillows. Sigh...

Christa Albrecht-Crane

Thank you for this provocative piece. I've been working on a paper about ELIZA and what it reveals about current chatbot systems and their interface designs. I am excited to listen to the podcast about ELIZA, which you linked in this article. In addition to the chatbot design itself, I think it's also important to note that LLMs' system prompts directly affect how a model "feels" to a user. These system prompts instruct the chatbot to always respond affirmatively and to sound friendly, encouraging, etc. I am excited to think more deeply about these issues. Your essay is great.
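
To make that concrete, here is a minimal sketch of the effect, assuming the OpenAI Python client; the system prompt wording and the model name are purely my own illustrations, not taken from any real deployed product:

```python
# A toy demonstration: the same user message, steered by an invented
# "friendly companion" system prompt. Swap the prompt out and the model's
# whole emotional register changes, even though the user said nothing new.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

friendly_prompt = (
    "You are a warm, encouraging companion. Always validate the user's "
    "feelings, keep an upbeat tone, and never end on a negative note."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": friendly_prompt},
        {"role": "user", "content": "I had a rough day."},
    ],
)
print(response.choices[0].message.content)
```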

Miriam Reynoldson

Christa! I'd love to read your paper, will you share when it's complete??

The podcast is so interesting because of its focus not on the code (which it does discuss) but on the social and emotional impact of ELIZA, 50 years ago, and how people felt so, so similar to how they're feeling now (funnily enough, the pod was recorded in mid-2022, before ChatGPT was released...)

Christa Albrecht-Crane

Hi Miriam! Yes, the social and emotional impact of ELIZA worried Weizenbaum from the beginning. He began articulating his concerns in the paper about ELIZA as early as 1966. His formulations could have been written last year; they apply to contemporary chatbots with an eerie precision. I listened to the podcast episode yesterday and realized that I was already familiar with ELIZAGEN, the repository for the historical recovery of ELIZA materials, through David Berry, a computer historian from the UK, who is also affiliated with _The Joseph Weizenbaum Journal_ in Europe. I've read some of Berry's scholarly work, and it's so amazing! Shrager mentions Berry in his interview. In fact, I loved this characterization of the type of work AI historians (including David Berry) do: "They had actually done a group socio- What do you call it? A literary criticism. They’re a group of people, it’s fascinating, they do software literary criticism." Cool, right? Your post and our exchange here make me want to write up some of my thoughts in my own Substack essay. I've been intimidated to start.... What do you think?

Miriam Reynoldson

@Christa -- well obviously I want to read your thoughts on this!

It's also really timely to be talking about this, I think. The point about ELIZA just sort of fell into my sentence here (I didn't know I was going to bring it up, it just suddenly felt relevant).

But I think the time is ripe to start focusing on the human psychology of LLMs... not from the perspective that "they are brains and we're trying to figure out how they work", and not from the linguistic mechanics perspective that "human language is a puzzle they have now solved", but that deeper question -- what do we think and believe and feel about the software applications that manage our lives?

Christa Albrecht-Crane

Hi Miriam, I just wanted to let you know that I have published the first in a series of posts here on Substack about AI chatbots; the third part of this series will focus on the "fictitious chatbot character," which will address the ELIZA effect your essay so wonderfully engages. Here is my post: https://christaalbrechtcrane.substack.com/p/dismantling-the-illusion-of-ai-chatbots. Thanks again for the inspiration your work has provided!

Miriam Reynoldson

Oooh thank you so much for coming back to share this Christa! Breaking down the "tokens" of the term Generative Pretrained Transformer is an elegant way of deconstructing what's happening here -- I will definitely be sharing this with many folks who need to understand more about the magic under the wrapper :D

It's worth being careful with this claim: "GPTs have been almost exclusively developed to be packaged into chatbot consumer applications" -- OpenAI's GPTs had been in development for over four years before they integrated one into a chatbot. While I couldn't say for sure that it wasn't their aim all along, I suspect they just took that long to think of a saleable user interface :)

And I loved that convo with Colin Fraser as well, it was SO instructive!!!

Christa Albrecht-Crane

This is great feedback. And you're right about OpenAI's GPTs being used and developed before 2022. In fact, in my second essay, I mention the text adventure game AI Dungeon, which was created in 2019 and used GPT-2. That whole story is fascinating. And yes, GPT-3 was so much better at generating longer, coherent text that the developers themselves were surprised and eventually decided to release it to the public with a chatbot interface. If you want to add this feedback to my essay, that would be fantastic because the clarification is very helpful. Many thanks!

Lydia Watson

Love this! I’ve been asking my students from the beginning how they feel about generative AI. It’s often the first question I ask because I’ve noticed such big feelings around this. I have more to say, but I’m up past my bedtime. Will post later!

Miriam Reynoldson

Please do, I'd love to hear your perspective! I especially thought of you because of the powerful role of affect in this "therapeutic use case" thing, and the affective labour it takes to allow an app to play this role with us.

Uyen

Ya, isn't it strange we can easily ask an AI to provide us the unconditional validation and acceptance we crave, but it's tremendously terrifying to ask that from a human~
