First person: When technology talks back
From Clippy to Claude, when we give our technologies eyes and "I"s, it's impossible not to imagine them as having identities. This post reflects on the effects and affects of anthropomorphising tech.
Well everyone, it’s happening! Scientists have attached living human skin to a robot “face”. It’s not just laid or stretched over it, but literally anchored to the underlying structure through ligament tissue, enabling it to… smile. Look at this… amazing… thing.
I know. I know. I feel the same. But.
If something is presented in human “skin”, we respond to it with intuition and affect before economic rationality kicks in (System 1, for anyone into Kahneman). And when that something is presented pleasingly, it's a UX strategy to reduce friction.
In this way, we can see that anthropomorphism of technology (like generative AI) is largely a matter of behavioural economics. By activating our human propensity to ascribe human features to things that have eyes and “I”s, AI developers can invoke our emotive responses to lead us towards forming relationships with their products.
In the case of Microsoft’s Clippy (officially “Clippit”), those responses were notoriously negative ones.
Welcome to the valley of the (killer) dolls
Clippy’s main problem was that it wasn’t human enough. Despite having eyebrows and plenty to say, it never listened. It behaved as though it assumed Office users needed help with everything, no matter how many times they had done it before.
But if human-like features are too close to real, they tend to fall into the “uncanny valley”: literally a valley in the graph of our comfort levels with various degrees of anthropomorphism in non-human things.
We get progressively less comfortable the more human something becomes while clearly still not being human — dolls, clowns, zombies and hyperrealistic sex dolls all fall into this category. (I personally feel the friendly droid on the graph above is a little bit creepy because of its Clara Bow eyebrows and blue irises, but YMMV.)
What’s interesting is that the current crop of chatbots are sidestepping the uncanny valley. Why might this be?
I think perhaps it’s because they don’t have almost-human faces or voices (with the exception of GPT-4o’s ill-fated ScarJo imitation). They aren’t near-missing it. But they are producing a pretty spot-on simulation of real human text conversation. The moment OpenAI added a chatbot interface to its GPT-3.5 large language model, the world felt seen and fell in love.
This wasn’t the first time it had happened. Those who spoke with ELIZA (a chatbot programmed in the 1960s to behave like a Rogerian psychotherapist) also felt seen, fell in love, and refused to believe that ELIZA didn’t genuinely care about their feelings.
Meet Lucy
About 20 years ago, an English scientist named Steve Grand built a hideous little rubber ape-faced AI robot he named Lucy, deliberately “personified” in an unsettling way that prevented users from sliding into easy human interactions with her. She could see, hear, move and speak. And she was not pleasant to watch.
(Side note: when I originally read about Steve and Lucy, I was so struck by the scenario that I wrote a piece of speculative fiction about it. I’m currently trying to recreate, or reinvent, the story — see my posts on ART for the work in progress.)
Grand worked diligently on his creation for six years, refusing all funding from organisations who might demand in return some kind of “result” on some kind of “timeframe”. He nurtured Lucy a little like a special needs child, proud that she taught herself how to say “Arp!” and that she could, eventually, recognise a banana.
(If you’re curious, today Lucy lives in the UK Science Museum collection.)
Grand’s approach fascinates me because it’s so deliberately opposite to what tech developers have usually tried to do: outfit their creations in appealing ways to win hearts and convert leads. Lucy was actually upsetting to look at. She had a nasty rubber face. She frequently went without “clothes” (the bottom half of her could be dressed up in a sort of orang-utan “skin”). And her eyes were actual, movable, functional eyeballs that could see things.
Lucy was deliberately anthropomorphised not so that she’d make us comfortable, but so she’d do the opposite.
Takeuchi et al. (creators of the killer goo) are working towards a kind of skin-wrapped robot that smooths and soothes. But I wonder how we might feel about our emerging technological playmates if they were packaged in ways that forced us to acknowledge what they weren’t.