I was struggling a little to get up the enthusiasm to engage with the conversation in 2025. The moment that LinkedIn yawned and stretched awake from its Christmas coma, GenAI was back again. I’m in a particular kind of bubble — I get it. My professional network is dominated by “knowledge workers”, shouting to get their various commentaries heard. Their (our) problem: the work they do is Big LLM’s juiciest and lowest-hanging peach. Their work is words; their work is ideas; but most importantly, their work is digital. It’s all swirled up in that imaginary soup of ones and zeroes that have just been aggressively slurped up by the GenAI machine. The evangelists and the doomsayers alike are screeching for the last word, but every single one of them is sitting in the same sinking boat. Me included.
I won’t try to deconstruct the various threads of their arguments. Some guy named Casey did that last month and got lynched for it, and it was all very interesting and ultimately irrelevant because whether AI is real or not, whether AI can do great things or not, whether AGI is coming or not, AI is happening to us. And as far as I can tell, it’s not benefiting anyone more or less than anyone else (except for the sweatshop labourers).
It’s heartening, though, to begin to see a quickening of the AI refusal movement in education. I know none of us can refuse it all. It’s not possible to opt out of every platform that’s given itself an AI makeover in the past 26 months. But we can state our position, pursue our preferences and show others (from tech developers to developing minds) that AI refusal is a legitimate stance.
Since the year began, Catherine Denial has published a short, sharp statement of her position on saying no to Generative AI. Josh Eyler has debated on behalf of the resistance to AI in university teaching. Helen Beetham has eloquently described the malAIse I also feel. And Audrey Watters ("EdTech's Cassandra") is back and speaking truth.
I don’t have anything like the influence of these amazing humans, but I’d decided I wasn’t going to commentate (*cough* much). So I’m incredibly glad to have contributions like these to point to when I say: same.
I’m also starting to find more and more people interested in “digital degrowth” and “rewilding” of digital spaces. (Thank you, Eamon Costello, whose Hack This Course: Digital Pedagogy Rewilded I can’t stop sharing with my learning design students.)
This is exciting for me as a Digital Learning Person who has grown weary of the word “innovation”, because the rewilding movement is redefining innovative pedagogy. That is, it’s reclaiming it from technology and managerial efficiency, aiming to restore a messy proliferation of ways to know and do and be and share and create and learn, because nothing new grows well in a concrete garden.
I guess all this is to say: I share Helen Beetham’s AI malaise, but more than that, AI conversations are straight-up boring. There’s really nothing new going on. Gary Marcus will give you the whole spiel about the AGI lie; he’ll also give you a cheeky warning about trusting AI agents (spoiler: don’t). Part of me wants to go on a rant about the complete misuse of the word “agent” to describe what are, in effect, stochastic smart fridges, but a much bigger part just wants to change the bloody subject.
There’s so much else to talk about. There’s so much else to think about. There’s so much to learn, and so much to do — on this actual planet, which needs us.