Reader, I’m working on a project to enhance the use of GenAI in university learning design.
No, really.
This isn’t quite an “if you can’t beat ’em, join ’em”, nor is it precisely a “smash the system from the inside”. More honestly, it’s anthropological. (I say that a lot. I think I subconsciously identify as a different species, constantly trying to observe the natives and figure out their dance moves.)
It’s an opportunity to explore at close range the space between rubber and road. Although we’ve had two years of aggressive hype, a near-incessant barrage of product releases, and many early-adopter content designers leaping in with exploratory use cases, there remains an obvious gap between promise and practice. I’ve come loaded with assumptions and prejudices about the reasons for this (the promises are lies!), but this project forces me to engage in a way that sets those aside.
A dear friend introduced me to a concept that’s especially relevant here: affirmative critique.
Affirmative critique is not limited to resistance against an argument or a position; rather, it pays attention to the ‘not-yet’: to what could be different and could transcend the present.
In negative critique, we are defensive. We are trying to protect something from something else. “Guard the doors!” “Don’t let the monsters in!” “Don’t let them witch you!” Negative critique is essentially subtractive. Affirmative critique isn’t additive (we’re not blindly welcoming the baddies), but it is generative.
In other words, not “yes, and”, but “yes, if”. I need to listen to the case for GenAI in education, acknowledge that its vision is flawed, and open up speculation about how that vision might shift to something more productive and hopeful.
Affirmative critique of GenAI
GenAI tools are not a complete solution to anything, but GenAI “solutions” continue to proliferate in the education discourse and demand a response. I have been saying no. Honestly, I’m still saying no. I have a lot of good reasons to say no. In affirmative critique, those reasons don’t go away. I’m not ignoring them when I say “yes”. But they mean I have work to do on my “if”. It’s not “no”; it’s “not yet” — so what conditions are required to manifest this unrealised potential? How can I stand with hope?
In essence: if I am criticising, it should be because I care. Critique isn’t (well, shouldn’t be) negativity for its own sake; it should come from a place of concern. And if I care (I really do), then I must have some idea of what’s at stake — and an affirmative critique is one that generates possibilities for enhancing, not just defending, the things that matter.
I think three things are at stake in adopting GenAI in education.
One: Human relationships. Education is fundamentally about creating and sharing knowledge. (Sidebar: the “skills agenda” is a misdirect. Skills are things our bodies and minds know how to do — they are knowledge.) And knowledge only exists through knowers: teachers, learners, practitioners, scholars. People. GenAI is not a knower. It processes data, but it doesn’t know that data. When it comes right down to it, I want education to stay with human relationships.
Two: Professional ethics. There remain absolutely enormous ethical problems with all GenAI products and providers. It’s all stolen land. No developer has been, or possibly can be, fully transparent about where its datasets came from. No developer can claim its products are not accelerating the irreversible destruction of natural resources. If education becomes a space that adopts AI because it makes life easier, accepting these wrongs as “just the way things are”, our profession becomes just a little bit more evil.
Three: Societal norms. Education isn’t just about describing how the world works — it also plays an enormous role in defining it. Students come to us to learn how to exist in the world, and we tell them what we think they should know. These lessons shape what they do in the world. For example, the grammar our children learn in school becomes the grammatical norm of their generation. I am afraid that, by creating labels like “AI literacy”, we are setting dangerous norms of AI universality with unknown consequences.
As always, this post is a spark, not a roaring fire. I can’t and won’t try to resolve these concerns in some neat little coda. But what I think I need to do is this:
reflect on how GenAI usage could not just maintain but improve human relationships in teaching and learning;
consider how to balance AI’s ethical failings against the pragmatics of living in a world already choked with ethical quandaries; and
engage with the educational community about the societal norms we wish to promote through our work.
Agnostic?
A note on that word “agnostic” in the title of this post. In October, a team at QUT published a report on the state of GenAI use by university staff in Australia. They’ve categorised staff as apostles, agnostics or atheists when it comes to GenAI adoption. I think this is an interesting set of labels, because it casts the enthusiastic adopters as followers of a religion. The authors characterise “agnostics” as lacking in capacity and skills. That may be accurate, but I’d add a dimension to it: an agnostic is someone who doesn’t know. It’s not just about not knowing how; it’s about not knowing whether to believe.
An atheist is a person who has definitively positioned themselves as a non-believer. A negative critic, if you like. An apostle is on a mission to spread belief. But an agnostic demands evidence.