Do you feel as I feel about the breathless adoption of generative AI at all levels of education? If you are an education professional who has the resources, the freedom, and the will to support this call, please sign this open letter.
This letter addresses the harm AI does to students' ability to think and express themselves, and the unethical way in which most GenAI tools have been trained; but it does not sufficiently emphasise the unreliability of the information these tools proffer, due to hallucination, their inability to detect errors in transcription or to distinguish facts from jokes when trawling social media, the amplification of errors by repetition, and so on.
This is true, Jane - the quality of outputs continues to be shocking, and because of the fundamental nature of these products as text prediction programs (without capacity to reason or discern fact from fabrication), this lack of quality and veracity is something that will not change.
This is definitely relevant and seriously problematic. I'm not sure why we didn't emphasise it!
Bravo 👏
This letter is wonderful - thank you so much for writing it! I haven't yet received an e-mail to verify my signature - does it usually take a while? I've tried both my professional and personal e-mail addresses to no avail and checked my spam folders.
Sara, thank you!! I signed the Dutch letter on the same topic yesterday, which uses the same open letter website as a platform, and it took about half an hour before I got the verification too... I didn't realise this could take so long, I'm sorry! Hopefully it comes soon.
Thank you! I'll keep looking. :)
Oddly, I still haven't received an e-mail to verify my signature... but please consider my signature verified!
Hi Sarah (sorry for misspelling earlier!) I really appreciate that you kept a lookout, and sorry the platform hasn't come through for verification... it's a great resource to exist but I'm sure underfunded and not perfect :)
Thank you again so much for vocally supporting this, it means so much to me.
I tried using my Proton e-mail address, and it worked, much to my joy! I'm very happy to have been able to sign. :)
I'm surprised to find that Proton is working really well for me :) How long have you had an account?
I'm completely empathetic, but are the students - and the companies many of them will work for - going to sign one as well?
Funny you should ask that! A friend reached out to ask if there was something like this that students could get involved with, because their kid was really outspoken and concerned about the forcing of AI in their education.
I've suggested that students could sign this as well, but it's not really written from that perspective. I'd love it if a student community did something like this in their own voices.
As for companies, the reality is they are working it out by adopting AI and then losing millions. This pattern is sweeping through everywhere, including Microsoft most recently with some godawful number of layoffs. I don't see a way to stop this, which makes me so sad, because jobs will continue to be cut as companies realise their losses and try to shore up their finances.
Maybe someone with better business sense and influence than me might see another way :s
I consider myself an AI pragmatist. While, again, I completely understand where you are coming from, I just don't know if the long-term strategy is to resist. Pew statistics just came out showing that the number of adults who've used AI has doubled since 2023 (to 34%, and 58% among those under 30). Your friend's child should continue to advocate for AI-free spaces, but it seems he or she may be in the minority on this issue (why can't these kids be against cell phones too? :)). From my vantage point as a HS educator, AI is everywhere, and if we abdicate our authority on this issue maybe we lose credibility. I don't know. I don't think it's inevitable in colleges and schools - at least not in the way it's being pushed by the tech companies - but I do think it is inevitable as a force in larger society, and opting out just doesn't seem like the best way to exert influence. So I applaud you for taking a stand - continue to fight the good fight - but I can't sign your letter. I also do use AI in a lot of useful ways, so that's an entirely separate issue, but I do share your larger concerns and they are important issues to raise.
Thank you Stephen, I really appreciate your thinking :)
Out of curiosity (not pressure!!) what about this do you see as abdicating authority? That's certainly not my intention - I carefully avoided the implication that we would avoid talking about GenAI in any way, only stated that we wouldn't teach "AI literacy" as a technical skill in place of the real skills we are there to teach.
I'd love to talk more about ways that might effectively exert influence, as this is such a sensitive and complex issue. I don't think my friend's kid is in the minority, but it may seem like it because speaking up is really hard and dangerous. I've spoken to SO many people over the past 24 hours who have said they want to sign but can't because they need their jobs. Think of the power differential for children and young people -- it breaks my heart.
It's a great question. I've written about my sense of the genAI moment and education extensively on my Substack, but I guess where I get stuck is how politically loaded your letter is, stating as facts ("unacceptable legal, ethical and environmental harms...") issues which are selective. Unacceptable to whom? For example, why are the harms of AI worse than the harms of the exploitation of factory workers manufacturing iPhones in China or in the warehouses of Amazon? That's not to say these are not very real issues - I get your environmental bona fides and deep concern for the future of our planet - it's just that, as written, there is nothing in the letter which acknowledges that the issue may be more nuanced than as stated. I don't think anyone says that AGI will "render human intellectual and creative labor obsolete."

So maybe abdicating authority is the wrong phrase, but if, as a teacher, I come into my classroom and throw down the gauntlet, laying out my position on any polarizing issue (take your pick - I teach Government and Politics in the United States!), I've just lost half the class, and isn't it my job to let them decide for themselves? What about the student who has found brilliant ways to use AI to overcome learning issues or equity issues? Where do they fit here?

One last point - I think the strongest argument currently on the side of keeping AI out of schools is the most important: its unproven effect on student learning goals. It's not that the other issues don't concern me - they do, but my not using AI is not going to have much of an impact there - but how I actually approach and talk about AI in my own practice with my students is what matters most. And I personally don't think demonizing AI is the answer. But that's me. I understand it's an incredibly divisive topic because everyone brings their own bias and concern to the issue (of which there are many!). Thanks for being willing to listen!
I understand! I suppose the important thing to note is that "unacceptable" is a personal judgement, and the letter is written from the perspectives of those for whom the harms are unacceptable.
For me, the legal harms are unacceptable in that companies that have broken copyright law are now trying to CHANGE the law to protect themselves retroactively. The ethical harms are unacceptable because of the traumatic, exploitative labour, the psychosocial harms for users, the evidenced reality that all LLMs produce incorrect content between 30% and 90% of the time, and so on.
It is political, you're right - it's a statement of power. My institution is trying to force me to teach genAI use skills to my students, which is an act of power. I am saying no, which is an act of resistance. I haven't got a lot of levers to pull, sadly.
I hope it doesn't appear to be "demonising" AI, because that wasn't the intent either, but it is certainly meant to indicate that current publicly available LLMs are unacceptable products to make students use. The demonisation is corporate, not inherent to all technological potential :)
I really appreciate this context. Teachers should not be forced to do something they ethically don't believe in, so I understand your dilemma. I do have a slight ulterior motive - I have run, and will continue to organize and run, professional development seminars / conferences / workshops (whatever term you are used to) on AI in the classroom, and I want to be able to reach members of the audience who hold views similar to yours. Not to convince you of anything, but by taking the position that LLMs are "unacceptable products to make students use" (are they really being made to? aren't they using them anyway?), are you putting yourself in a position where you may not be able to be as effective a teacher as you might be? I don't know your school or precisely what genAI skills they are trying to force you to teach, but is it possible that there is a legitimate institutional purpose for this? I get that it's one you don't believe in, but is it entirely wrongheaded? I'm just asking.

On the substance of some of your points, I do think it's imperative that everyone get their facts straight (and the facts are not so easy to find!) in order to have the most productive conversation. On the legal issue, there is no question that these companies broke copyright laws, but a Federal judge in California recently ruled that training LLMs on legally obtained content falls under Fair Use. So if, going forward, LLMs are trained on material the companies have legally purchased (I know, a big IF), it will technically be "legal." So I'm not sure what you mean when you say they are trying to CHANGE the law retroactively. And I'm also not sure of your source when you say 30% to 90% (which is a huge range to start with) is "incorrect content." Hallucinations are a huge issue, but there are some effective ways to mitigate those results, and it varies by company, model, and type of prompt. Obviously, enough people find the tools useful despite these issues.
But I thank you again for laying your cards on the table. I will think of you when I get the inevitable question in my next session about all the harms caused by AI!
Signed
Brilliant initiative Miriam. Signed in my capacity as a school governor.
Thank you so much Graham - and for sharing it on! This is a challenging message in Australia as we are a very compliance-driven culture and these kinds of statements bear a lot of risk. I truly appreciate your solidarity!
Your note, though impassioned, makes so many jumps in logic (not the least of which are hasty generalizations) that I suggest reviewing it with others to dig further and make it more persuasive.
Might I suggest working through your thoughts with a large language model (LLM), like ChatGPT, Claude, or Gemini? They have free tiers.
Hi Colton. It looks like you subscribe to this blog, which is great! Might I suggest reading it?
Are you open to discourse?