If you’re involved with the education industry – even if your head is currently buried under a pile of grading – you’ve probably recently caught wind of something called ChatGPT, or what many see as the end of learning as we know it. ChatGPT is a chatbot built on GPT-3, a large language model trained on massive amounts of text and further tuned to improve its responses. It can produce pieces of writing in response to specific prompts. While folks have actually been discussing the impact of AI writing tech since the spring of this year (and, in fact, much earlier), it was a recent Atlantic article on how ChatGPT will kill the college essay that finally turned my head. I won’t link to it because it’s basically cringy fear-mongering from a former academic who is apparently ignorant of both contemporary writing studies pedagogy and the impact on the humanities of the systemic defunding of higher education in the U.S. The main takeaway of the article is: English teachers should watch out, because these AI innovations mean students will plagiarize your assignments and render education pointless.
This is absurd for a number of reasons that I’d like to briefly discuss here:

1) There are already myriad digital technologies that allow students to plagiarize essay prompts (SparkNotes, essay mills, etc.), and this AI produces substandard writing compared to those other options.

2) It’s really the non-humanities instructors who should be shaking in their boots, because it does, in fact, do well at answering questions with relatively straightforward answers – an approach to assessment still widely used in the social and natural sciences (indeed, I saw a good number of computer scientists balking at how well it produced the kind of code they usually assign for homework).

3) For years, English / Writing Studies pedagogy has urged the abandonment of the standard college essay because not only is it easily plagiarized (even before AI), it has very little to do with any real-world style of writing, and students have no investment in this kind of writing.

4) The AI really can’t do research – it fabricates sources, doesn’t quote or analyze sources when asked, etc. It definitely can’t do original research, and because of its various limitations, it can’t produce writing on any text that isn’t in its training set.
For instance, I considered having it write part of this blog post to illustrate its capabilities, but what I have to say is in response to all the other ChatGPT thought pieces I’ve read – all of which are too recent to be in its training data. Even if they were included, it doesn’t do well synthesizing information from multiple sources or even adequately representing those sources. In “Prompt Whispering: Getting Better Results from ChatGPT” by Leon Furze (linked below), the author uses the example of having ChatGPT analyze a quote from Jane Austen’s Pride and Prejudice. Furze points out that the chatbot manufactures quotes and citations from non-existent secondary sources and concludes that while it is clearly good at mimicking academic analysis of a text, it is not good at research. However, he does not point out that the quote it picked from the book to illustrate its claim about marriage – a quote supposedly uttered by Mr. Bennet – is also manufactured. Not only does the chatbot provide a fake quote, it also completely misreads that quote, stating that it shows how Mr. Bennet thinks marriage is only a means to advance one’s social standing when, in fact, it simply shows his disdain for Mr. Collins and the other young men who had been by the house. Even if this quote were a real quote from the book, everyone who has read the book or seen any of the film adaptations knows Mr. Bennet does not want Lizzie to marry only to advance her social standing, as the chatbot claims in its analysis.
I have had a chance to play around with it, asking some of the questions I tackle with my literature students about different readings. It does a fair job mimicking the kind of equivocating tone you can find on summary websites and providing generalized answers, but there are always major issues with the outputs. For instance, every time I ask it to provide textual evidence from the reading as part of the analysis, it makes up the quotations. Out of three prompts using three different readings, all three responses contained manufactured quotes. Even when I provide a real quote from a text and ask it to analyze it, it clearly does not understand the quote in the context of the larger reading. It will take the quote I provide, break it up, and essentially summarize the parts of the quote in service of whatever I’m asking it to look for, but it often misreads the quote because it does not understand how it fits into the larger reading. And when I specifically asked it to quote from a real interview with an author, it, again, manufactured quotes and a realistic MLA citation for the fake interview – with a fake URL and everything.
It is, as so many have already said, a Class A bullshitter.
Even so, it is a good mimic of bad-to-average student writing. You might be asking yourself, how will *I* be able to “catch” these chatbot-generated passages when assessing student writing? My short answer is that you probably won’t be able to catch them all. The contemporary university system is hell-bent on creating a factory-model education system by packing our courses over recommended caps, giving largely precarious faculty unwieldy course loads, and, in an attempt to “measure” learning, standardizing courses to the point that they almost urge students to cheat them. When we seek to make education plug-and-play, why are we surprised and appalled when students treat learning the same way?
My long answer begins by urging you to ask different questions instead. Ask yourself if your assignments are easily answerable by a bad AI. Ask yourself if you’ve created a hostile learning environment where the student feels the only way to succeed in your class is to use a bad AI. Ask yourself if there is a way you can use this bad AI to teach your students about writing, ethics, and/or digital technology. It’s telling if our first question to any new technology is, “how will I stop students from cheating?” It should make us stop in our tracks and consider what the heck we (faculty, institutions, policy makers) are even doing here.
Below are some of my favorite nuanced takes on the potential role of the chatbot.
How Will ChatGPT Change the Future of Information (Bender’s critique – it cannot understand! – is on point)
Prompt Whispering: Getting Better Results from ChatGPT (article referenced above for the P&P example)
AI Can Save Writing by Killing the College Essay (cool example of bringing it into the classroom)
ChatGPT Can’t Kill Anything Worth Preserving (more evidence of GPT bullshit & call for better writing pedagogy)
ChatGPT Is Dumber Than You Think (good response to the first Atlantic article)
ChatGPT Zotero List for Educators (excited to check this out!)
Resources for Exploring ChatGPT and Higher Ed (an excellent list of additional readings!)
BTW, I’ve left Twitter for obvious reasons, but you can find me over on Mastodon (which I highly recommend) and LinkedIn.
2 thoughts on “ChatGPT Wins Best in Class for Making Shit Up”
ChatGPT can be rather hit and miss at times, but it is rapidly improving, and with big funding from Microsoft, it’s apparent that AI is here to stay.
No doubt AI will get better and better, but for now, this version of ChatGPT makes things up even when the actual text is available in its training data. That’s because it can’t reason – it can only imitate reasoning 🤷♀️ Stochastic Parrots – https://dl.acm.org/doi/10.1145/3442188.3445922