Today is ChatGPT’s first birthday, and already this novel application has had a profound impact.
One domain where this is particularly pronounced—and perhaps most discussed—is higher education. As a humanities professor who teaches the skills of critical thinking and analytic writing, I have witnessed firsthand the effects of ChatGPT on student writing and learning. I’ve seen everything from students relying on it for idea generation and content summaries to students feeding it essay prompts and blatantly copying its outputs. I’m managing a tenfold increase in suspected academic integrity violations this semester—all due to ChatGPT.
Many of these deleterious effects have been both accurately predicted and, at this point, well established. But as I look back on how things have changed in just the past year, I see two further changes to teaching and academic writing that are both subtle and profound.
The first concerns the various roles that professors like me take on. Above all else, I am an educator: my job is to teach students the relevant content knowledge and skills. But unlike, say, a yoga instructor, who also educates their students, professors play a second role as assessor: we are tasked with assessing students’ abilities with respect to that knowledge, and with communicating that judgment to our institution, which in turn communicates it (in the form of a grade point average, transcript and diploma) to third parties, like graduate and professional programs, future employers, and so on.
These two roles—educator and assessor—are not inherently in tension. Indeed, assessment is often an essential tool for furthering educational goals. But the rise of ChatGPT has introduced new and confounding tensions between these two roles that pose a deep threat to the most central task of professors like me—namely, to educate.
In my field, philosophy, and many others like it, the bulk of our assessment comes from essays that students write outside of class. But, as many have pointed out, ChatGPT is quite good at producing plausible-sounding academic writing. Even if it can’t write the perfect essay in one attempt, clever students will be able to easily manipulate the outputs to suit the needs of the assignment—all while saving countless hours of difficult writing time.
Given all this, many professors are scrambling to find ways to “ChatGPT-proof” their assignments. For example, some now impose much narrower and more complex essay constraints, in the hopes of throwing ChatGPT off or rendering its use less efficient than writing the paper oneself. Others have switched entirely to alternative writing assignments, like in-class essays.
Many of those who have made these changes felt they had no real choice: there is simply no other way to ensure students are actually doing the work themselves. And many (if not most) recognize that these methods are not ideal—not simply in terms of course administration, but also in terms of education. I find in-class essays completely inappropriate for philosophy—a discipline in which slow, deliberate thinking and a careful organization of ideas ought to be valued over quickly scribbling one’s first thoughts. I know many of my colleagues across the humanities and beyond feel similarly.
So, in order to evade ChatGPT, professors are now tasked with introducing substantial changes to their assessment methods. They are all but forced to do this—and this is important—even when doing so is substantially less effective for student learning. That is, ChatGPT has forced professors to elevate their role as assessor over their role as educator. And this is only likely to get worse with time.
A second major issue concerns accountability for those students who do unjustly rely on ChatGPT. Unlike other forms of plagiarism, which can more easily be proven by identifying the sources from which students took their ideas, plagiarism from ChatGPT is essentially impossible to prove.
To be sure, there are detectors claiming to provide evidence as to whether a given essay was generated by artificial intelligence, but there are two problems. First, the “evidence” consists of statistical probabilities that an essay was written by AI—not exactly a smoking gun. And, second, students will quickly learn, if they haven’t already, how to avoid detection through subtle manipulation of grammar and syntax.
Essentially, the only way to hold a student accountable for using ChatGPT is to secure a confession. Soon, students will know this, and many will exploit it: use ChatGPT, and if confronted about it, deny, deny, deny. It’s not unreasonable to believe that many students who violate academic integrity standards—knowing they can do so cost-free—will not suddenly feel compelled to be honest about having done so, particularly when honesty is likely to be incredibly costly to them.
In fact, if we are forced to rely only on student confessions, then the only students held accountable will be those who have had this rush of honesty. It is odd, if not unjust, to have a system of punishment that exclusively metes out sanctions to those who demonstrate remorse and regret for their mistakes and does nothing to those who show no such integrity.
As ChatGPT continues to evolve, it will only get better and better at producing quality writing. Our students will also grow increasingly comfortable using it, often in ways that will be difficult to detect and even harder to prove. We all must adapt, in one way or another, to ChatGPT and its seismic effects on higher education. But to do so without seriously reflecting on the true nature of these changes will only frustrate our ability to respond appropriately and effectively.
We’re only a year in, and already so much has changed. By this time next year, I can only imagine where we’ll be. I, for one, fear we’re in for many unhappy returns.