I will admit that when I was teaching four sections of writing-intensive courses (predominantly first-year writing) per semester, I spent very little time worrying about “academic integrity.”
Don’t get me wrong; I was against my students not doing their own work, but given the nature of what I asked them to do, and the manner in which they were assessed (heavy emphasis on process and student reflection), I didn’t have to worry all that much about the issues that fall under the academic integrity umbrella.
But in our generative AI world, in which students have easy access to syntax-generating large language models capable of producing potentially passable (and passing) outputs, it seems impossible not to worry about academic integrity. Students passing classes where they haven’t done any work is definitely a problem.
After having considered academic integrity only in passing, I wanted to spend a few pixels on working through some thoughts about how we might have better, deeper discussions regarding academic integrity issues. At this stage, much of this is me talking to myself, but at least it’s a start.
It strikes me that if we’re going to talk about academic integrity, we have to be very precise about what we mean by those words. There are a lot of different facets to the concept.
One facet is considering academic integrity as a way to ensure a fair basis of comparison between students. If student A is cheating and student B is doing their own work, but both receive the benefits of course credit and an institutional credential, we have a problem.
It’s not a new problem, though. It would be naïve to suggest this wasn’t happening prior to the advent of ChatGPT. Chegg reportedly became a $12 billion company by “getting rich off students cheating through Covid.”
ChatGPT makes this kind of cheating both more accessible and more affordable.
One path to dealing with this challenge is to try to police and punish unauthorized LLM use that is declared as “cheating” in a particular class context. This strikes me as unpromising for a number of reasons:
- We have no reliable method of detecting LLM outputs and distinguishing them from human-generated writing, and probably never will.
- All energy put into detection and policing is energy not going into teaching and learning. Surveillance tech like Proctorio primarily serves as a way to frighten and distract students as they’re attempting to demonstrate their knowledge. During those semesters when I was carrying student loads double the recommended disciplinary maximum, I had zero time for additional activities. Adding LLM detection would inevitably take away from something else.
- Policies on using generative AI may vary from course to course, creating significant potential for student confusion and, I would argue, increased cynicism toward their academic work.
There’s another choice if we’re only concerned about academic integrity from the point of view of making sure there is a level playing field: Release the ChatGPT kraken!
If everyone can use the tool without restriction, then the field is level, right? It seems like I’ve heard some very important people say something along the lines of “AI won’t take your job, but someone using AI will.” If this is true, why shouldn’t we habituate and acculturate students to this world ASAP?
I’m imagining at least a few of you are blanching at the thought, believing that this significantly devalues what a course and credential are meant to signal, namely that a student can be certified to have acquired some meaningful knowledge or engaged in some meaningful educational experience related to a particular discipline. Plugging prompts into an LLM, pasting the results into a document and putting your name at the top does not qualify.
For my money, I believe that the work of school and employment in a capitalist marketplace are not the same thing. Efficiency and productivity, important aspects of our markets, are not values we should necessarily associate with learning. That these values have become not only present but even dominant in how we think about schooling strikes me as a mistake that we should seek to rectify, at least if we’re going to hold on to the notion that school is for learning.
Obviously, our thinking about academic integrity has to go well beyond merely thinking about leveling the field for students to compete with each other on achievement. This was true before LLMs, and it’s only more true now.
The debate about academic integrity sometimes reminds me of the debate about “rigor,” where we let surface-level indicators suffice when we should be having deeper conversations about why we believe rigor is important. What is rigor meant to achieve?
For example, some may believe that reading lots and lots of pages in a course makes that course rigorous. But does it? Reducing rigor to this metric suggests that the amount of time one spends on course-related activities is the key, but is running one’s eyes over thousands of pages of reading a truly rigorous experience, or is it merely time-consuming?
I would argue what students do with their reading is a far more important determinant of rigor than how many pages are read. I’ll go further and say that a good sign of a rigorous course is how much time and energy students put toward the course beyond what is mandated by commands like reading lots of pages or writing lots of words.
My view is that the most rigorous course is one that engenders lots of student effort without having to exercise a lot of instructor power to command student production. This removes the coursework from the world of transaction and moves it into the land of learning. It also helps students develop the important skill of self-regulation.
Something similar has to happen with academic integrity in a world where LLMs are now ubiquitous. We need to think about academic integrity as a bigger concept rooted in educational values, values that are tied to student engagement, effort and learning.
I’m convinced we’re significantly underestimating the degree and kinds of changes that need to happen in educational institutions to deal with the existence of generative AI technology. These changes need to deal not only with the technological capabilities, but also with the deterministic way the technology is being framed by those who are developing and boosting it.
Some of this boosting is happening inside of higher education institutions that have decided—without a ton of hard evidence, by the way—that AI is an inevitable part of our collective and individual futures. I have no desire to wall education off from artificial intelligence, but the notion of its inevitability is something I think we should resist with what remains of our might.
If education is going to be truly meaningful, it has to preserve human agency. A future where we’re subservient to our AI overlords doesn’t sound like a good one to me in general, and definitely not a good one for higher education institutions in particular.
In terms of academic integrity, I think this ultimately points the way toward figuring out how to make issues of integrity integral to the individual students who are making choices about their own educations. If the work is meaningful, if the experience of being educated holds value, students will act with the kind of integrity we desire.
How that culture is brought to life is the most interesting question for me.