
[Image: a blue square with the words "LLM: Large Language Model" on a textured green background. Credit: GOCMEN/iStock/Getty Images Plus]

One day this May, during the end-of-term ritual that is the grading of papers, I came upon a description of a character that lacked any basis whatsoever in the novel we had been reading. Two years ago, I would have paused and thought this was too bad—that a student hadn’t really read the novel carefully or had simply relied on a faulty memory of the scene involved instead of going back to confirm their take’s veracity. Not a big deal.

But now, a year and a half after ChatGPT’s release to the general public brought large language model (LLM)–based chat bots into everyday use, it’s different. (In an attempt to resist the common attribution of agency and/or anthropomorphic qualities to these programs, I’ll refer to them as “machine learning applications” and “LLM-based chat bots” rather than “artificial intelligence” or “AI.”) Instead of seeing this inaccuracy as something I could flag for the student to return to or to talk with me about, I had the sinking feeling that maybe what I’d been reading for a page and a half already was synthetic text extruded from a chat bot rather than the textual representation of the ideas of a young woman whom I knew and had shared a university seminar room with twice a week.

Instructors can have their reasons for asking students to engage with chat bots for their academic work, but for this assignment the use of chat bots did not align with my learning outcomes. We had also talked at length in class about the problematic nature of LLM-based chat bots’ function in relation to the kind of work we were doing in the class (and in general), and I had gone through a class exercise showing the kinds of erroneous responses ChatGPT will give to very basic questions about a literary work.

I had also altered a number of my assessments so that the temptation to use chat bots would be significantly lessened. Yet I couldn’t help suspecting that this particular erroneous sentence was likely a chat bot’s text, not attributed as such but claimed as the student’s own ideas and words. After reading the rest of the essay and not encountering other red flags, I decided to set my suspicion aside. In the immortal words of Kurt Vonnegut, and as so often in the life of a university professor at this moment: so it goes.

As not only an instructor in the humanities but also my university’s academic integrity director, my year and a half since the unveiling of the slew of “generative AI” programs to the general public has been saturated with discourse about these applications and their intersections with education and writing. I’ve been reading much more about algorithmic technologies than I ever thought likely and hearing and talking about them in faculty colloquiums, administrative meetings and student-led initiatives, as well as in regular conversations with faculty members after they realize students have been trying to pass off LLM-based chat bot text as their own. Not to mention in the hallways and at the proverbial water coolers.

In all these forums, I heard and read different tropes about these machine learning applications. There was the adage about how “everyone freaked out when calculators came out, too.” There was the performance in which the first speaker at a meeting read an overly reflective and abstract opening statement about “AI” and then—beat—announced that what they had just read was generated by ChatGPT.

The one that has rankled me the most, though, and that I have seen the most, is the false dichotomy suggesting that instructors are either embracing the new frontiers these LLM-based chat bot technologies will open up or they are afraid of them. As if there weren’t any (or many) other options.

As you might be able to tell already, I’m not a fan of these technologies (a fair approximation of my overall view of the whole issue can be found in Ulises A. Mejias and Nick Couldry’s summary of their recent book here, while my own shot at a take can be found here). And this is where I’m supposed to acknowledge (another trope) that “these machine learning applications can have all kinds of benefits, even though there can be real harms.” But in the realm of cultural analysis, critique and interpretation, no one has yet shown me any way in which the chat bots are genuinely helpful. And that’s not even touching on all the ecological, bias and copyright issues. I’ve seen how they can be helpful in some computing contexts, and they may end up being good for business analytics, etc., but I have no reason to defer to OpenAI, Microsoft or Alphabet on the future of humanities education.

Regardless, what I want to underline is that the false dichotomy of “embrace” and “fear” that I continue to hear echoed again and again is just that: false. And this false dichotomy occludes more nuanced responses to our machine learning–saturated moment and keeps us from even acknowledging that such responses are possible. I do not embrace the miraculous technofuture, nor am I afraid of this technology. But what sinks my heart are the ways in which the chat bots have unfortunately impacted my ability to trust my students.

Let me state very clearly that if a student uses a chat bot and cites that use, that is not at all what I’m discussing here. Some instructors ask their students to use LLM-based chat bots or allow them to as long as they cite them. That’s fine. I’m not in favor of that approach, but I understand that other instructors have other learning outcomes and goals for their students, and I don’t want to dispute those outcomes, goals or practices here. Even if my own students—whom I ask not to use chat bots—were to use them and cite that use, while I wouldn’t accept that as a well-done assignment, it wouldn’t affect my trust.

I assume students don’t think of their unattributed use of chat bots as affecting a personal relationship. But those of us who actually still believe in the edifying power of higher education can’t see the relationship between instructors and students as one of instrumental exchange—products (assignments filled out) for payments (grades). Or as one of mechanical input and output. In the classroom, in office hours and in conferences, there is (can be) a genuine mutual sharing between persons if we strive for it, if we foster dialogue and the sharing of perspectives in our common scrutinizing of reality and pursuit of truth. And the making and assessing of assignments is (can be) an extension of that relationship’s mutual sharing. But to engage in that scrutiny and that pursuit in common, the relationship between instructor and student requires integrity—that is, both parties need to be honest in their communications with one another.

In what we’ve traditionally understood as cheating, plagiarism and fraud, the foundational problem isn’t that one isn’t putting the work in, but that one isn’t representing oneself honestly to others. Being social beings, we have to be able to rely on one another. In an academic setting, part of that relying on one another is the ability to trust that what a student tells me in their academic work accurately reflects how it was made. If a student pays someone else to write an essay and submits that essay with their own name on top, the submission is a lie—it is duplicitous and does not accurately reflect how the essay was made, given the conventions of what putting one’s name at the top of a piece of paper or a file means. One’s name at the top is a claim that “I have made this thing.” When words or ideas in that thing are not ours, we cite where we found those words and ideas. This convention of citation assures our readers or viewers of how it is that we can have words or ideas that are not our own in our work and yet the work still reflects the reality of how it was made. The matching of representation with reality makes for integrity and builds trust.

The essay I read in May might have been representing the reality of how it was made. And maybe it wasn’t. But the hype and the constant discussion surrounding LLM-based chat bots and, frankly, the fact that I have seen so many cases of students trying to pass synthetic text off as their own without appropriate attribution have for the moment conditioned me to mistrust my students somewhat. And I hate that.

I went to college because I found an Old English poem in a library book once in my early 20s (I had to go to college to find out how this Old English stuff worked). And I went to graduate school because I had found the riches of a liberal education far more valuable and exciting than other paths I could see as possibilities for my life. I became a (non-tenure-track) professor because I wanted to share all the things I’d learned with folks coming up in the world and to learn from them as well. I’ve stayed because I love the common scrutinizing of reality that I get to do with my students and my colleagues every day. We’re striving for knowledge together and, dare I say, wisdom. And we need to be able to trust one another to continue that dialogue. I’m looking forward to when this moment has died down and we can focus on other more important matters again. Like celebrating the arts. Like questioning systems of power and thought. Like compelling arguments. Like how we can imagine better futures for ourselves and one another.

In the meantime, I’ll likely continue having versions of this conversation I had with a student earlier this term:

Student: Why can’t I just use a chat bot to write this essay?

Me: Because I don’t care about what OpenAI’s products can do. I care about what you’re thinking.

Jacob Riyeff is a teaching associate professor and the academic integrity director at Marquette University.
