This past spring and summer, we interviewed dozens of students across diverse disciplines who used ChatGPT and other large language models (LLMs) for academic work. In interviewing the students, who attended a highly selective private research university, we had three main questions in mind—what students were doing with AI tools, whether faculty were endorsing or prohibiting their use, and how students felt ethically about it.
If interviews with students tell us anything, it’s that an ever-growing number of students have turned to AI as a first resort for almost everything since OpenAI’s public release of ChatGPT in November 2022. All of this points to an “AI inevitability” in academia: students increasingly assume its use is fair game.
The way students see it, many jobs and industries care far less about process, as opposed to product, than academia does. Whether or not they are correct, students who believe this are likely to value AI-aware class environments that give them opportunities to learn how to responsibly use tools that could enable their promotion and professional advancement in postgraduate careers. Anything that makes college easier in the meantime is a bonus.
Overall, our findings also point to the need for AI to be acknowledged more broadly in syllabi, and at the level of individual assignments, than it was earlier this year. Based on what we heard from students last spring, we offer a few additional suggestions and practices to minimize moral ambiguity around this issue.
But first, we should answer our first question—what exactly are students doing with ChatGPT? Just about anything, it turns out, and last spring was an exciting mess because of it.
Students recounted sweeping midterm syllabus changes, reinstituted blue book exams, rampant efforts to thwart AI detection, awkward self-policing of collaborative work and so on. But students were enamored of and intrigued by ChatGPT, and they resoundingly planned to use it again for academic work, where permitted.
The “where permitted” part was—and remains—interesting. A few students described last spring as the “wild, wild West” of institutional AI policy. We agree. According to a March survey of 1,000 students conducted by BestColleges, just 39 percent of students reported their instructors openly discussed AI tools, and only 25 percent said their instructors or college honor codes specified how to use AI tools ethically or responsibly. In our small sample of 30 students who used AI for academic work, 30 percent received no explicit instructions from their professors either permitting or forbidding its use. We found many instances where AI policy voids were confusing or problematic.
Take, for example, a fourth-year undergraduate who reported that “two-thirds” of their section of an engineering course failed an exam because of detected unauthorized AI use, though the prohibition on AI wasn’t imposed on earlier exams of the same type that term or on identical exams in other sections of that same course. To students, situations like this make no rational sense.
The good news is that we also know from students that many faculty are already operating very intentionally in the post-ChatGPT present. There is a growing body of advice about getting familiar with AI, creating course policies and finding examples of policies.
ChatGPT seems unavoidable in any course that uses a coding language like R or Python. Much has been said about AI signaling a rethink of the traditional essay, since writing assignments are an obvious use for generative AI, but any statistics or advanced math course will invariably see more students experimenting with AI as well; the time savings and efficiency gains students described there were superlative. Students praised the democratizing effect of on-call, accurate code checking as a “game-changer,” with ChatGPT serving as a nimbler learning resource than static course materials or discussion boards.
In this sense, ChatGPT receives a lot of criticism as a potential learning avoider, but we heard countless examples of students using it as a learning enabler. Ask the directors of your campus writing center, but we suspect demands for sanctioned writing supports and tutoring services are changing. AI by no means obviates the need for human help—which students said was still situationally preferable—but its availability seems a helpful antidote to last-minute despair, especially during exam periods, when tutoring appointments may be scarce.
On that note, students should be cautioned about the behavioral slippery slope of treating AI as a “procrastination-friendly” resource. Faculty and academic advisers alike should forewarn students about overreliance or “dependence” on AI, as if it were a kind of algorithmic Adderall with attendant risks.
Preventing AI reliance is important because, currently, equitable access cannot be assumed. Some platforms were not always available to all this past spring: products like Bard rolled out selectively to Google users, while ChatGPT was prone to periodic outages. Tech companies will likely better match supply with demand going forward, but questions about access will persist wherever AI use is advantageous, allowed or required.
In terms of instilling an ethic of judicious and responsible AI use, instructors should be heartened that many students were keenly aware of the limitations of AI: namely, ChatGPT’s propensity to “hallucinate” and give incorrect or biased results. As GPTs and LLMs advance, instructors should treat their dwindling limitations as a didactic opportunity to emphasize information literacy.
Other examples of how students used AI for academic work included essay drafting, scripting oral presentations, outlining slides, generating practice exam questions and composing emails. While none of those examples are necessarily groundbreaking, students chafed a bit at the discrepancy between AI use restrictions in academic work compared to how freely they use it in their nonacademic lives—on tasks as varied as recipe suggestions, gift ideas, jokes or even therapy. Descriptions of this incongruence ranged from “whatever” to “silly” to “ludicrous” to “hypocritical.”
We should also talk about AI detection, which, when instituted, did serve as a deterrent for some. Other students were brazen in their attempts to subvert detection; many of our interviewees tried to do so and reported succeeding. As LLMs improve, it’s not hard to see how tiresome it will become for academic integrity offices to adjudicate the provenance of increasingly indistinguishable human- and AI-generated work.
Another recommendation we draw from our research is that instructors should make expectations around AI use clear for group work and collaborative assignments.
Take, for example, a group project using a Google Doc to collaborate. A fourth-year undergraduate recounted group-generated “answers” to a question for a business class case study suddenly populating their shared document, as if by magic. “How’s that happening? Did you do the whole assignment?” they asked the suspiciously productive group member. Of course, those outputs were copied and pasted from ChatGPT, which prompted the rest of the group to convene and clarify their approach because no standard for AI use was specified for the assignment.
Some may see such episodes as unintended opportunities for students to practice self-governance, but there are risks in leaving students to adjudicate peer AI use, including pressure to use AI when some group members or instructors would prefer they not. Again, clear expectations on an assignment-to-assignment basis can spare students confusion over whether to use a tool their peers use ubiquitously.
Another major finding from our conversations with students is that ChatGPT was especially helpful for English language learners, one of whom described it as “a blessing.” The allure of AI is clearly stronger for international students, many of whom benefited more than native English speakers from both text generation and text synthesis. International students in our sample more frequently used AI to summarize lengthy reading assignments and compose routine emails.
Additionally, we spoke with a few international students in the U.S. who relayed AI outputs to friends in other countries where, for example, ChatGPT was blocked or unavailable. This raises important questions about access and fairness for U.S.-based students studying abroad, and about information and data security if U.S. students are co-opted (or paid) to smuggle AI-generated knowledge across international firewalls.
Lastly, we want to offer a few predictions, which (a spoiler for any faculty fence-sitters) all bend toward a kind of AI inevitability: the ubiquity of student AI adoption. Of the 17 students we asked whether they plan to use AI again for academic work, all 17 said yes. It’s easy to see why: they find it genuinely useful.
Students ultimately see ChatGPT as a more efficient way to do a lot of things in and out of college, with clear applications in professional careers, where their success and upward mobility may not be as tightly bound to the ethical and learning considerations germane to academia.
For many instructors, the growing adoption of AI will demand as much attention to pedagogy as the pivot to remote learning during the pandemic did, or more. Over the next several years, faculty will need to continually revisit the AI provisions in their assignments and syllabi, which, frankly, will be difficult for many. It will take considerable effort, but the presence of AI in academia is inevitable.