Imagine you’re a 19-year-old college sophomore. Each time you’re given a writing assignment, you weigh your options for using AI to complete it.
Your professors have AI policies, but they all seem a little naïve. One says you can use it for brainstorming and researching but not actual writing. But when you ask ChatGPT, Perplexity or any other tool to jump-start your research, it gives you ready-made content for your paper. Are you supposed to ignore it?
Another professor says to use AI however you want, as long as you describe what you did in a short paragraph at the end of your paper—an “acknowledgments section,” he calls it. But that still leaves you with the decision of what to do and how much to disclose.
Another professor forbids AI entirely, but how are you supposed to avoid technology that’s embedded in every writing platform?
Besides, you’ve done enough sleuthing to know that AI detectors aren’t reliable anyway. You doubt professors can even enforce these policies, so they’re not a big factor in your decision-making.
Instead, you think about how much you like the class the paper is assigned in, or whether it’s in your major. These are papers you might actually enjoy writing, or ones that could help you develop a skill you’ll need later.
Then again, you’re not sold on your major, and you’re not totally sure what profession you’ll land in, much less what knowledge and skills you’ll need once you get there. So it’s possible you could choose wrong and finish your classes without skills you’ll need. It’s also possible you could buy an extra three hours that would really help relieve all the stress you’re feeling.
In these moments, you consider this strange new reality of powerful technology and unenforceable rules. At the end of the day, after all the lectures and policies, it seems like the weight of upholding them falls almost entirely on you. It’s empowering in a way, but you can’t help wondering if this is something students should be responsible for.
There’s an unwieldy but useful word for what’s happening in this scenario: responsibilization. It’s a term used by political theorists and government policy researchers to describe a trend in capitalist countries: Tasks once handled by the state get handed down to private companies or individuals. This can be a positive thing, but when individuals are given responsibilities without resources, their lives can become more difficult. I’m going to argue that this is one effect of generative AI: Where upholding the academic integrity of college classes has always been the shared responsibility of teachers and students, students now bear the lion’s share of it. Expecting them to bear this burden is neither reasonable nor fair.
There are three big reasons for this new reality: the easy availability of AI writing tools, the marketing of these tools to students as a way of reducing or avoiding coursework and the inability of teachers or detectors to reliably discern whether text is student-authored or AI-generated. To be clear, I’m not suggesting that all uses of AI amount to cheating, that it is never beneficial or that professors’ AI policies are pointless. We need such policies to clarify best practices for students, and there are many good resources for “AI-proofing” assignments. But all such strategies, short of in-person, pen-and-paper writing, arrive at the same place: The final decision on whether to complete one’s coursework or substantially outsource it rests with the student.
Some might say this has always been so. But no previous generation has been faced with the ever-present option to offload their work, at no cost, with a low likelihood of immediate negative consequences. Some might say, as I have, that students are responsible for their own learning. But responsibilities can be burdensome and must be understood in contexts beyond the classroom. The literature on responsibilization can help.
Wendy Brown, an influential theorist of neoliberalism, defines responsibilization as “the moral burdening of the entity at the end of the pipeline [of power and authority]. Responsibilization tasks the worker, student, consumer, or indigent person with discerning and undertaking the correct strategies of self-investment and entrepreneurship for thriving and surviving,” all while not providing or actively removing a social safety net. Before widespread AI, U.S. college students were already responsibilized citizens burdened with costs not shouldered by their peers in other countries, like expensive tuition and, for many, expensive childcare. The cost of tuition alone is enough to make U.S. students see a college education purely as an investment in their financial future, putting enormous emphasis on grades and de-emphasizing the process-oriented nature of learning.
What generative AI does, then, is intensify the responsibilization of college students, who do not have the resources necessary to exercise these responsibilities in a way likely to benefit them (and society) more broadly. As they try to decide which assignments are worth doing and which can be safely outsourced, they are essentially tasked with discerning what knowledge and which skills they will need in their future roles as employees and citizens. These are not questions we should expect them to know the answers to.
Our college sophomore in the opening scenario is a serious student. She sees the value of a college education for herself, and she thinks a society with more educated citizens is probably better off. But she sees lots of her peers relying on AI pretty regularly—around 33 percent of them, if her experiences line up with recent student surveys. She thinks there might be learning benefits for certain uses, but she also worries that it might weaken her creativity or critical thinking skills, or spit out bad or biased information.
And even if she could sort through all this, the new combination of AI saturation and unenforceable AI policies still makes her the ultimate arbiter of academic integrity in a way no previous generation of students has been. Curriculum committees can design pathways through the institution based on their knowledge of what students will need and what society will need from them. Professors can argue that the skills they teach are essential, that proceeding through assignments in a particular way is most beneficial, that using AI this way but not that way is what she must do. At the end of the day, though, when it’s just her, her computer and a deadline, all those things aren’t much more than suggestions. It’s really up to her which skills she hones and which ones she outsources. She is responsible for her own learning.
In my view, there are two things to be done in this situation. First, explain and empathize. Writing-intensive courses are often great spaces for discussion, and it’s not difficult to get students talking about their many obligations and the obstacles they pose to their education in ways that aren’t too personal or too technical. The many ways in which they, as Americans in college, are responsibilized, and how generative AI adds to that load, make for a worthy and timely topic. And even as we understandably fret over academic integrity, we can do so with empathy for students forced into a new reality fraught with blurry lines and impossible choices.
The second thing we can do is provide AI-free spaces for portions of students’ composing processes. The only truly AI-free spaces are internet-disabled computers and in-person pen-and-paper assignments, which are seen by many teachers as punitive, as they certainly can be. But responsibilization reframes our current predicament: the line between supplementing your work with AI and offloading your work to AI is blurry, and the ever-present option to do the latter imposes new and burdensome choices that must be made over and over again. Not so in AI-free spaces.
It is possible to see such spaces as freeing rather than constricting, especially when we imbue them with freedom. We can do this by de-emphasizing evaluation of sentence-level concerns and instead valuing the unique humanness of students’ developing ideas and voices. We can also create these spaces for process-oriented, lower-stakes assignments rather than for the big ones. I have been doing this in my own classes, and while students complain about having to hand-write things and worry they won’t do their best work, they relax somewhat when I make clear that I don’t want their “best work” in the way they define it. I want their own voice and their developing ideas, both of which can be—even should be—a little messy. This has the added benefit of getting me familiar with writing that is unquestionably theirs, so if future assignments depart radically from what I’ve seen, we can have a conversation about academic integrity.
The goal here is not to forbid AI use, lay a trap or create perfect accountability, but rather to reclaim some responsibility for the academic integrity of our classes. Just as we don’t want students offloading too much of their work onto machines, we should be careful about offloading our responsibilities onto students.