
Although Grammarly is an editing tool that’s been around since 2009, its new generative AI–powered capabilities are raising academic integrity questions. (Photo illustration by Justin Morrison/Inside Higher Ed | Gazanfer and InspirationGP/iStock/Getty Images)

The University of Notre Dame’s decision this fall to allow professors to ban students from using the 15-year-old editing software Grammarly is raising questions about how to create artificial intelligence policies that uphold academic integrity while also embracing new technology.

Since it launched in 2009, millions of college students have used Grammarly’s suggestions to make their writing clearer, cleaner and more effective. In fact, many of their professors have encouraged it, and more than 3,000 colleges and universities have institutional accounts, Grammarly says.

But like so many types of software, the advent of generative AI has transformed Grammarly’s capabilities in recent years. It now offers an AI assistance component that “provides the ability to quickly compose, rewrite, ideate, and reply,” according to the company’s website.

While many students have welcomed those enhancements to help them write research papers, lab reports and personal essays, some professors are increasingly concerned that generative AI has turned the editing software into a full-fledged cheating tool. Those concerns prompted Notre Dame officials to rethink their AI policy.

“Over the past year, [academic integrity] questions about Grammarly kept surfacing. Professors would contact me and say, ‘This piece of writing looks so completely different from everything else I received from this student so far. Do you think they used generative AI to create it?’” said Ardea Russo, director of Notre Dame’s Office of Academic Standards. “We would look at it more deeply and the student would say, ‘I used Grammarly.’”

Notre Dame first developed its AI policy for students in August 2023, leaving it up to individual professors to decide if students are—or aren’t—allowed to use generative AI to help complete assignments.

“… Representing work that you did not produce as your own, including work generated or materially modified by AI, constitutes academic dishonesty,” stated the policy, though it didn’t name any specific programs. “Use of generative AI in a way that violates an instructor’s articulated policy, or using it to complete coursework in a way not expressly permitted by the faculty member, will be considered a violation of the Honor Code.”

To avoid further confusion about Grammarly—which many students were accustomed to using—Notre Dame updated its policy this August to clarify that because “AI-powered editing tools like Grammarly and WordTune work by using AI to suggest revisions to your writing,” if an instructor “prohibits the use of gen AI on an assignment or in a class, this prohibition includes the use of editing tools, unless explicitly stated otherwise.”

“It’s hard because faculty are all over the place. Some want students to use it all the time for everything and others don’t want students to touch it at all,” Russo said. “We’re trying to thread the needle and create something that works for everyone.”

Notre Dame isn’t the first university to question the academic integrity of using Grammarly. Earlier this year, a student at the University of North Georgia was put on academic probation after she used Grammarly to proofread her paper, which got flagged by an AI-detection system.

‘Wild West’ of University AI Policies

Navigating what learning should look like in the two years since ChatGPT pushed the term “generative AI” into higher ed’s popular vocabulary is a problem that doesn’t yet have clear, uniform solutions across academia. Three in 10 students aren’t even sure when they’re permitted to use generative artificial intelligence in their coursework, according to an Inside Higher Ed survey published earlier this year.

“Right now, we’re basically living in the Wild West,” said Damian Zurro, a writing professor at Notre Dame who allows his students to use AI-powered tools, including Grammarly. And in a national higher education landscape where students are still far more likely than their instructors to use AI, he believes forcing students to navigate a course load with scattered AI policies may be counterproductive.

It’s part of the reason why he “doesn’t love” the new policy.

“It creates this fracture, which makes it very difficult for students to navigate,” Zurro said. “My hope is that we one day get to some sort of general curiosity where there could be broader guidelines or norms that make things easier for students, but right now we’re just in this difficult environment where students are cross-pressured depending on which class they’re in.”

Emily Pannunzio, a sophomore biology major at Notre Dame, is experiencing that confusion firsthand.

Having professors who are transparent about how they want students to use AI “encourages students a lot more to only use it how they’ve instructed,” she said. “In other courses, if they don’t say anything about AI, I think students are a lot more likely to use it in various ways.”

Pannunzio hasn’t personally used the full extent of Grammarly’s generative AI capabilities, but she worries about the implications of banning the program altogether.

“I see where they’re coming from, but on the other hand, that would mean you also have to ban things like peer reviews and Thesaurus.com,” she said. “That just starts to become too much and hinders students’ abilities to use these tools. In the workplace and the real world, you’re never going to be banned from using these tools.”

While access to generative AI tools, such as those now embedded in Grammarly, is an important aspect of modern career preparation, it shouldn’t be the only guide for universities trying to craft a nuanced AI policy, said Nathaniel Myers, a writing professor at Notre Dame.

Depending on the audience students are writing for, he allows them to use generative AI tools for some assignments, such as those written in a professional tone, and not others, such as a personal essay. He makes those decisions based on how AI use aligns with the learning goals of a particular exercise.

“I want them to undergo the writing process for themselves in ways that aren’t immediately turning to a tool that’s helping them write, because there’s value in the friction that’s a part of that work and learning that happens in writing without those tools,” Myers said. “On the flip side, I want them to have the rhetorical knowledge and the skills to navigate these tools in ways they may need for their professional lives and other parts of their lives as well.”

As for the AI policy itself, he believes both students and their instructors need a mutual understanding for it to work.

“The idea that we’re putting accountability on students to know what those policies are is a tricky thing,” he said. “It’s also on the professors to be clear and transparent about what those policies are. Sometimes I worry that’s not always happening, either.”

‘Something Deeper’

Despite its perceived flaws, simply having an AI policy at all puts Notre Dame ahead of most of its peers: 81 percent of college presidents said their institutions haven’t published an AI governance policy, according to Inside Higher Ed’s 2024 presidents’ survey; another Inside Higher Ed survey showed that just 9 percent of chief technology officers believe higher education is prepared to handle AI’s rise.

But not addressing generative AI at all isn’t a solution, said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

“It’s not just a pro-adoption policy or a ban,” he said. “There’s something deeper going on here, where a lot of faculty don’t feel like they have the bandwidth or support to make these judgments yet, and that’s really concerning.”

And if institutions expect to be environments where scholars grapple with the more existential questions generative AI poses—such as the value of art or writing produced by a machine—they’ll have to start by acknowledging that AI is changing education.

“One thing I don’t think we can do is sit back and say nothing,” Watkins said. “We really have to go back to the bare bones of this and see what support faculty need to make these judgments about where they want to adopt AI or maybe where they want to refuse it.”
