
Some higher ed conversations can suggest “you’re either fully embracing AI” or “you’re a dinosaur,” says DePaul University’s Daniel Stanford, who leads seminars on generative AI for faculty around the country.

Vaselena/Getty Images

Corey Robin, distinguished professor of political science at Brooklyn College and the City University of New York Graduate Center, has reluctantly considered implementing in-class exams for the first time in 30 years. He detailed the evolution of his thinking this summer in a personal blog post, “How ChatGPT Changed My Plans for the Fall.” The post did a great job of conveying the sense of grief and mixed emotions many writing instructors are feeling, according to Daniel Stanford, a lecturer at DePaul University’s Jarvis College of Computing and Digital Media.

But when Stanford read the comments, he was bothered by the vitriolic tone of the debate surrounding AI’s role in teaching and learning—a tone he and other academics have also seen surface in real life.

“What’s the real harm for students who opt to cheat by using AI to write papers in passing the class?” a commenter who identified as Jason Mittell, professor of film and media culture and American studies at Middlebury College, wrote. “After 23 years of teaching, I’ve come to realize that my job is neither to police students who don’t want to learn nor to rank students via grades, but to maximize learning for those who want to learn and try to inspire the others to try to join in the learning.”

At least one person disrespectfully disagreed.

“Teachers who ask questions like yours tend to see school as just a personal development system,” a commenter who identified as “education realist” replied. “Not sure why you were able to get this far in teaching without seeing the catastrophic impact that laissez faire attitudes towards cheating have on the entire system. Teachers like you kill the system entirely, creating ever more cheaters.”

Much like early humans banded together to fend off threats from packs of wolves, some faculty members have united to fend off real or perceived threats that artificial intelligence poses to education. Such alliances, even when unofficial, are survival mechanisms, according to evolutionary psychologists. The divisions echo the allied social groups that formed during past higher ed disruptions, including the emergence of online learning and efforts to diversify literature curricula.

“AI has brought up very visceral feelings about academic integrity, whose job it is to enforce it and how it should be enforced,” Stanford said. “There’s a level of intensity to that that I haven’t seen before.”

Safety in Numbers

Humans are wired to seek connections based on shared identities, values and goals. The resulting social groups—sometimes known as tribes—can confer individual and societal benefits, according to Arash Javanbakht, director of the Stress, Trauma and Anxiety Research Clinic at Wayne State University and author of the book Afraid: Understanding the Purpose of Fear and Harnessing the Power of Anxiety.

Many people appreciate being part of a social group that is bigger than themselves. A sense of belonging activates the brain’s reward systems, which in turn fosters loyalty. Such neural activity promotes empathy and cooperation among group members.

“Tribalism is an evolutionary mechanism that helps us to survive,” Javanbakht said. The term often conjures partisan divisions, he added, but tribes can help each other. When historic floods recently struck Vermont, for example, people from other U.S. states and regions joined together to send help.

To be sure, social groups may disagree as part of healthy discourse. But when a social group feels that its identity, values or goals are under threat, its members may regress to more primitive, aggressive states, Javanbakht said.

“You can take the person out of the Stone Age, but you can’t take the Stone Age out of the person,” Nigel Nicholson, emeritus professor of organizational behavior at the London Business School, wrote about evolutionary psychology in Harvard Business Review. (Of course, modern humans have more opportunity than early humans to broadcast hate, given the internet. But in an ironic twist, researchers at Cornell University are exploring how AI may help nudge people toward civility in online discussions.)

Uncertainty and anxiety about the role of artificial intelligence in teaching and learning are running high among faculty members, which may nudge them into oppositional, values-based social groups, Javanbakht said.

“There’s a lot of fear,” said an English faculty member at a Midwestern college who spoke about the sensitive situation on the condition of anonymity. (The individual, who leads a group focused on AI in education, has been targeted for their views on this topic.) “Many say it’s naïve to think that we can create a feeling of trust in our classroom, so that our students will actually do what we ask them to do in the way that we ask them to do it. I get that if you’re not in my classrooms, maybe you think that’s naïve. But I know that this works.” For this faculty member, the tensions are reminiscent of the polarization surrounding discussions of teaching diverse voices in literature courses.

Anxiety had been running high among faculty even before the rise of generative AI.

“There’s already job insecurity,” Stanford said. “There are compensation challenges. There’re moves to unionize. There’s the adjunctification of higher ed.” Now, generative AI tools have left some academics asking, “Do people still value my subject area, my expertise, my discipline? Do people still value learning this in the way I think it should be taught?”

‘Fully Embrace AI’ or ‘You’re a Dinosaur’

Stephanie Laggini Fiore, associate vice provost and senior director of the Center for the Advancement of Teaching at Temple University, has seen a “mild version” of divisions over AI’s role in teaching and learning at her institution. Some are willing to question or change their practices. But others, driven by what she presumes is grief or feeling overwhelmed, resist and see differences in stark terms.

“It can be ‘your way of thinking is backward’ or ‘you’re not moving with the times,’ whereas others turn around and say, ‘you’re allowing students to do whatever they want in your class, and that erodes our standards,’” Fiore said, adding that tense divisions among faculty are not unique to generative AI. Temple, for example, has long offered professional development on evidence-based teaching practices. In the past, some faculty have argued that new practices are inappropriate for specific disciplines, are not rigorous or are not student-centered.

“We need to experiment and give ourselves and others grace to make mistakes and to come back and try again,” Fiore said. “We aren’t at a place yet where we really understand what’s going to replace … assignments that are vulnerable to the [AI] tools.”

Sometimes, administrators fuel the divisions. Recently, Stanford, who leads faculty development seminars on generative AI around the country, observed a college leader who left little room for faculty members to express concerns.

“There was very much this tone of ‘you’re either ready for the change and fully embracing it’ or ‘you’re a dinosaur,’” Stanford said. “That has a really detrimental effect, because you’re accusing people of not caring about students.”

The divide reminds Stanford of his earlier work helping instructors learn strategies for online teaching. Back then, some instructors saw possibility while others felt the move online compromised all they held dear about teaching in their discipline.

“We’ve done such a bad job with change management in higher ed,” Stanford said. “Why does a policy conversation about AI in your syllabus turn so quickly into ‘you don’t care about cheating!’?”

Instructors who encourage students to use AI to brainstorm or assist with early drafts often argue that the technology fosters innovation in teaching and promotes access in learning. But others say that such use bypasses writing assignment goals.

“It’s not just that some faculty members disagree with the idea of using AI, but they think that anybody who is choosing to do AI literacy in their classes or help their students to understand how to use AI as a tool is part of the problem,” the Midwestern English faculty member said.

Bridging AI Divides

Humans cannot override their innate drive to bond over shared attributes. But they can mingle with those from other groups to foster more connections. In the process, new social groups may emerge.

“It’s hard to hate people you know,” Javanbakht said. “Don’t avoid this [AI] conversation. The less unknown, the less scary.”

Academics are accustomed to guiding their work with scholarship. But research on teaching and learning with generative AI tools is in its infancy. Nonetheless, some instructors have adapted by taking a holistic approach.

Research around academic integrity and student motivation can inform AI teaching practices, Fiore said. When students understand why they’re doing a particular assignment, they are more motivated to do it well. Also, faculty members can draw from research on building positive relationships with students. Students will “go the extra mile,” Fiore added, for a faculty member with whom they’ve built a relationship.

But before instructors overhaul their courses or redesign assignments to incorporate generative AI tools, many first need to voice concerns without judgment.

“Faculty have been in crisis mode for years,” Stanford said. “They’re exhausted.”

During the pandemic, many institutions reminded faculty members not to forget self-care, but opportunities to act on that message were often inconsistent, Stanford said. At the same time, not everyone responds to programming designed to foster human connections.

“When I’ve tried a more touchy-feely approach to faculty development, I’ve had some folks come back to me and say, ‘You’re misreading my frustration as me needing a hug or me feeling unsure of myself, and that’s not it. Don’t try to hug me. I know what I’m doing. I just need you to tell me how to use this tool and do the thing my administrators are asking me to do … because what you’re asking me to do right now, I don’t agree with. I feel deep in my core that it’s wrong,’” Stanford said.

Some resist the technology because they are angry about algorithmic bias, because they are concerned about encouraging students to surrender personal data to tech companies, or because they deem it inappropriate for their discipline.

“You have to hear those folks out, too, and have a plan for how to address [concerns on] both sides,” Stanford said. He’d like to see more investment in faculty learning communities and retreat-like events in which faculty work through serious teaching challenges but also set aside time for human connection and self-care.

“But this type of faculty development is challenging to get funded because it requires us to value human connection, belonging and emotional support as much as we value a learning outcome like, ‘redesign writing assignment X to address that new AI tool everyone is talking about.’”
