
Illustration: a robotic blue chatbot head wearing a graduation cap, next to a rolled-up diploma. (Moor Studio/DigitalVision Vectors/Getty Images)

Elon University and the American Association of Colleges and Universities recently released their “AI-U: A Student Guide to Navigating College in the Artificial Intelligence Era,” and there’s much to admire in it. On the practical side, the guide repeatedly emphasizes the need for students to understand the specific AI policies of each course and acknowledges the authority of individual instructors to set these guidelines.

On a broader level, the guide encourages students to use AI ethically and underscores the importance of a liberal arts education, emphasizing the value of a broad and adaptable knowledge base in preparing for the technological changes ahead. In my role as the director of Writing Across the Curriculum at a small liberal arts college, those are exactly the ideas I emphasize in my own classes and faculty development work around AI, and I’m happy to see them here.

And yet, despite its strengths, the guide ultimately exemplifies how easy it is to say all the right things while still sending the wrong message. The guide touches on the complexities of AI use but does so within a framework that prioritizes the rapid and uncritical adoption of AI over a deeper, more thoughtful approach. The epigraph from Richard Baldwin on the second page sets the tone, stating, “AI won’t take your job. It’s someone using AI who will take your job.” The message is clear: Learning to use AI is about getting ahead of your competition. Students who learn how to use it will win, and everyone else will be unemployed. That approach doesn’t engender a great deal of ethical consideration or critical thought.

This message permeates the guide, often undercutting the very places where the authors explicitly recommend ethical consideration and critical thought. Just two pages later, students are encouraged to ensure that their work with AI assistance reflects their “own thoughts, words and tone of voice.” Yet, on the same page, the guide also advises students to use AI “to brainstorm,” “help you think about organization” and “adjust your writing style to suit your audience.”

If AI suggests the topic, writes the outline and then tweaks the final writing style, can we still reasonably consider that work to be the student’s “own thoughts, words and tone of voice”? This is a genuine dilemma, and it’s precisely the kind of question that should be front and center, not relegated to a writing checklist.

This contradiction is part of a broader, troubling trend in the plethora of AI guides and resources currently flooding higher education. Too often, the complex practical, ethical and pedagogical issues surrounding AI are brushed aside in favor of breathless promotion of its capabilities—language that often feels more like Silicon Valley marketing than thoughtful educational guidance.

To be clear, I’m not arguing against the importance of AI literacy—far from it. As a faculty member and development specialist, I’ve spent a significant amount of time encouraging both students and colleagues to engage with AI. I believe everyone in higher education needs a working understanding of what this technology can do and how it’s reshaping our work. We should be addressing AI throughout our curricula, but that means more than simply learning how to use it effectively. The rapid development of AI has created a sense of urgency, pushing us to integrate these tools quickly, often at the expense of thoughtful, meaningful discussion about the more challenging questions AI raises.

How will relying on AI for various intellectual tasks impact students’ learning? How do we address the equity issues that arise with AI, from who has access to these tools to who is represented in their outputs? What about the environmental cost of the hardware that powers AI, or the potential shifts in wealth and intellectual property that AI might cause? And what do concepts like intellectual property, originality or creativity even mean in a world with generative AI? These are precisely the kinds of questions colleges and universities are uniquely positioned to explore, and we should be engaging with them openly and rigorously.

It’s deeply problematic to ask students to engage these questions while simultaneously telling them they must use AI at every opportunity to secure their future employment. This approach risks turning students into passive consumers of technology rather than critical users. This is especially concerning because the basic techniques often presented as AI literacy, such as AI-assisted brainstorming, prompt engineering and content iteration, are not particularly difficult to learn. There’s no shortage of guides and courses available online, all reinforcing the idea that anyone can learn to use AI tools. The challenge isn’t learning to use AI—that’s the easy part.

The real challenge lies in learning to use AI critically. We need to acknowledge that AI is here to stay and that we must adapt, but we also need to question what we gain and what we lose when we integrate AI into our educational practices. This is the kind of AI literacy we should be emphasizing, and it’s a conversation that’s notably absent in much of the current discourse.

George Cusack is director of Writing Across the Curriculum and a senior lecturer in English at Carleton College.
