
Writing here at Inside Higher Ed, Ray Schroeder argues that “it is our urgent responsibility to teach students how to use [AI] in their discipline.”

I agree, but I found the proposal for what we’re supposed to do after that opening call to arms rather murky, and I think some of the claims about the future of the workplace, and higher education’s role in preparing students for those jobs, could use some additional interrogation.

Here are some questions I think we should be grappling with in the context of institutional responsibility to teach students how to use AI in their discipline.

How certain are we that AI is actually going to be useful?

I understand there is significant enthusiasm about the potential increases in productivity afforded by the integration of generative AI tools into the workplace, but as of yet, we have no definitive evidence of the industries or activities in which this technology is a difference maker. In fact, a recent survey of full-time employees by Upwork found that over three-quarters of respondents say “these tools have actually decreased their productivity” (emphasis mine).

We may also be looking at a temporary bubble when it comes to generative AI. Tech observer Ed Zitron suggests that the pace of spending at OpenAI, coupled with its extremely limited existing revenue, may be an actual existential threat to the company.

Goldman Sachs issued a June report titled “Gen AI: Too Much Spend, Too Little Benefit?” that threw considerable cold water on the idea of the nascent AI revolution as a significant disrupter of the business status quo.

It seems undeniable to me that AI is here to stay in some form, but each day, week and month that passes without a tangible, transformative use case suggests that it may not be as revolutionary as it once seemed.

Should we be eager to retool at the program and curriculum level for something that is, at this time, unproven? Am I the only one who remembers the MOOC fad, or the insistence that everyone should learn to code, or the promise that everyone getting a STEM degree would be transformative?

What does teaching students to use AI look like, in concrete terms?

For the most part, Schroeder talks in terms of undefined “generative AI skills.” The only specific skill given any mention is “research,” which he says is “often the most important to those who use the tool.” But what are we supposed to teach students about generative AI and research?

Schroeder describes how generative AI tools, when employed for research, work like this:

“The impressive ability to synthesize information, draw reasoned conclusions and point to other sources of information that may add clarity to the topic that is under study makes this technology stand out from common indexes and search engines.”

I do not mean to be unkind, but there are many flatly incorrect statements here about how generative AI functions. Large language models do not “synthesize” in the way we think of the word in research terms. They generate text according to the token-prediction algorithms at work in the model.

LLMs do not draw “reasoned conclusions” because there is no process of reasoning as part of these models. They are famously incapable of discerning truth from falsehood, a foundational aspect of reasoning.
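To make that distinction concrete, here is a minimal, invented sketch in Python of what token prediction amounts to. Everything in it, from the context string to the probabilities, is made up for illustration; the point is only that the mechanism rewards statistical likelihood, not truth.

```python
import random

# A toy, invented illustration of next-token prediction. The context and
# the probabilities below are made up for this example; a real LLM learns
# its distribution from billions of parameters, but the selection step is
# the same in spirit: pick the statistically likely continuation.
next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # common in casual training text, but wrong
        "Canberra": 0.40,  # correct, but written less often
        "Melbourne": 0.05,
    }
}

def predict_next(context: str) -> str:
    """Sample the next token in proportion to its learned probability."""
    distribution = next_token_probs[context]
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next("The capital of Australia is"))
# Whatever prints is the *likely* continuation, not the *true* one --
# there is no truth check anywhere in this process.
```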

It is true that generative AI tools can surface information that may not be as accessible through existing indexes and search engines, but this does not make them inherently better or more powerful. It makes them merely different. For sure, understanding these differences, and how and when one tool is more or less suitable than another, would be a good thing to teach students. But as tools of research, they are, in many ways, incompatible with the values we expect students to bring to the research we do in academia.

At the top of my list in achieving that goal is making sure students understand that they cannot rely on generative AI to do research, because its very design means it will make stuff up.

So, what are the skills we should be teaching?

In his piece, Schroeder links to a Times Higher Education report on “Getting Workplace Ready” that explores the skills some believe are going to be useful in working with generative AI tools.

What are the skills that report suggests we focus on to prepare students for a “future we can’t yet imagine”?

  • Creative thinking
  • Analytical thinking
  • Technological literacy
  • Curiosity and lifelong learning
  • Resilience, flexibility and agility
  • Systems thinking: viewing entities as connected, mutually interacting parts of a larger whole
  • AI and big data: working with sets of information that are too large or too complex to handle, analyze or use with standard methods
  • Motivation and self-awareness
  • Leadership
  • Empathy

As the report says, these are so-called soft or nontechnical skills, the kinds of skills that would transfer across many different domains rather than being AI-specific.

I would not necessarily claim that institutions are doing a great job at teaching these skills, but they strike me as a list of known knowns in terms of the kinds of learning that are most useful for students.

None of this is new.

Shouldn’t we be teaching an adaptation mindset rather than AI skills?

I finished graduate school in 1997, just as the productivity tools of personal computing (like the MS Office Suite) and the internet arrived in a way that transformed the kinds of products we produced, how those products were produced and the speed at which they were produced and disseminated.

I recall having precisely zero difficulties making this transition.

When I started my job as a trainee and then assistant project manager at the marketing research firm Leo J. Shapiro & Associates, I had never heard of PowerPoint. Within days I was producing massive slide decks filled with graphs, tables and text. A couple of years later, I was chosen to be the person who learned how to program a computer-based (as opposed to paper and pencil) survey on a new piece of software. I had no difficulty learning this skill.

What did I learn in graduate school that transferred to the kinds of skills I’d need to move up the ladder at a marketing research firm in the midst of a transition into the internet era? How to do a poetry explication of a Gerard Manley Hopkins sonnet.

I was taught how to think critically, to analyze audience and intent, to communicate clearly. These will never go out of fashion.

Now, the use of generative AI tools may prove more complicated than the transition people of my generation lived through, but my transition was served well by being educated, rather than trained.

I say this often in regard to generative AI, but it’s worth repeating: Prior to the arrival of ChatGPT in November 2022, very few people had any hands-on experience interacting with large language models. The people who are using them productively today were not trained in the specifics of generative AI but in ways of thinking that allow them to make use of the tool as an aid to human work, rather than outsourcing their thinking to something that does not actually think or reason.

Schroeder’s framing of the challenge of developing AI “skills” is far too narrow. I have framed my own thoughts on the teaching and learning challenges around AI as questions because I think it would be a mistake to suggest that we are standing on solid ground regarding what this technology means in our work and our lives.

We need to do our best to make sure graduates can be agents in the world, not servants to the technology.
