
After reading the umpteenth profile of ChatGPT, I decided to test it out this week. I asked it how many vice presidents of the United States later became presidents. It said nine.  

The correct answer is 15, a third of the total number of presidents.

It listed the nine it claimed: Adams, Jefferson, Van Buren, Tyler, Fillmore, Pierce, (Andrew) Johnson, Arthur and (Theodore) Roosevelt. (Pierce, for the record, was never vice president.) It left out Coolidge, Truman, (Lyndon) Johnson, Nixon, Ford, George H. W. Bush and, notably, Biden.

I was surprised. This seems like the sort of question that an AI engine should be able to handle easily enough. It’s factual, verifiable and based on very public information. Say what you want about Joe Biden, but the fact that he served as Obama’s vice president is not in dispute. We have pretty good records on that sort of thing.

At one level, the blatant mistake was encouraging. A student who relied solely on ChatGPT to answer that question would be easy enough to catch. And the omissions didn’t seem to have a partisan leaning; the relevant factor seemed to be whether they served in the last 100 years. (One could argue, I suppose, that it showed bias in leaving out Democrats and Republicans but including all of the relevant Federalists and Whigs. Certain techies are fond of a sort of Whiggish history …)

On another level, though, it’s pretty disturbing. The mistake is so basic that I have to wonder why the algorithm made it. If it gets a simple, factual question so badly wrong, what else is it getting wrong?

The emergence of generative AI tools poses a real challenge to certain kinds of academic assignments. How much can a student rely on an AI engine before crossing the line into plagiarism? Absent a really glaring mistake, like leaving out Joe Biden, how do you even know whether an AI engine wrote the paper? Some students have gone public (or quasi-public) bragging about how they’ve used AI to avoid doing work. Students who take a transactional view of assignments may welcome AI as a time-saver.

So far, at least in my experience, those discussions have largely been happening at the individual faculty or department level. At higher levels, I’ve seen a vague acknowledgement that something is afoot, but I haven’t seen much in the way of substantive discussion of what to do about it.

I suspect that’s because it potentially cuts to the heart of how many classes are organized. Out-of-class assignments allow for more productive use of precious class time and for different styles of work. But if fear of AI—whether justified or not—becomes too strong, then we can expect to see many classes return to in-class assignments as a defensive maneuver. (Of course, that’s difficult in really large classes.)

Alternatively, some forward-looking types may see AI engines as tools that will become ubiquitous and conclude that the task at hand is to equip students with the skills to use them intelligently. That requires a very different way of teaching, a different set of assignments and possibly a different schedule. It’s much more drastic than, say, the inclusion of calculators in math classes.

Wise and worldly readers, have you seen or had in-depth, campuswide discussions on the implications of ChatGPT and its variants? If so, are there any insights you could share? Our collective memory of Joe Biden may hang in the balance.
