I thought asking folks about setting traps to catch academic dishonesty might generate some responses. It certainly did.

A few common themes emerged.

One was the difficulty of administering more labor-intensive types of exams, such as oral exams, at universities where entry-level classes routinely have 300 students in a section. Worse, the instructional budgets at those places are built on the economies of scale that large sections generate; replace one section of 300 with a dozen sections of 25 each, and either the labor costs skyrocket or the issue of adjunct exploitation gets even worse. Or both.

That’s true, as far as it goes. I don’t have an easy answer for maintaining the economies of scale of arena-size classes while preventing cheating. This might be where the places that run classes like that need to draw upon the expertise and experience of their own faculty: convene a group of faculty who’ve been around a while and who care deeply about the issue, pay them for their time, and charge them with developing some recommendations. It might make sense to convene a couple of groups: one for STEM and one for everyone else. The issues in STEM classes seem to be pretty specific, so it’s probably best to ask the folks who know those issues most closely.

Several others suggested the return of blue books. The classic in-class, closed-book exam does make certain kinds of cheating harder. Combine that with randomized question orders, some open-ended questions requiring application, and a series of low-stakes assignments over the semester, and the payoff for AI is reduced. I understand the impulse, and there are probably times when this is the best option. But it consumes precious class time, and it tends to favor noncontextual learning of the sort that can be simulated easily through rapid cramming followed by rapid forgetting. Also, speaking on behalf of those of us whose handwriting is best described as abstract, the idea of losing points due to idiosyncratic handwriting seems silly.

I’ll confess to a spot of hypocrisy on this one, though. My favorite technique for in-class essay exams was to give students five possible questions the week before and to tell them that four of the five would be on the test and that they’d have to answer two. (The numbers may be slightly off, but you get the idea.) They could bring in one index card with handwritten notes. I remember a student congratulating me for having tricked him into studying. Guilty as charged.

Still, I couldn’t help but notice that while many respondents focused on techniques, none engaged the ethical question. Is it ethically okay for professors to flood illicit websites with course disinformation to suss out cheaters? Is disinformation AI’s kryptonite?

Wise and worldly readers, what do you think?