To the editor:
I’m sympathetic to the overall thrust of Steven Mintz’s argument in Inside Higher Ed, “Writing in the Age of AI Suspicion” (April 2, 2025). AI-detection programs are unreliable. To the degree that instructors rely on AI detection, they contribute to the erosion of trust between instructors and students—not a good thing. And because AI “detection” works by assessing qualities like smoothness or “fluency” in writing, these tools implicitly invert our values: We are tempted to hold less structured or coherent writing in higher regard, since it strikes us as more authentic.
Mintz’s article is potentially misleading, however. He repeatedly reports that, when he tested detection software, his own and other non-AI-produced writing yielded certain scores as “percent AI generated.” For instance, he writes, “27.5 percent of a January 2019 piece … was deemed likely to contain AI-generated text.” Although the software Mintz used for this exercise (ZeroGPT) does claim to identify how much of the flagged writing is AI-generated, many other AI detectors (e.g., chatgptzero) instead report the probability that the writing as a whole was produced by AI. Both kinds of output are imperfect and problematic, but they communicate different things.
Again, Mintz’s argument is useful. But if conscientious instructors are going to take a stand against these technologies on empirical or principled grounds, they will do well to demonstrate an appreciation for the nuances of the various tools.