A little over 10 years ago in this space, I wrote a manifesto in the form of a list against the use of algorithmic grading of human writing.

The occasion was the announcement that MOOC provider edX had created what it called a “workable model” for automated grading. I was skeptical about the model’s effectiveness (with good reason, since it was never rolled out on a broad basis), but the arrival of the current generation of generative AI tools has, to some degree, demolished that skepticism. AI models are definitely capable of delivering plausible feedback on writing, particularly when they are carefully prompted around the criteria by which the writing is to be judged.
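
For readers curious what “carefully prompted around the criteria” looks like in practice, here is a minimal sketch using the OpenAI Python SDK; the rubric text, the model name and the helper function are illustrative assumptions of mine, not any particular product’s design:

```python
# A minimal sketch of rubric-anchored feedback prompting.
# The rubric, model name, and function name are illustrative
# assumptions, not a real grading product.
from openai import OpenAI

RUBRIC = """\
1. Thesis: Is there a clear, arguable claim?
2. Evidence: Are claims supported with specific examples?
3. Organization: Does each paragraph advance the argument?
"""

def draft_feedback(essay: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing tutor. Comment on the student's "
                    "essay strictly against the rubric below, one note "
                    "per criterion.\n\n" + RUBRIC
                ),
            },
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content
```

That a couple dozen lines can produce plausible-sounding comments is precisely the worry; plausibility is not reading.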

And yet, I’m here today to renew my objection to the automated grading of student writing, and my objection is rooted in the culminating point of that list.

“The purpose of writing is to communicate with an audience. In good conscience, we cannot ask students to write something that will not be read. If we cross this threshold, we may as well simply give up on education. I know that I won’t be involved. Let the software ‘talk’ to software. Leave me out of it.”

In my previous post, I wrote about how producing the syllabus for one’s course is among the most important work an instructor can do. To outsource it to AI is to willingly cede some portion of your humanity.

I believe this even more strongly about the work of engaging with student writing. Reading and responding to students’ writing is the job. It’s a line that people who believe in the importance of writing, critical thinking and communication should not cross, no matter how proficient or efficient the AI seems.

That people have been amazed that ChatGPT could turn out plausible student essays is not a testament to the advanced nature of the technology. It is an indicator of the cramped nature of how we ask students to engage with writing in school contexts.

Similarly, if it appears that AI can deliver feedback as well as the human instructor, this is a sign that either the assignment or, more likely, what’s valued in terms of assessment should be reimagined.

It is going to take some time to fully understand the implications of this technology, to train faculty and students on its use and limitations and consider the full ethical and moral dimensions of its potential integration—time the corporations will not give us as they rush for gold.

For that reason, some lines must be drawn and held while we figure these things out.

Large language models cannot read. They cannot think. They cannot feel. They do not communicate in the way humans communicate.

It is simply wrong to ask students to produce writing that will be fed through the AI without any engagement with a human on the other end.

It’s not a slippery slope; it’s the bottom of the hill itself.
