So the University of Michigan is going to be experimenting with a “text-analysis” tool added to its existing M-Write digital/algorithmic writing pedagogy initiative, which is meant to integrate more writing into large courses, particularly in the sciences.

The goal is to create a “feedback loop” in which the algorithm searches out key words and concepts within responses to “pre-programmed prompts,” the algorithm having been trained on previous student answers to deliver a “predicted score.” The algorithm will direct students who may need extra help to writing fellows (former students who did well) for one-on-one assistance.
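To make the mechanics of that loop concrete, here is a minimal, purely illustrative sketch of how keyword-based prediction and threshold routing might be wired up. This is not M-Write’s actual code; the keyword weights, cutoff, and function names are all invented for the example.

```python
# Illustrative sketch only -- not M-Write's implementation.
# Keyword weights one might derive from previously scored student answers
# (hypothetical values).
TRAINED_KEYWORDS = {
    "equilibrium": 2.0,
    "entropy": 1.5,
    "reversible": 1.0,
}

# Hypothetical cutoff below which a student is referred to a writing fellow.
SUPPORT_CUTOFF = 2.5


def predicted_score(response: str) -> float:
    """Score a response by summing the weights of trained keywords it mentions."""
    text = response.lower()
    return sum(weight for word, weight in TRAINED_KEYWORDS.items() if word in text)


def route_student(response: str) -> str:
    """Decide whether the predicted score flags the student for extra help."""
    if predicted_score(response) < SUPPORT_CUTOFF:
        return "refer to writing fellow"
    return "no intervention"


if __name__ == "__main__":
    sample = "The system reaches equilibrium when entropy stops increasing."
    print(predicted_score(sample), route_student(sample))
```

Whatever the real model looks like, the basic shape is the same: the score stands in for a human judgment, and a threshold decides who gets a person’s attention.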

Regular readers who have noticed my tech-skeptic nature may think I would be against this initiative, but I actually support these sorts of experiments because they are overseen by academics who are motivated by a drive to improve student learning. The laboratories of education technology are rarely the problem. Only when the technology escapes the lab and lands in the world of commerce with its primary values of efficiency and profit do we see trouble.[1]

And yet, I of course have questions and concerns that are raised by this kind of experiment, even as I wish the University of Michigan and M-Write well.

For example, the M-Write program exists inside a context where the large lecture is an unassailable fact of nature. Rather than putting resources toward making class sizes smaller – which we know works for education – the effort goes into automating aspects of the large-class experience in ways that will allow humans to intervene where “needed.”

I am disconcerted by an educational model where students primarily receive attention when they’re “struggling.” This suggests a framework where the goal of education is simply to stay off the algorithm’s radar rather than to maximize each student’s potential.[2]

I am also concerned with the notion that more writing is better for students, period, end of story.

This is simply not true. Writing outside of a rhetorical situation, without purpose and audience, doesn’t really do much to enhance subject learning – the focus of M-Write – or to develop writing skills. Where a writing prompt or assignment is primarily meant to measure recall or comprehension, assigning writing – particularly writing at length – rather than using a different assessment instrument isn’t necessarily an advantage. If an algorithm can assess the piece of writing, I question the use of writing for the assessment.

In fact, one of the first things I do when working with non-English faculty on designing writing assignments is to make sure what they’re planning actually lends itself to writing. To be worthwhile, writing needs to tackle a problem requiring the writer to not just recall information, but also use knowledge in order to create more knowledge. If the question doesn’t involve analysis or synthesis, there’s likely a more appropriate assessment tool available.

Those possible quibbles aside, I think there is a larger concept to – as we say in academia – “unpack”: this idea of the “feedback loop.”

Focusing on the “feedback loop” gets a lot right about writing, which is a recursive process that relies heavily on revision. Upon receiving feedback, the writer seeks to incorporate it into the next iteration, in theory improving the writing with each pass through the loop.

In the classroom with a single instructor, we soon bump up against our very human limits to provide meaningful feedback. Time only allows for so much. Peer feedback fills some of this hole, and yet peer feedback is undeniably different from instructor feedback.

Feedback is also usually about performance and grades. I find talking about the grade during the writing process generally unhelpful. Students should be focused on something other than “How do I get an A?”

In my courses, rather than peer feedback, I use “peer response.” I ask students not to assume the role of instructor (because how could they?) but to stand in for the intended audience for the writing and to react and respond as that audience might. This is done entirely through questions posed about the piece of writing, which students answer about their peer’s work but, more importantly, can also answer about their own work.

Properly used, peer response is not an exercise for the writer to “fix” their piece according to guidance from the peer. Instead, a peer response is an opportunity to engage in what we know is perhaps the most important skill to practice for developing writers and thinkers, “critical reflection.”

The M-Write algorithm/rubric is only able to detect what the student is thinking, and in a meaningful written response, particularly during the drafting process, how the student is thinking (metacognition) is much more important.

An algorithmic score that predicts their grade does not orient students toward more productive reflection. Even if they are then directed toward a writing fellow, the discussion remains at the level of how to get a better grade, which runs the risk of devolving into pleasing the rubric/algorithm.[3]

By privileging a narrow, predicted, and predictable range of outcomes, these sorts of initiatives distort the writing and thinking process as we wish to see it practiced in the wider world, where rubrics don’t exist.

Students who are confined to large lectures will forever have to perform well inside the box, even as the box is enhanced with digital tools. This is no different than K-12 students who are subjected to the kinds of standardized multiple-choice assessments that the M-Write algorithms and tools are meant to substitute for.

Right back where we started.

Rather than creating a “feedback loop” centering student learning around an algorithm, we should be seeking ways to empower students to make discoveries.

Now some of these discoveries may be ideas others have already “discovered.” Much of my writing about writing involves conceiving of something I think is “new” and then finding it is in reality quite well established, but because I uncover these ideas for myself, they take far greater hold in my existing knowledge than if I’d been fed them by another. They also fuel an intrinsic drive that is far more potent and longer lasting than any external motivator.

This is why I prefer a writing process for students that privileges critical reflection over the kind of feedback loop M-Write appears to be working towards. Rather than acculturating students to a system that values scores and triggers an instructor (or tutor) intervention, critical reflection requires the students themselves to recognize when their own thinking is off track and seek remedy.

Critical reflection also scales at least as well as what M-Write is pursuing. In the end, you always need a well-trained instructor to step in when necessary, but M-Write already recognizes this fact.

On the whole, M-Write looks like an initiative that’s oriented around achievement (grades), but I’m not convinced that achievement and learning are synonyms in this context.


[1] My big picture worry is that this cycle is inevitable. The second a technology shows even limited promise (MOOCs…cough…cough), the monetizers promising disruption and the University of Everywhere arrive to ruin a good thing.

[2] It’s not lost on me that flying below the radar was my goal for my own large-university educational experience, but this is exactly why I am troubled by enshrining such values in the digital university. It wasn’t a particularly good way to do college.

[3] The very best answers to a writing prompt, meaning the answers that both introduce new knowledge and provide a vehicle for the student to learn as much as possible, will always defy these algorithms because they are at least a little outside what’s been predicted.
