
Should someone’s sentence for a crime be based on the risk of them committing another crime in the future?

What if the calculation of that risk were “data-driven,” the risk determined by the defendant’s age, education, employment status, finances, neighborhood, and family history?

Would it surprise you that we’re not talking about a Philip K. Dick novel, but a practice that’s already at work in 20 or more states?

I was surprised, at least. I learned about the practice in a New York Times op-ed written by University of Michigan Law Professor Sonja B. Starr in support of a Justice Department letter criticizing the rise of so-called “evidence-based” sentencing.

The movement was born of the cultural fascination with Moneyball, Michael Lewis’ study of how data science helped the Oakland A’s field a competitive baseball team on the cheap. Its supporters claim it improves the “accuracy” of pre-sentencing hearings, which have historically been overseen by judges and are open to their subjective judgment within certain guidelines set by the law.

These judgments, subjective as they inevitably are, have been deemed “flawed.”

But in some cases “evidence-based” sentencing may mean a literal visiting of the sins of the father upon the sons, since a family history of incarceration may be one of the criteria considered. Biblical, perhaps, but most definitely not constitutional, as Professor Starr argues[1]. Basing a person’s risk of recidivism on socioeconomic and demographic factors entirely outside their control guarantees the perpetuation of an unjust society and flies in the face of our most basic founding principle: that all men are created equal.

As the algorithm deems more and more people should be locked up for longer sentences, and the crime rate drops, its supporters can claim victory.

Ultimately, if left unchecked, the algorithm will achieve a kind of “perfection,” “improving” itself with each iteration, finding more and more risk factors – childhood diet, sodium level, maybe even DNA – until it runs out of people to lock up.

Researchers at Samford University’s Cumberland School of Law demonstrated what happens when you leave law enforcement to robots. As reported by Wired, co-author Woodrow Hartzog and his fellow researchers asked “52 different coders to create a program that would issue speeding tickets based on a sensor placed within a car.”

Some programmers were told to follow the spirit of the law, others “the letter.” “Programs that followed the letter of the law ended up issuing as many as 1,000 tickets for a single car trip.”
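To make the contrast concrete, here is a minimal sketch of the two interpretations. It is my own illustration of the distinction Wired describes, not the researchers’ actual code, and the speed limit, tolerance, and trip data are invented.

```python
# Hypothetical illustration of "letter" vs. "spirit" ticketing; not the study's code.
SPEED_LIMIT = 65  # mph

def tickets_letter_of_the_law(readings):
    """Issue a ticket for every single sensor reading over the limit."""
    return sum(1 for speed in readings if speed > SPEED_LIMIT)

def tickets_spirit_of_the_law(readings, tolerance=5):
    """Issue at most one ticket per sustained violation, allowing a small tolerance."""
    tickets = 0
    speeding = False
    for speed in readings:
        if speed > SPEED_LIMIT + tolerance:
            if not speeding:      # a new violation begins
                tickets += 1
                speeding = True
        else:
            speeding = False      # the violation ends
    return tickets

# One trip, sampled once per second: the driver briefly drifts over the limit while passing.
trip = [63, 64, 66, 67, 66, 64, 63] * 150
print(tickets_letter_of_the_law(trip))   # hundreds of "violations"
print(tickets_spirit_of_the_law(trip))   # 0
```

On that simulated trip the literal version racks up hundreds of tickets while the tolerant one issues none, which is roughly the gap the researchers found.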

It turns out that “human judgments like mercy and compassion” play an indispensable role.

Reading Professor Starr’s op-ed and the article on Professor Hartzog’s research brought me back to some thoughts I had earlier in the summer about Purdue’s Course Signals software, a program designed to alert instructors and students when a student is at risk of doing poorly in a course.

I expressed some concerns about Course Signals and automated advising programs such as ASU’s eAdvisor, which creates a “personalized” degree path for students by identifying majors where they can “succeed and graduate on time.”

What success means in this context isn’t clear, but one assumes we’re talking about grades. Courses are recommended not based on student curiosity or interest, but on the likelihood of completion.

If a student wanders “off-track,” they may even be forced into changing majors.

The software is obviously well-intentioned. An education is costly, and students who leave school with debt but without degrees consign themselves to lives of penury.

But when people say that these pathways are “personalized,” they gloss over how they’re “personalized,” which is by using socioeconomic and demographic data to draw conclusions about the individual. This is the “science” of predictive analytics. Depending on one’s race or gender or any number of other factors entirely outside the individual’s control, different students in similar academic standing may be shown different paths.
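To see what that can mean in practice, here is a deliberately toy sketch of a completion-risk score of the general kind predictive analytics tools compute. The feature names, weights, and threshold are all invented for illustration; this does not describe how eAdvisor or Course Signals actually work.

```python
# A simplified, hypothetical scoring model; features and weights are invented.
from dataclasses import dataclass

@dataclass
class Student:
    gpa: float
    credits_completed: int
    first_generation: bool   # demographic factor outside the student's control
    household_income: float  # socioeconomic factor outside the student's control

def completion_risk(s: Student) -> float:
    """Invented weights standing in for a trained model's coefficients."""
    risk = 0.5
    risk -= 0.1 * (s.gpa - 2.5)
    risk -= 0.002 * s.credits_completed
    risk += 0.15 if s.first_generation else 0.0
    risk += 0.1 if s.household_income < 40_000 else 0.0
    return max(0.0, min(1.0, risk))

def recommend_major(s: Student, desired: str, safe_alternative: str) -> str:
    # The same transcript can yield different advice once demographics enter the score.
    return desired if completion_risk(s) < 0.5 else safe_alternative

a = Student(gpa=3.0, credits_completed=30, first_generation=False, household_income=90_000)
b = Student(gpa=3.0, credits_completed=30, first_generation=True, household_income=30_000)
print(recommend_major(a, "Engineering", "General Studies"))  # Engineering
print(recommend_major(b, "Engineering", "General Studies"))  # General Studies
```

Two students with identical transcripts get different recommendations once the demographic and socioeconomic features enter the score, which is exactly the worry.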

As of yet, as far as I can tell, most of this software is used in conjunction with human advisors – an analytical aid, if you will. There is still an element of human discernment in the process. I’m assuming that if a student deeply desires taking a shot at a particular field of study, an institution of higher education does not forbid it just because the software finds their chances of success unpromising.

But how long until the algorithm takes precedence? What happens when too much human judgment overrides the software and degree completion rates drop?

What happens when the rate of degree completion is tied to federal dollars, as may be coming down the pipeline via the Obama administration’s plan for rating colleges?

We should maybe not be surprised that a systemically unjust judicial and corrections system moves toward policies and practices that dehumanize, but I am brought up short by the realization that institutions of higher education have embraced the same ethos.

This is not stereotype threat, but something further-reaching: “stereotype reinforcement and realization.”

The inevitable result of “data-driven” advising absent human judgment is the same as “data-driven” sentencing. We will permanently bake systemic inequities into the cake.

If we are going to continue to use these tools – and I sincerely hope we don’t because I do not trust them in our hands – we should at the very least practice a significantly greater degree of transparency. If Course Signals flags a student, they should know what factors play into this algorithmic judgment. If eAdvisor recommends a degree path, students should know exactly why.

Efficiency is not an educational value. It is not tied to any human emotional experience of the world, and yet again and again it seems to take precedence in our debates about the future of higher education.

If universities aren’t going to practice freedom and offer space for young people to explore their potentials, who is?

--

I can't wait for school to start. Without it, I'm spending too much time on Twitter, which is simultaneously enlightening and anger-making.

[1] In her op-ed Professor Starr says the legal challenges have been slow to come because the “risk prediction” instruments lack transparency, and in some cases are shielded from discovery because they are corporate proprietary products.