Not long ago, San Francisco investor and entrepreneur John Greathouse penned an op-ed in The Wall Street Journal claiming to have found a solution to the tech industry’s diversity problem. Because of rampant bias in the tech industry, Greathouse suggested, female job candidates should “create an online presence that obscures their gender” in order to improve their employment prospects.

The response was swift and vicious. Concealing one’s gender in response to bias addresses the symptom rather than the disease (biased hiring managers/employers and biased hiring practices). Greathouse, critics contend, offered a “Band-aid”: a superficial and ephemeral solution that avoids dealing with a deep-seated systemic challenge.

The temptation to optimize the path that people take through dysfunctional systems isn’t, of course, limited to hiring practices. It is a familiar pattern in a higher education discourse obsessed with predictive analytics -- one that all too often avoids tough conversations about poor instruction and outdated pedagogy.

This temptation to fix people rather than dysfunctional systems reminded me of current conversations in education technology about how new technologies can improve student success -- specifically, the interplay between two powerful new approaches: predictive analytics and adaptive learning technologies.

Using predictive analytics as an early warning system to flag which students are likely to fail is becoming commonplace. The goal is as clear as it is noble: reduce the number of college dropouts by intervening early.
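
In practice, such an early warning system typically amounts to a model trained on past course outcomes that flags currently enrolled students whose predicted risk of failure crosses some threshold. The sketch below, in Python with scikit-learn, is a minimal illustration of that pattern only; the feature names, the tiny training set and the 0.7 alert threshold are invented for the example and do not describe any particular institution’s system.

```python
# Minimal sketch of an early-warning "at risk" flag.
# Features, data and the 0.7 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [midterm score, attendance rate, LMS logins per week]
X_past = np.array([[45, 0.60, 1], [85, 0.95, 6], [55, 0.70, 2], [90, 0.90, 5]])
y_past = np.array([1, 0, 1, 0])  # 1 = failed the course, 0 = passed

model = LogisticRegression().fit(X_past, y_past)

# Score current students and flag anyone whose predicted risk exceeds the threshold.
X_current = np.array([[50, 0.65, 2], [88, 0.92, 4]])
risk = model.predict_proba(X_current)[:, 1]
flagged = risk > 0.7
print(list(zip(risk.round(2), flagged)))  # advisers would be alerted for flagged students
```

Note what the output is: a ranked list of students to intervene with. The course itself is untouched.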

The New America Foundation recently published “The Promise and Peril of Predictive Analytics in Higher Education,” a report detailing the ethical concerns involved in using data to make predictions and the impact those predictions can have on underrepresented students. (I served on the advisory board for the project.) Yet the report overlooks the fact that, however well intentioned, early warning systems put the responsibility to change on the student, when what those of us whose job is to improve student success -- educators, administrators and policy makers -- really must do is change the system.

To illustrate, consider an example. In 2007, my colleague Ganga Prusty, a professor at the University of New South Wales, Australia, inherited a course in first-year engineering mechanics that had a 31 percent failure rate. The high-enrollment, introductory-level course teaches students concepts and techniques for solving real-world engineering problems, and success in it is a prerequisite for most engineering-related majors. The high failure rate meant that nearly a third of students couldn’t pursue their dream of becoming engineers -- and this, mind you, in an economy starved for STEM graduates.

At the time, I was doing my Ph.D., building something I called the adaptive e-learning platform -- years later it would become the technology behind Smart Sparrow, the company I founded -- and trying to find ways to create digital learning experiences that were more than PDFs and PowerPoints. I was introduced to Prusty because our dean thought it would be useful to apply this new technology to real-world problems. For the first time, I found myself searching for new solutions to what is essentially a very old problem: student success.

Yes, Prusty could have intervened with at-risk students and advised them to consider another major, but is that what he should have done? Should he not instead have discovered why the course was failing one in three students, and tried to fix it?

Prusty and his team did the latter and started by identifying “threshold concepts,” a term Jan H. F. Meyer and Ray Land introduced in 2003 that refers to core concepts that, once understood, transform perception of a given subject. After identifying the course’s threshold concepts, Prusty and his team designed adaptive tutorials to teach engineering students what they needed to know.

Prusty’s adaptive tutorials are a form of smart digital homework. Each takes students about an hour or two to complete as they solve problems with interactive simulations and receive feedback based on what they do.

For example, students learn how to analyze the mechanical forces that act on the beams of a bridge by designing a bridge, driving simulated cars across it and checking within the simulation whether the forces they calculated were accurate. The system is “intelligent” because it can provide feedback based on the specific mistakes a student makes (called “adaptive feedback”). If the tutorial detects that a student would benefit from more examples or content, it dynamically changes the activity to show that content (called “adaptive pathways”).
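
To make those two mechanisms concrete, the branching logic can be pictured as rules keyed to specific wrong answers. The Python sketch below is my own simplified illustration, not Smart Sparrow’s actual implementation; the beam-reaction check, the tolerance and the pathway names are invented for the example.

```python
# Simplified illustration of adaptive feedback and adaptive pathways.
# The misconception checks, tolerance and activity names are invented.
def check_beam_reaction(submitted: float, correct: float, load: float):
    """Return (feedback, next_activity) for a student's support-reaction answer."""
    tol = 0.01 * abs(correct)
    if abs(submitted - correct) <= tol:
        return "Correct -- the reactions balance the applied load.", "next_problem"
    if abs(submitted + correct) <= tol:
        # Adaptive feedback: a sign error suggests a flipped force direction.
        return ("Check the direction of the reaction force; the magnitude is right "
                "but the sign is flipped.", "retry")
    if abs(submitted - load) <= tol:
        # Adaptive pathway: assigning the whole load to one support signals a
        # deeper misconception, so branch to extra worked examples.
        return ("It looks like you assigned the entire load to one support. "
                "Review how loads are shared between supports.", "remedial_examples")
    return "Not quite -- revisit the equilibrium equations.", "retry"

print(check_beam_reaction(submitted=-12.5, correct=12.5, load=25.0))
```

The shape of the logic is the point: targeted messages for recognizable mistakes, and a branch to different content when a mistake signals a deeper misconception.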

Prusty and his team designed four adaptive tutorials in all, delivered to students weekly and targeting the course’s threshold concepts and students’ common misconceptions.

It worked. Not only did students begin to enjoy doing homework -- an achievement in its own right -- but they also performed better in the course’s assessments. Prusty’s team did not stop there, however. They analyzed the way students learned using these adaptive tutorials, noticing what worked and what didn’t, and then improved the tutorials. Over time, Prusty’s team built and introduced eight more adaptive tutorials.

The result? After a few years, the failure rate dropped to 5 percent. That happened with the same course, syllabus and final exam, even as enrollment grew by 70 percent. The only difference was the number and quality of the adaptive tutorials used.

Prusty replicated the process in another course (a more advanced course in mechanics of solids), and the failure rate dropped from 25 percent to 5 percent.

Now imagine that we had instead used predictive analytics to identify failing students. What would we have done? We probably would have found a clever way to identify the students likely to fail the course and gently suggested alternative degree programs. But would that have been the ethical thing to do?

Put another way, if you have a course with a high failure rate, should you use technology to predict who’s going to fail and alert them? Or should you fix the course? The former will improve your institution’s graduation rates; the latter will have you trying to convince your faculty to address the issue.

Which one is easier? Which one is more ethical? What happens when student success and institutional outcomes conflict?

It is all too easy to design Band-aid solutions to higher education’s completion crisis while ignoring more complex problems -- such as courses that are simply not good enough -- when we have an opportunity to redesign them entirely. Predictive analytics and adaptive learning are two sides of the same coin, but we will fall short of true improvement if we stop at analytics.
