
Confronted with a thousand learners in a massive open online course, would you be able to give each your personal attention? It seems a far greater challenge than trying to connect with dozens of dazed students in a giant lecture hall on campus.

Armed with data analytics, virtual courses offer an extraordinary feature: they are capable not only of mass communication but, counterintuitively, of mass personalization as well. Exploiting these seemingly contradictory qualities, MOOCs offer unparalleled global reach and access while remaining able to whisper into each learner’s ear electronically. MOOCs have excelled spectacularly at attracting millions to log on, but once students sign up, they have been less successful at communicating directly with each learner.

Computer scientists are now attempting to help faculty scale individual responses to massive numbers of MOOC learners. In a test of student coding skill in a machine-learning MOOC, the Next Gen team at Coursera, the biggest MOOC provider, first teased out some 20 to 40 coding errors learners commonly make. Students in an experimental section of the course no longer answer multiple-choice questions, as most MOOC learners do, but must write computer code instead. Immediately after viewing a video lecture, students open a browser that invites them to build a piece of software, demonstrating mastery -- did they actually understand the lesson, or did the video just float by, as if they were nodding off during a TV commercial?

If their submission reveals they’ve made a common conceptual coding mistake, a pop-up window appears with a clue, suggesting why they may have made the error -- “Like a friend looking over your shoulder, giving immediate feedback associated with your mistake,” said Coursera data scientist Zhenghao Chen, a member of the Next Gen team that devised the company’s new error-feedback loop. “Students should have a clear idea why they failed,” Chen said. “Feedback prompts them to correct their misconceptions, to think along different paths.”
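Coursera has not published the internals of this error-feedback loop, but the idea it describes -- match a submission against a catalog of known conceptual mistakes and surface the hint tied to whichever mistake fires -- can be sketched in a few lines. The error patterns and hints below are invented for illustration, not drawn from Coursera's actual catalog:

```python
import re

# Hypothetical catalog of common coding mistakes: each entry pairs a
# regex that detects an error pattern with the hint shown to the learner.
COMMON_ERRORS = [
    (re.compile(r"range\(len\(\w+\)\)"),
     "Hint: you can iterate over the list directly instead of indexing it."),
    (re.compile(r"==\s*None"),
     "Hint: compare with None using 'is', not '=='."),
]

def feedback_for(submission: str) -> list[str]:
    """Return a hint for every cataloged error pattern found in a submission."""
    return [hint for pattern, hint in COMMON_ERRORS
            if pattern.search(submission)]

# A submission containing two cataloged mistakes triggers two pop-up hints.
hints = feedback_for("for i in range(len(xs)):\n    if xs[i] == None: pass")
```

A production system would of course detect *conceptual* errors from program behavior or abstract syntax trees rather than surface regexes, but the feedback pipeline -- detect, map, whisper a hint -- has this shape.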

In the experiment, Coursera scientists are turning education on its head, asking learners to dig into their failures to appreciate why they are mistaken. Failure is rarely exploited to illuminate why learners go astray. Students are commonly measured by how well they master material, not by how they triumph after struggling to correct their mistakes. John Dewey, American pragmatist and champion of learning by doing -- the forerunner of active learning -- once remarked, “Failure is instructive. The person who really thinks learns quite as much from his failures as from his successes.”

Chen and his Next Gen team -- Ruty Rinott, Andy Nguyen, Amory Schlender and Jiquan Ngiam -- claim that motivation and perseverance, attributes honed by many in video games, are what learning is often about. In my new book, Going Online, I observed, “Students soon discover that learning is a gradual, often stumbling process that can lead down blind alleys, often hobbled by false starts. Marked by ruptures and dislocations, learning is a risk-taking exercise, not an elegant performance.”

Unlike most MOOC learners, those in the Coursera pilot machine-learning class are now required to perform real-time operations on the fly, as if they were playing a video game, actively engaged in their own knowledge discovery.

“Active learning is much more effective than just receiving lectures passively,” Chen said. “It helps with retention and avoids misconceptions learners stumble over when they don’t receive automated feedback.”

Chen says that his team’s approach increases persistence. Instead of dropping out in frustration, students who get positive feedback eventually succeed in solving problems.

“Learners are not asked to respond with a binary yes-or-no solution, but are asked to apply content from the lecture, reinforcing what they learned,” Chen said. “The method encourages learners to gain access to their own mastery, rather than being confused by what they believe they understand.”

The great Swiss developmental psychologist Jean Piaget long ago theorized that our thoughts are structured by schemas, frameworks of preconceived ideas about how the world works. Mired in deeply held mind-sets, learners may stay stuck no matter how many times they return to a video lecture or a textbook, failing, exasperatingly, to understand essential concepts.

A famous riddle illustrates conceptual blindness. A father and his son are in a car accident in which the father dies instantly. The son survives, requires surgery and is taken to a local hospital. Soon a surgeon enters the operating room and says, "I’m afraid I cannot operate on the boy."

"Why not?" a nurse wonders.

"Because he's my son," the doctor responds.

In a number of experiments, most people were so stumped by the puzzle that they failed to consider that the surgeon might be the boy’s mother. Similar blindness prevents students from recognizing their own misconceptions.

An early version of the MOOC now being tested by Coursera was first launched in 2011 at Stanford University. To everyone’s astonishment, an eye-popping 100,000 students enrolled. It featured quizzes and graded programming assignments, ultimately emerging as one of the top MOOCs ever -- over time, attracting a stunning 1.2 million learners.

The instructor of the legendary MOOC is Andrew Ng, former chief scientist at Baidu, the $12 billion Chinese web-service firm, often called the Chinese Google, one of the largest internet companies in the world. In 2012, Ng co-founded Coursera and is now chairman.

Admittedly, my enthusiasm for the Next Gen active-learning pilot may be premature. So let’s wait until results are in.

Courting Big Data

Coursera is not alone in exploiting big data and other advanced techniques. For example, Sense, a New York-based tech start-up with R&D labs in Tel Aviv, is testing pattern-recognition and semantic-analysis methods that automatically bundle student answers exhibiting common solutions. In a MOOC with hundreds or even thousands of students, automatic batching lets faculty direct responses to groups of learners who gave similar answers, personalizing faculty-student interaction at scale.

Just as in the Coursera approach, instructors can deliver feedback to the resulting clusters. In contrast, however, Sense does not require faculty or subject-matter experts to seed the system with examples of, say, common errors, as the Coursera experiment does. With Sense, instructors may feed in 50 or more new quiz solutions at any time; the system then automatically reveals common patterns -- successful responses, common mistakes, even novel solutions -- shared among submissions. Like Coursera’s system, Sense can interpret not only text but also computer code, as well as mathematical equations.
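Sense’s actual semantic-analysis methods are proprietary; purely as an illustration of the bundling idea, here is a toy greedy clusterer that groups answers by surface text similarity (using Python’s standard-library `difflib`), so an instructor could reply once per cluster instead of once per student. The threshold and sample answers are invented for this sketch:

```python
from difflib import SequenceMatcher

def bundle_answers(answers: list[str], threshold: float = 0.8) -> list[list[str]]:
    """Greedily group answers whose text similarity exceeds a threshold.

    Each answer joins the first existing cluster whose representative
    (first member) it resembles closely enough; otherwise it starts a
    new cluster.
    """
    clusters: list[list[str]] = []
    for answer in answers:
        for cluster in clusters:
            if SequenceMatcher(None, answer, cluster[0]).ratio() >= threshold:
                cluster.append(answer)
                break
        else:
            clusters.append([answer])
    return clusters

# Two near-duplicate answers bundle together; the unrelated one stands alone.
groups = bundle_answers([
    "gradient descent minimizes the loss",
    "gradient descent minimises the loss",
    "it overfits the training data",
])
```

A real system would cluster on meaning rather than spelling -- "semantic analysis" in Sense’s terms -- but the instructor-facing payoff is the same: one piece of feedback fanned out to every member of a bundle.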

In choosing one of Coursera’s most prized MOOCs, the Next Gen team is departing from just delivering conventional video-streamed lectures -- an exhausted pedagogy that has long since outlived its sell-by date. Instead, in their radical experiment, they are embracing a far more innovative active learning style -- including digital interactive modules, like the computer code exercise -- certain to lift students off their binge-watching couches, challenging them to face their screens to act and not glaze over.

With millions enrolled in MOOCs, it’s an unprecedented opportunity to demonstrate the superiority of learning by doing, a pedagogy not commonly practiced on campus, either. In addition to offering serious scholarship from some of the best minds in the world, it’s time for MOOCs to act responsibly and follow the Next Gen team’s lead. Failure to do the right thing is no success at all.
