
To the editors:

In their Feb. 3 essay, members of the National Council for Online Education argue that online courses—properly done—are at least as good as in-person courses. As evidence, they link not to a study or meta-analysis, but to a database of papers, which is somewhat akin to my making a medical claim followed by a link to PubMed, except in this case the database was specifically designed to be biased. It’s literally named the “No Significant Difference database,” and its belated claim to solicit studies that do show a significant difference seems a little disingenuous.

It currently holds 141 studies showing no significant difference, 51 showing online better, and zero showing classroom better or mixed results. At a typical p<0.05 significance threshold, we'd expect those latter counts to be nonzero in a fair database purely from random noise, even if there were indeed no true difference.
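To make that expectation concrete, here is a back-of-envelope calculation of my own (an illustration, not anything from the database itself). It assumes, generously, that every one of the 192 studies tested a true null at a two-sided p<0.05:

```python
# If a study's null hypothesis is true, it spuriously lands in each
# "significant" direction ("online better", "classroom better") with
# probability ~0.025 under a two-sided alpha of 0.05.
n = 141 + 51        # 192 studies currently in the database
p_dir = 0.05 / 2    # chance a null study spuriously favors one direction

expected = n * p_dir           # expected spurious "classroom better" studies
p_zero = (1 - p_dir) ** n      # chance of seeing zero, if the null held everywhere

print(round(expected, 1))  # 4.8
print(round(p_zero, 3))    # 0.008
```

Even under the most charitable assumption of no true effect anywhere, a fair collection of 192 studies should average roughly five spurious "classroom better" results; a count of zero has well under a 1 percent chance of occurring by accident.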

But I think the real issue to hit proponents of online courses in the past couple of years is that, for the first time on a large scale, assignment to online courses was effectively randomized (often by university or state). Many institutions have taught both online and face-to-face classes for years, but few had forced students into online courses. Students studying online were therefore self-selected, which violates the first rule of testing the efficacy of anything: randomize your sample. At my own university, a number of students in my face-to-face classes had tried online courses, disliked them, and specifically chosen in-person classes. It's little wonder such students were unhappy or underperforming when forced back online.

It's certainly true that there's a real difference between courses carefully planned to be online and courses abruptly forced to be remote. What's telling to me, though, are reports that the courses least popular with our suddenly online students were those that had been online all along. Professors who had taught online for years were surprised that their best-practices asynchronous courses suddenly attracted complaints in a way the Zoom-my-lecture classroom-simulacrum courses didn't. We know learning gains and student satisfaction aren't perfectly correlated, but this does highlight the self-selection issue.

In April 2020, it was fair to say many of the “online” courses weren’t well designed. However, it’s rather bizarre to claim this in February 2022. If nearly two years of experience and training in how to design online courses, including universities making them all go through Quality Matters, doesn’t result in acceptable online courses, are we setting an impossible standard?

I think we all understand that the future will hold a mixture of in-person and online courses, likely with more online than before because of the flexibility it provides. The format works well for some students, and it is necessary to serve those with full-time jobs. Many professors who previously said they'd never teach online now see it as a realistic possibility.

What I’d like to see is proponents of online courses honestly confronting the fact that the format doesn’t work well for some students and for some courses. And I’d like them to throw out every study that didn’t randomize the assignment of modality.

--David Syphers
