The quest to figure out just how much the COVID-19 pandemic affected college-level learning is understandable, not least so that colleges and universities can address any potential setbacks students have suffered as many hope to return to more "normal" learning environments this fall.

With that goal in mind, more researchers will probably try to follow the lead of economists at Auburn University, the University of Southern Mississippi and American University, who published a working paper through the National Bureau of Economic Research this week, in which they use a large-scale data set from one public research university to compare how studying in person and online affected students' course completion rates and grades before and after the pandemic.

They find that when accounting for certain differences in student and instructor traits, students in face-to-face courses "perform better than their online counterparts with respect to their grades, the propensity to withdraw from the course, and the likelihood of receiving a passing grade." The researchers say their findings hold steady both before and after the pandemic descended in spring 2020.

That leads them to title their paper "Is Online Education Working?" and, by and large, their answer is no.

Shanna Smith Jaggars, assistant vice provost of research and program assessment in the Office of Student Academic Success at Ohio State University, described the paper as that rare "rigorous" study of online learning with a large sample size, making it a "welcome addition" to the literature. She said the paper's findings that students with weaker academic backgrounds struggle more in virtual courses, and that grades were inflated in the mostly remote spring 2020 semester, were persuasive.

But several experts who study learning in multiple modalities say the study has methodological flaws and greatly overreaches in its conclusions, which they attribute to the researchers' lack of knowledge about, and possible bias against, online education.

They're particularly troubled by the sections of the study's findings related to the pandemic, which "do not acknowledge that this occurs during a pandemic, and these are not 'normal' online courses," said Jeff Seaman, director of Bay View Analytics and one of the foremost researchers on educational technology.

The Research

Duha T. Altindag, an associate professor of economics at Auburn and the study's lead author, said the onset of COVID-19 motivated the researchers to revisit the long-standing debate about the efficacy of online versus in-person education, given predictions that higher education's broad (if temporary) pivot to remote instruction would lead to a wider embrace of virtual learning in the future. (The co-authors are Elif Filiz, assistant professor of economics at the University of Southern Mississippi, and Erdal Tekin, a professor of public policy at American University.)

"Given this prospect," the authors write, "it is all the more important to have a complete understanding of the impact of online instruction on student learning in general and during the COVID-19 pandemic in particular."

To try to provide that understanding, the authors compare data on the performance of about 18,000 students in fully in-person versus fully online courses at an unnamed "medium-sized, public [Research 1] university" from the spring 2019, fall 2019, spring 2020 and fall 2020 semesters. Their measures of student "learning outcomes" (course completion and grades) don't reflect what, if anything, students actually learned, but that limitation is common to most research on higher education outcomes.

Looking just at the performance of students in the pre-pandemic semesters (spring and fall 2019), the researchers found no substantive differences in completion rates, but they report that students in face-to-face courses were between five and seven percentage points less likely to earn a high grade (A or B) than were their peers in online courses. That gap in final grades between online and face-to-face students narrowed in spring 2020, though, as even the "face-to-face" students ended up in "emergency remote" courses.

That's unsurprising, the authors note, given that many institutions and individual instructors -- recognizing the damaging impact the pandemic had on student health, mental health and other matters -- adopted more flexible policies on such things as grading, assignments and attendance.

The researchers surmised that the apparent advantage for online students over their face-to-face peers might reflect certain external factors, "such as grade inflation caused by lenient grading by instructors teaching online courses or more widespread violation of academic integrity in these courses." So they applied statistical controls designed to account for that heterogeneity -- "student and instructor fixed effects" -- to the data.

One prominent source of variation involved instructors' grading policies. When the researchers account for differences in those policies, they report, the results flip: "students in face-to-face courses in fall 2019 were 2.4 percentage points less likely to withdraw from their course and 4.1 percentage points likelier to earn a passing grade." (Other results related to grading were statistically insignificant.) That leads them to conclude that "instructors teaching online courses may be more lenient in their approach to grading than instructors teaching F2F courses."
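
To make the "fixed effects" approach concrete: the specification adds an indicator (dummy) variable for every student and every instructor, so the estimated effect of online delivery comes from comparing outcomes within the same student, and within the same instructor, across modalities. Below is a minimal, hypothetical sketch of such a regression in Python; it is not the authors' code, and the synthetic data, coefficient values and column names are invented purely for illustration.

```python
# A hypothetical two-way fixed-effects regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # invented course-enrollment records: one row per student-course

df = pd.DataFrame({
    "student_id": rng.integers(0, 300, n),    # students take many courses
    "instructor_id": rng.integers(0, 60, n),  # instructors teach many sections
    "online": rng.integers(0, 2, n),          # 1 = fully online section
})
# Simulated outcome: grade points driven by student ability, instructor
# grading norms, noise, and a modality effect of -0.15 to be recovered.
df["grade"] = (
    3.0
    + 0.002 * df["student_id"]     # stand-in for per-student ability
    - 0.003 * df["instructor_id"]  # stand-in for per-instructor leniency
    - 0.15 * df["online"]
    + rng.normal(0, 0.3, n)
)

# C(...) adds a dummy for each student and each instructor, so the
# "online" coefficient is identified only from within-student,
# within-instructor variation across modalities.
fe_model = smf.ols(
    "grade ~ online + C(student_id) + C(instructor_id)", data=df
).fit()
print(fe_model.params["online"])  # approximately -0.15
```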

The authors apply a similar control to see whether the apparently higher grades of students in online courses might be caused by "students engaging in academic integrity violations if less strict monitoring by instructors leads to more cheating." To try to ferret that out, they use records from the university's online proctoring service, and they find that students in courses (face-to-face and online alike) whose instructors used the service for exams earned lower grades than did students whose instructors did not proctor exams.
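
In the same spirit, the proctoring analysis can be pictured as adding an indicator for whether a section's exams were remotely proctored to the specification sketched above (again invented, reusing the synthetic data frame from the previous snippet, and not the paper's actual model):

```python
# Hypothetical extension: flag sections whose exams used the online
# proctoring service, then re-estimate the two-way fixed-effects model.
df["proctored"] = rng.integers(0, 2, n)
proctor_model = smf.ols(
    "grade ~ online + proctored + C(student_id) + C(instructor_id)",
    data=df,
).fit()
print(proctor_model.params[["online", "proctored"]])
```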

Lastly, to try to gauge how students' academic "quality" affects their performance in online versus face-to-face courses, the researchers compare students from the university's honors program to its other students. They find that honors students perform equivalently no matter the instructional delivery mode, whereas non-honors students perform better face-to-face.
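
That comparison amounts to a subgroup analysis. Continuing the invented sketch above (honors status is treated as a fixed per-student attribute here; none of this reflects the paper's actual specification):

```python
# Hypothetical subgroup check: estimate the modality gap separately for
# honors and non-honors students, controlling for instructor effects.
honors_ids = set(rng.choice(300, size=60, replace=False).tolist())
df["honors"] = df["student_id"].isin(honors_ids).astype(int)
for flag, sub in df.groupby("honors"):
    m = smf.ols("grade ~ online + C(instructor_id)", data=sub).fit()
    print("honors" if flag else "non-honors", round(m.params["online"], 3))
```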

Response From the Field

The great majority of the scores (if not hundreds) of studies examining the comparative performance of online versus face-to-face learning have found "no significant difference" in student outcomes. But the topic remains contested enough that any well-designed piece of research (or even some flawed ones) will spur discussion and debate.

The NBER study is worth paying attention to, Jaggars of Ohio State says, because "there are only a handful of rigorous studies of online learning with large sample sizes," and this is one. Jaggars says the study largely reinforces previous findings that more academically qualified students fare better in online courses than do their less prepared peers, and she describes the assertion about student cheating possibly driving grade inflation as "interesting" but unreliable because the authors had limited data to draw on.

Others who reviewed the study had much harsher assessments.

Seaman, of Bay View Analytics, said the parts of the study related to what happened during the pandemic should be "totally ignored," because the researchers -- in comparing data from that period to what happened before it -- failed to account for the enormous differences.

"We know from multiple other research (ours included) that most faculty who taught online during this period had never done so previously and had to move online without time to plan," Seaman said via email. "We also know that they said that they were under considerable stress because of this. The number one concern for institutions at this time was student stress. Yet, the discussion reads as if these are normal times, and the conclusions could be applied in general."

Deb Adair, president of Quality Matters, a nonprofit group that focuses on improving and ensuring quality in online education, said the paper places significant emphasis on the "instructor and student fixed effects," yet "they are neither operationalized nor fully discussed," leaving readers in the dark about them.

Adair also said the researchers make assumptions that reveal either ignorance or bias.

For instance, in comparing the approaches instructors used in face-to-face and online courses (including instructors who taught in both modalities), "their assumption … is that an instructor, or a student, would approach teaching and learning the same way regardless of whether the course is F2F or online," and that any difference in outcomes "must be due to the modality," Adair said via email. "What is critically missing is the strong impact of online course design and instructor training in online learning (or lack of these things) that could account for the differences … There are other things, of course, that could account for differences between an instructor's student outcomes in F2F vs. online, especially the way the institution does, or does not support online education."

And attributing differences in grades in online and in-person courses to "instructor leniency and academic integrity lapses in the online course" fails to account for purposeful, and often sound, differences in the two modalities. "Well-designed online courses may have entirely different types of assessment than the face-to-face counterpart -- substituting midterm/final exams with authentic assessment," Adair said. "I should also point out that well-designed courses using authentic assessment instead of midterm and final tests will not need to have remote proctoring. Cheating, in this sense, is a nonissue. Instead, the authors assume the lack of remote proctoring is because instructors are being less vigilant about cheating and not because they are using better research-informed assessment practices."

Asked to address some of the critiques of the paper, Altindag, the Auburn economist, conceded that data drawn from the spring 2020 pivot to remote learning should be looked at differently from the other data in the paper.

"But this isn't about COVID, this is about whether online education over all works," he said. "I'm confident that if we eliminated the spring data, we would obtain the same conclusion. If you eliminated that COVID year, I wouldn't change anything about the text."
