Blackboard is planning to introduce a new feature in its learning management system later this year to help instructors grade students’ participation in class discussions online. The feature, called the “discussion forum recommended grade,” will use computer algorithms to analyze students’ posts in class discussion forums.

John Whitmer, learning analytics and research director at Blackboard, said that instructors want to use discussion forums to judge students’ participation, but that doing so is time-consuming and difficult since the forums were designed for discussion rather than assessment. These forums are often underused, said Whitmer, because students have little incentive to engage.

By using this new Blackboard tool, instructors will be able to quickly see which students are participating online, said Whitmer. He stressed the tool is not an “autograder” but a “grading assistant” that will relieve instructors “of many tasks of evaluating the quantity of participation, so that they can focus their assessment on the deeper value and meaning in student work.”

The tool combines several assessment techniques, said Whitmer. These include the Flesch-Kincaid readability index, which gauges the complexity of students’ language from the number of syllables per word and words per sentence, and the type-token ratio, which measures how often words are repeated. A standard word count is also used, along with a “critical-thinking coefficient,” which “classifies words according to the degree of critical thinking represented,” said Whitmer. A weighted formula then combines these measures into a grade, which faculty must review and approve before submitting.
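Blackboard has not published the formula, the word classification behind the critical-thinking coefficient, or the weights it applies, so the sketch below is purely illustrative. It computes a word count, a type-token ratio and a Flesch-Kincaid grade level, substitutes a simple cue-word ratio for the undisclosed critical-thinking coefficient, and combines them with hypothetical weights into a score for the instructor to review.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level from words per sentence and syllables per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

def type_token_ratio(text: str) -> float:
    """Unique words divided by total words; lower values mean more repetition."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return len(set(words)) / len(words) if words else 0.0

# Hypothetical stand-in for the undisclosed "critical-thinking coefficient":
# here, simply the share of words drawn from an illustrative cue-word list.
CRITICAL_THINKING_CUES = {"because", "therefore", "however", "evidence", "whereas"}

def critical_thinking_coefficient(text: str) -> float:
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return sum(w in CRITICAL_THINKING_CUES for w in words) / len(words) if words else 0.0

def recommended_grade(text: str) -> float:
    """Weighted combination of the four metrics; the weights are invented for illustration."""
    word_count = len(re.findall(r"[A-Za-z']+", text))
    score = (
        0.3 * min(word_count / 100, 1.0)                      # normalized post length
        + 0.2 * min(flesch_kincaid_grade(text) / 12, 1.0)     # normalized reading grade
        + 0.2 * type_token_ratio(text)                        # vocabulary variety
        + 0.3 * min(critical_thinking_coefficient(text) * 10, 1.0)
    )
    return round(100 * score, 1)  # a 0-100 suggestion for the instructor to review
```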

The tool, due to be rolled out later this year, will be a core feature of Blackboard Learn Ultra, said Whitmer. It will be available to Ultra users at no additional fee. “Blackboard views these types of ‘embedded analytics’ (which we already provide in other assessment workflows) as a key part of the Blackboard Learn Ultra course experience,” said Whitmer.

The feature, still in the development phase, has not yet been tested, but a version of it is available within X-Ray Learning Analytics for Moodlerooms, the open-source-based LMS, said Whitmer. “Faculty have been very excited and interested in the tool,” said Whitmer, who added that the feature is “one tool in their toolbox to help determine grades.”

In interviews, faculty members who have spent time thinking about teaching writing, grading and technology had mixed reactions to Blackboard’s proposal.

Patricia James, program consultant at the California Community Colleges’ Online Education Initiative, said that she thought the Blackboard feature was one that a lot of instructors would like to have, but she worried that it could discourage instructors from interacting with students in class discussion forums. She also suggested that Blackboard could redesign its grading platform to make it easier to view student comments, making the process less laborious. James noted she found it much easier to review student comments in Canvas than in Blackboard, as Canvas allows instructors to see all of a student’s comments on one screen and easily check responses in the context of the forum.

Kyle Bowen, director of education technology services at Penn State University, agreed that the assessment tool could be useful to some instructors, but not to all equally. “It’ll come down to that individual teacher’s approach to evaluating student writing,” he said. “If the number of words being used, or the readability of the writing, are essential pieces of assessment, then yes, having that information would be helpful. But if those aren’t critical factors then it will be less useful.”

Bowen said that while most of the assessments mentioned by Whitmer sound like straightforward measurements, he would like to know more about how the critical-thinking coefficient is calculated, as well as the exact weighting of the formula used. While the technology is promising, Bowen said he would prefer to see it used to identify issues being discussed in class forums, which could then inform instructors’ teaching. Bowen also questioned how the tool would tell the difference between operational comments like “will this be on the final?” and posts that actually demonstrate students’ understanding of material.

Carolyn Penstein Rosé, a professor at the Language Technologies Institute and the Human-Computer Interaction Institute at Carnegie Mellon University, said that she also had some reservations about Blackboard’s proposed feature.

One of the aims of the feature is to encourage more students to participate in class discussions online, but Rosé said that Blackboard’s approach “could potentially result in students gaming the system, rather than meaningfully engaging with their fellow students.”

Gaming the system has been shown to be a problem with standardized writing assessment in the past, as was demonstrated by former MIT professor Les Perelman, who coached his students to write nonsensical but high-scoring SAT essays. Perelman, an outspoken critic of automated writing assessment, told Inside Higher Ed he thought the Blackboard tool “sounds like a Rube Goldberg machine” -- a fantastical and deliberately complex device designed to solve a simple problem.

Though Whitmer says that Blackboard will not be telling students how exactly their contributions will be assessed, Rosé said that students could quickly figure it out. The best way to motivate students to take part in class discussion forums, said Rosé, is to integrate them into the instructional activities of the course -- for example, by making online discussion a necessary part of working on an assignment, or by highlighting good posts in lectures. Perelman agreed, saying he did not feel Blackboard’s approach would encourage genuine participation from students.

But Whitmer disagreed that gaming would be an issue. “Given the use of multiple factors in our model, it would be difficult for students to manipulate scoring in ways that weren’t obvious in faculty reviews of student forum grades, which, as mentioned before, are required before any grade is posted,” said Whitmer.

Rosé, who has conducted research into how discussion can be meaningfully assessed using automated techniques, said that she did not have confidence that the assessments mentioned by Whitmer were appropriate for determining the quality of students’ responses. Perelman agreed, saying, “The specific metrics that they’re using seem antithetical to the kind of writing that most teachers want students to do in discussion pages online. What teachers usually want people to do is to respond to other people’s ideas and to have arguments in an informal way. What this does is encourage pretentious prose.” Asked whether this kind of technology could ever meaningfully assess student writing, Perelman said that computers would first need to pass the Turing test -- a feat we are still “far away from,” he said.

Rosé took a more generous view, however, suggesting that automated assessment of class discussions could be done successfully if the assessment were tailored to the learning objectives of each course, but a one-size-fits-all approach “seems misguided and potentially dangerous,” she said.

Jesse Stommel, executive director of the division of teaching and learning technologies at the University of Mary Washington, and founding director of the Hybrid Pedagogy journal, said that he found Blackboard’s proposal to evaluate student writing in this way to be alarming. “There is certainly space for technology to help us create dialogue in an online class, but using a technology to assess the success of a discussion, ultimately it reduces student engagement to a rote series of behaviors. ‘Write a comment of 60 words, citing two sources, responding to at least one of your classmates’ -- those kinds of behaviors do not make a discussion successful. They’re arbitrary markers.”

Whitmer stressed however that the tool is not meant to be comprehensive. “Our focus is to provide a tool to assist the human leading the class, not replace them,” he said. “Of course, there is some risk with any automation, but we believe that the benefits of increased feedback outweigh these risks.”
