
Does anyone really know the definition of a "good" assessment? Does such a thing even exist?

The second question has no clear answer (which means it's not likely to show up on an assessment any time soon). As for the first, the jury's probably still out, but Dianne Conrad and Jason Openo are closer than most. Openo serves as director of the Centre for Innovation and Teaching Excellence at Medicine Hat College in Alberta, Canada, and Conrad teaches in Athabasca University's Centre for Distance Education. Conrad has long wanted to write on this topic, and in early 2016 she convinced Openo, then a doctoral student with similar perspectives on assessment, to join her. The final product is Assessment Strategies for Online Learning (Athabasca University Press), which came out this summer.

"Inside Digital Learning" asked Openo and Conrad to share their thoughts on the state of assessment and the potential for its evolution. Their comments have been lightly edited for length and clarity.

Q: Did you find in the process of researching the book any particularly notable or innovative examples of online assessment?

Conrad: In short, no, although [other academics quoted] in the Appendix shared ideas and techniques that we both use and support. In fact, in Chapter 7 on authentic assessment, we searched out examples of assessments from online courses and demonstrated how they could be better structured to reflect authenticity and to take advantage of the affordances of the online environment. Online learning offers exciting assessment possibilities, but they are often not taken up by instructors who are not well versed in managing online teaching.

Openo: I disagree with Dianne a little bit here, but the crux of the answer rests on how you define notable or innovative. I think our reflections from the field give a good indication of conscious alignment between instructional beliefs and instructional practice. Ellen Rose talks about her challenges with online discussions and her introduction of self-assessment into the process. Terry Anderson talks about using voice marking in PDFs to provide more personalized and humane feedback to students. This can also be achieved with the Marco Polo Walkie Talkie app that I am planning to use in my next course. Archie Zariski [at Athabasca] is using a technological tool, Audacity, to support oral assignments, whereas Beth Perry [at Athabasca] is using a pedagogical strategy, negotiated assessment, to work one on one with her students.

Getting students involved in creating open educational resources, building e-portfolios and using social networks for educational purposes are all notable and innovative. But the most innovative part is the intentional, creative way that instructors are using technology to support their pedagogical beliefs. And what I hope I see in this collection of reflections from the field, and in the conversations I have had since with faculty members, is a true paradigm shift in assessment practice toward more authentic assessments that ask students to bring their whole selves to work on relevant problems.

Q: What are the most pressing concerns you hear from faculty members regarding assessment?

Openo: The most pressing concern I hear from faculty is the need for professional development in the area of assessment. Behind this is a pressing concern for quality, equity and fairness. Assessment has consistently been identified as one of the most important needs in faculty development surveys, and despite the complexity involved in designing and conducting research on it, faculty development has a cumulative impact on teaching, including improved assignments. Dianne suggests that if an instructor has a good grasp of teaching techniques, assessment shouldn’t be difficult. But technological pedagogical content knowledge represents a pretty phenomenal and underappreciated skill set.

Here it’s necessary to expand this discussion to include contingent faculty. For many institutions, online education represents a cost-cutting tool and a way to undercut existing faculty. As Adrianna Kezar has written, faculty development can go a long way toward eliminating the worst aspects of the growing adjunct situation, but it cannot resolve the long-term structural issues, and this should matter to higher education administrators truly concerned about online educational quality. In addition to the human costs of working in a state of precarity, there are costs to educational quality as well.

In a large study of Washington State students who took online courses, all student groups showed learning gaps, and when Mueller, Mandernach and Sanderson (2013) compared persistence rates for online students taught by full-time and part-time instructors, they found that students taught by full-time instructors were more likely to complete the course. They also suggest that adjunct faculty may grade more leniently because of perceptions of job insecurity. The finding that part-time faculty forgo learner-centered assessment practices or grade more leniently out of concern for their jobs is supported by other researchers, and the issues contingent faculty face may be exacerbated for online instructors. If part-time adjuncts are indispensable but invisible, then online contingent faculty are doubly invisible because they work away from the brick-and-mortar institution.

Faculty development appears to be a key component of successful online education; good assessment practice has to be a big part of that curriculum, and those development opportunities must account for the unique working conditions online faculty may face. A lot more research is needed in this area.

Conrad: In my current role, I don’t hear much about this from faculty. As a matter of fact, I have never heard faculty talk much about assessment in all my years of teaching!

Q: What technology tools could be most helpful for online faculty members looking for a different approach to assessment?

Conrad: Assuming a web platform, such as Moodle, the field is wide open to various technologies, but what must come first is a sense of how the technology will enhance the pedagogy being used. So, pedagogy first; technology second. In our field, bells and whistles dominated in the early years. We understand now, better than before, that technologies are tools to be applied mindfully and appropriately. That said, teachers looking to integrate technology via social media into assignments and assessment have almost unlimited choices: Facebook, Twitter, wikis and blogs can support group meetings and text-based exchange among learners. There are so many new animation programs that learners can draw on (literally): GoAnimate, Powtoon, Prezi … I learn more every time I assign a media-based project. The issue here for teachers is to get away from the notion of assessment as a Word document. Introduce project-based assessment that requires media. Let the learners choose their medium (or provide a range of choices). My experience, at the graduate level, is that they love this. The creativity manifested is astounding. But don’t lose sight of the academic nature of the assignment.

Openo: There is a saying that the best tool is the one that people will use. Don’t let the learning management system be a walled garden that constrains you. Assemble a range of sites, apps and networks that are interoperable or compatible with the LMS to achieve specific pedagogical purposes that encourage agency and expression. In online contexts, engagement is absolutely essential, and these tools need to create deeper and more meaningful human interaction. One interesting example, Real Tweets From WWII, shows how Twitter can be used to teach history and creative writing. There is a panoply of applications, tools and communities that can extend the digital learning environment, and all of these can become evidence for assessment. Dianne’s point about creativity is crucial. Student products are the greatest proof of educational quality, and the level of creativity allowed by technology is something instructors consistently talk about.

Q: What conclusions did you draw about the most effective techniques for assessing students’ learning?

Openo: Most of my favorite lines from the book are ones that Dianne wrote, including the line that positivist approaches to education, which objectified learning, are giving way to constructivist views. We did not draw conclusions about the “most effective” techniques. We have advocated for a greater embrace of pedagogies of engagement, and we have covered a lot of territory on how to think about that and accomplish it.

Here’s my big conclusion -- assessment sucks. This was well expressed in another of my favorite rants on the subject of assessment. I don’t actually think it’s much of a secret anymore that most instructors would describe assessing their students as a necessary evil. There has to be a better way, and that was the motivation for writing the book. How can we make this process more meaningful to our students and to us as instructors? How can we fully achieve the potential online learning represents? In online learning, assessment plays a key role in moving from low-end elearning to high-end elearning (Duus, 2009). Low-end elearning is characterized by content transfer, where internet-based communications technology is used for the transfer of knowledge -- or mode of delivery, as in pizza delivery by drone. High-end elearning requires the creation of new knowledge through the creative use of technology -- or delivery as liberation. Internet-based communications technologies are best employed when they maximize student engagement in knowledge creation and liberate students from constraints of time, distance and assessment. Maybe not most effective, but hopefully more effective (or at least more engaging).

Q: What are the end goals of better assessments? How can faculty members harness the results from more effective assessments to improve their teaching?

Conrad: The end goal of better assessment is better learning. Using the assessment tool as a learning tool, rather than a “jump-through-the-hoop” activity or a measurement exercise, can enhance the learning experience. Growth and learning can occur through the assessment activity -- rather than it just serving as a regurgitation or rehash of already learned material. In my master’s program at a major research university, I had only one exam to write. The class objected strenuously, but the prof was set in his ways. So we sat down to write a three-hour totally regurgitative exam. I was so exhausted from writing booklets and booklets of material that I eventually gave up and left. Apparently, I had “thrown up” enough on the pages, because I did well in the course. This activity was not productive.

Openo: The end goal of better assessments is better learning, and better learning about teaching in online contexts. Faculty members can harness the results from experimental assessments through discipline-based education research, the Scholarship of Teaching and Learning (SoTL) and the Scholarship of Technology-Enhanced Learning (SoTEL). Rightly or wrongly, there is increased emphasis on evidence-based teaching practice. By turning toward these forms of scholarship, instructors turn their teaching into “community property,” as Shulman once said. There is so much we still don’t know about the complexities of teaching and learning with technology and about optimal course designs, and it’s a rapidly evolving field.

What I find most interesting about the impact of teaching online is that we believe teaching online is causing a “pedagogical renaissance” (we won’t say revolution). The proof of this is that seven in 10 faculty members who have taught an online course say the experience helped them improve their teaching both online and in the classroom. Teaching online is deepening our understanding of teaching in general, and we can harness this through any number of research approaches, like action research, design-based research and course-based analytics.

Q: Why do you think assessment is one of the most challenging aspects of the academic experience to navigate in an online context?

Conrad: Assessment is always one of the most challenging aspects of academia regardless of delivery mode. It’s often the “tack-on” in the course design, especially if the course design is being executed by a content expert with no educational training (which often happens). Once we move into the online context, what we often see is a holus-bolus import of the face-to-face version of the course to the online medium. This, as a basic premise, is a bad idea for many reasons, but the assessment piece will also suffer because (regardless of its initial quality) it has not taken advantage of the opportunity to become something better.

Online assessment, assuming the teacher has a firm grasp of good online teaching techniques, should not be difficult at all; online media offer so many opportunities to create interactive, authentic assessments that engage learners in continuing their course’s learning journey rather than presenting assignments as barriers or hoops to clear on the way to completing the course.

The book offers suggestions as to the creation of authentic online assessments, not in a “how-to, here’s-a-list” manner but in a solid pedagogical manner: “here’s what learning is, here is how authentic assessment fits in.”

Openo: I largely agree with Dianne, but I’d like to add a couple of observations. If we transport all of the traditional challenges involved in assessment to an online setting, we only make chronic challenges more difficult. We see this with online testing and lock-down browsers, and the International Center for Academic Integrity’s day against contract cheating [was] on Oct. 17. It’s a technological arms race to stop cheating on the most common forms of assessment: exams and term papers. It’s going to be a cat-and-mouse game unless we change the game.

The biggest challenge, for me, is disaggregating assessment as learning and assessment for learning from assessment of learning. As long as grades exist, students will bend toward focusing more on the letter than the learning, and these structural limitations are intrinsically problematic. I enjoyed Dan Houck’s recent piece on his personal refusal to be part of this hopelessly reductive process, which is why we spend so much time in the opening chapters differentiating between assessment and marking. Houck expresses well the philosophical opposition to grading, and it’s hard but necessary to separate grading from assessment. Within those structural limitations, however, we still think there is improvement and progress to be made.

As Dianne has said, assessment is frequently the last thing instructors want to think about. It’s subjective and inconsistent, and some of the most important higher-order skills, such as critical thinking, are hard to assess. It’s an exhausting process involving superhuman amounts of time, energy and effort. Just look at most faculty members at the end of a semester: assessment is the hardest work of teaching. Giving meaningful feedback to help students learn is hard and time-consuming; students will always want, and could always benefit from, more feedback, and figuring out how to provide it without exhausting yourself is a challenge.

When the online dimension is added on top of all this, it can make everything more difficult. That is why we think moving online is the time for a reset and for asking fundamental questions. What are we assessing? Why? How can student work provide the best evidence that learning has taken place? What tools are available that offer new alternatives? I love what Zawacki-Richter and Anderson (2014) say about teaching online: online instructors “bring many of the fears, inhibitions, and bewilderment of students when first exposed to the very different context of teaching in mediated and networked contexts.” Teaching online often puts experts in an uncomfortable place, and when that happens, they may not be in a position to be creative.

Q: Is there anything else from the book or your research that you want to emphasize?

Conrad: We are pleased with the book and its progression from basic beliefs through to the application of authentic and engaging types of assessment. We wrote it that way to emphasize the importance of a sound philosophical and pedagogical foundation for one’s teaching practice. Once we understand what we believe our purpose as teachers to be, we can search for constructive ways to communicate that end goal to learners. Assessments can form a part of that strategy when they are constructed to encourage authentic learning. The notion of “authenticity” is also very important, and the book dwells on that extensively. Authentic assessment, in part, prevents learners from spitting out rehashed materials because it encourages a practical understanding and appreciation of real-life tasks or problems that learners can relate to. Authentic assessment invites (forces?) learner engagement with the task or question at hand. These ideas are grounded in adult education principles -- and these foundations are also outlined in the book.

Openo: One of the things we openly wonder about is how much longer anyone will write a book about assessment in “online learning contexts.” Could there ever be a second edition with this title, recognizing the great convergence between online, blended and face-to-face? Online learning is rapidly expanding. It’s not new anymore. It’s getting better, and as Hicks observes, we have reached a point in time where all faculty need to have a level of competence with online learning technologies. It’s not optional, and it’s very dynamic. So how will the terminology, which has always been problematic, continue to shift, merge and emerge?

Bates suggests that the terminology struggles to keep up with what’s happening, and that’s true. How long will these problematic terms remain with us, and will the language change before practices change? For a lot of personal reasons, I am reluctant to use the word “disrupt,” and we avoided hype in the book, but when, and how, will new forms of credentialing and badging disrupt the current environment and bring more transparent evidence of student learning? It’s cliché, but writing the book raised more questions than answers, and for those who have been unhappy and dissatisfied with the way assessments have been done in postsecondary education, I think inventing new approaches is a very exciting area to be working in.
