
As reported by Inside Higher Ed’s Doug Lederman, at the recently held Academic Resource Conference, a panel of higher ed “assessment pros” had a “harsh take” on assessment.

My response as a non-assessment pro was essentially, “Thank god, hallelujah, what’s next?”

As a frontline instructor, my role in the larger assessment regime has been largely pro forma and somewhat mysterious. I have been asked to randomly collect artifacts that fit the “learning objectives” for the course – learning objectives imposed from somewhere above me[1] – and hand them over to some other body that does something to them, and then I do it again.

Assessment as practiced at the department and institutional level could not have been less relevant to my day-to-day work. 

Are students improving at writing? The answer is yes. 

How do I know? Because I do…and because students say so. 

Take my word for it. Except, of course, taking my word for it is apparently not enough.

Unfortunately, as the panelists discussed, direct measurements conducted for the purpose of program accreditation or monitoring do a poor job of detecting what students have learned. Learning is simply too complicated to shoehorn into a single test to measure its presence or absence.

I believe this is particularly true in writing. A student who enters a course as a highly competent writer will likely exit the course producing the work of a highly competent writer, and examining their artifact in isolation tells us nothing of importance about the class or program itself. 

Additionally, with writing, learning often comes with a delayed fuse: a notion once planted may not detonate for months or even years. I have experienced this as both student and instructor. A measurement at the end of a semester may not reflect the learning that’s yet to come.

“What’s next?” is the most compelling question because assessment is important. Embracing assessment as a continuous process in my courses is what led me to the writing and publishing of my two recent books on teaching writing. If we don’t assess, we can’t improve. 

“What’s next?” is a difficult question, but I have some thoughts to throw into the ring as we consider assessing learning in general, and assessing learning to write in particular.

 

What We Shouldn’t Do

First, we must stop doing what assessment experts and non-experts alike agree isn’t working. Pro forma activities meant to prove we’re doing assessment because accreditors require some system of assessment are a waste of everyone’s time. In the words of Molly Worthen, writing last year in the New York Times, this approach is “misguided.” The assessment professionals at the conference clearly agree.

It would also be a mistake to turn toward standardized assessments such as the CLA+, the tool that begat Academically Adrift, which in turn spawned a gospel repeated uncritically by journalists and others peddling a “college is broken” narrative. In reality, the CLA+ is an extremely limited diagnostic, and even the findings of Academically Adrift itself are open to complication and dispute.

 

Measuring the Learning Atmosphere

When I think about the importance of atmosphere, I’m reminded of a recent New York Times article by Erica L. Green on the I Promise School created by LeBron James in his hometown of Akron, Ohio. I Promise School is a public school, populated with students who were previously thought to be “irredeemable.”

The school has focused on providing foundational support to both the students and their families, including food and counseling for a myriad of problems, but most importantly, the school seeks to create an atmosphere of love and support, of community, of self-belief. The growth in test scores is described as “extraordinary.” 

As the I Promise School story illustrates, the first step toward helping students learn is providing an atmosphere in which learning is more likely to happen. We can (and do) measure this through straightforward things like class size and teacher-to-student ratio. We can also measure how engaged and supported students feel at school.

We should also be measuring the barriers which stand between students and a good learning atmosphere. For example, The Hope Center seeks to quantify how many college students are threatened by food and/or housing insecurity. We should keep track of how many students either delay or decline to buy textbooks because of cost. We should monitor how many hours students are compelled to work outside of college in order to pay for school and living expenses.

We should be measuring student mental health. UC Berkeley researchers released preliminary findings showing that the number of 18- to 26-year-old students who report having an anxiety disorder has doubled since 2008. Anxiety, depression, and other mental health challenges have obvious negative effects on the ability to learn. Recent work by Pew and the American College Health Association has found that school is a significant driver of student anxiety.

How much are students sleeping? How much access do they have to subjects like music, art, dance, and theater, which have been deemed peripheral and phased out, but which we know to be key to student development?

We can surely measure the learning atmosphere the same way we can measure the planet’s atmosphere. 

How conducive to life (and learning) are the places we send students to learn?

 

Big Picture Outcomes

Should we measure graduation rate? Salaries? Loan defaults?

How about happiness and well-being in addition to those measures, or even instead of them?

Recent research from Gallup found graduates who find “purpose” in their work are “almost 10x more likely to have overall wellbeing.”

Finding meaningful work is correlated not with major or grades, but with experiences like having an internship, or having a mentor during school who encouraged students and helped them set realistic expectations for the future.

This brings us back to atmosphere. We can measure how many students are having the experiences which correlate with future job satisfaction and overall wellbeing. In fact, these are very easy things to count.

 

Course-Level Outcomes

The best way to know if a student’s writing has improved is to ask them. Here’s the question I ask students; it’s very sophisticated:

Do you feel you’re a better/more confident writer than you were at the start of the semester? Why?

In my experience, students have an excellent sense of their own learning when it comes to learning to write. My students can articulate the many new strategies they’ve acquired for tackling new and unfamiliar writing experiences, as well as how they’ve gained insight into their own process and how to write for genuine audiences and purposes. They know where they’ve improved and where additional improvement is still necessary.

When given a chance to reflect on their experiences, students are wholly reliable on this front, and the result is rich data that tells us a lot about what students are learning.

We can measure these things and this measurement can lead us towards improving curriculum and instruction. The Meaningful Writing Project does this by simply asking students to articulate the writing assignment which was most meaningful to them and having them explain why it was meaningful.

Over time, we can give students more experiences with writing that they find meaningful, and therefore maximize their learning.

Assessment is very complicated if we insist on treating it as something that can coalesce into single, simple measurements.

Whatever assessment tools we employ in the future must acknowledge and work within those complications. For too long we’ve been trying to find a hack, a shortcut to prove learning is happening, but thus far, we’ve only come up with some really bad proxies, proxies that are so bad, they’re more misleading than they are illuminating.

This is an opportunity to do better. I hope we take it.

 

[1] The learning objectives are usually vague and unobjectionable, and easy enough to attach to something I was planning on doing anyway.
