Apparently, everyone believes in assessment. Taxpayers, lawmakers and, yes, educators: Just ask a university administrator for his or her assessment strategy and you'll likely receive a mouthful about institutional goals, outcomes measures and, of course, standards.

"It seems that we have an epidemic of standards nowadays," observed James DePaepe, director of the Office of Research and Evaluation at Central Washington University.

Yet piling on standards, and scooping up reams of data, won't necessarily amount to a coordinated, coherent, institution-wide assessment strategy that actually supports a college's mission, DePaepe and others gathered to speak about the subject on Wednesday agreed. Too much data, in fact, can be a bad thing, he suggested, amounting to a "shotgun approach to assessment."

In a session at the annual BbWorld conference hosted by the course management giant Blackboard, several administrators familiar with the two A's -- assessment and accreditation -- discussed how well-organized data gleaned from the former can help support efforts to gain (and preserve) the latter. Since accreditation is often one end goal of efforts to routinely and systematically evaluate students' (and professors') performance, the question repeated during the course of the session seemed fitting: "Does your organization have a culture of assessment?"

What such a culture means, in theory, is fairly easily defined, and panelists had no trouble doing so: Collect data from different sources; use multiple assessments to create as many data points as possible; evaluate both alumni and employer satisfaction once students graduate; institute program reviews; and the like. In terms of organizing such a culture of assessment, it seemed clear that faculty should have a central role in planning and evaluating programs; that standards clearly align with each other and with accreditation requirements; and that measures are internally consistent.

All that, said Richard Laramy, the president and founder of TTC Consulting, sounds like a "well-defined process." But, he asked, "What actually occurs within the institution?"

There's the rub, DePaepe implied. Too many institutions have "myopic channels" whose efforts are not coordinated or whose responsibilities are diffused. "We believe in institutions, as faculty and administrators, that ‘someone else’ is collecting the data or ‘everyone else’ is collecting the data," he pointed out. And when that's the case, who needs to worry about making it happen?

The answer, said David Harvey, a professor and coordinator of Geneva College's school counseling program, is "broad buy-in" across the institution. Before an accreditation site visit, a college needs to know that it will be ready with any kind of data it's asked for -- and that requires a systemwide approach, illustrating how specific goals were achieved (or altered) because of assessment data.

Willis Walter, vice president for research, planning and accreditation at Bethune-Cookman University, showed how it's done at his institution.

"We have a plethora of hardworking staff that are spread all around the university that’s collecting this data on our behalf from either the admissions office, registrar’s office even our graduate programs, and we track down students all the time who graduated 10-15 years prior so that we can keep up with our needs," he said.

That's necessary, he continued, because "most of the time we have short turnaround because you call me on Monday and ask me to give it to you on Thursday."

"I think the key element is who’s doing what? Who’s collecting what information and having that mapped out early."

Once collected, the data shouldn't just sit around: The information should be put into the service of an institution's mission, the panelists suggested. "The most important thing after collecting data is writing some sort of report.... [H]ow well has counseling changed over the last 5-7 years since the last visit and can you substantiate that by showing that you collected the data in a systematic way and then made some programmatic decisions because of those data?" said DePaepe, who is also an evaluator for the National Council for Accreditation of Teacher Education.

This being a conference organized by Blackboard, it wasn't surprising that the company's suite of learning tools entered the discussion. But the point was well taken: If assessment is to be organized systematically throughout an institution, it should work through software that everyone uses and that can be controlled centrally.

One way to work assessment into the overall course management paradigm is to start from the beginning by working with rubrics, the panelists said -- to, as Walter put it, "make the qualifiable quantifiable."

"It’s very difficult to assign a grade to a standard," DePaepe said. "But in a rubric it’s very easy to put all the evidence of how to meet these things [up front]. [A]s a reviewer, I can say, oh, these are the standards that are aligned.... So I can see the evidence right there without seeing something that might be contrived through grades, and we all know that there’s some subjectivity to all grading but there’s some evidence of grade inflation that shows us whether we need the standard or not."

So, in the end, DePaepe repeated the mantra that "a culture of assessment" is necessary. If most of what happens in Vegas stays in Vegas, he said of the conference's host city, then at least take that nugget back to campus.

Real-Time Data Mining for Student Success

Assessments at the end of a course, or starting with the first exam, may be enough to satisfy accreditors. But they might not, on their own, be able to help identify struggling students before it's too late. At another session on Wednesday, representatives from Purdue University summarized a pilot effort to find those who, while good students in high school, might not be prepared for the level of rigor of a major introductory-level college course. In short, an early warning system.

Historically, the university used little more than bubble sheets to collect and evaluate students' quiz scores in search of those who might need extra attention. With Blackboard, the university was able to take the warning system to a higher technological plane by building a predicted student success model -- essentially tracking students' use of the online learning platform, how often they attended office hours, their grades and other metrics -- and contacting at-risk students as early as possible (and in increasingly stern tones) to urge them to seek more help.
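The session did not spell out how the model itself was built, so the following is only a rough sketch under the assumption that a risk score is some weighted combination of platform activity, office-hour visits and early grades, with the outreach message escalating as the score rises. The weights, thresholds and field names are invented for illustration and are not Purdue's actual model.

```python
# Hypothetical sketch of an early-warning risk score built from the kinds of
# signals mentioned above (platform logins, office-hour visits, quiz grades).
# Weights and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class StudentActivity:
    logins_per_week: float    # use of the online learning platform
    office_hours_visits: int  # how often the student sought help
    quiz_average: float       # early quiz grades, 0-100

def risk_score(a: StudentActivity) -> float:
    """Higher score = higher estimated risk (toy weighting)."""
    score = 0.0
    score += max(0.0, 5 - a.logins_per_week) * 0.10     # low engagement
    score += max(0, 2 - a.office_hours_visits) * 0.15   # little help-seeking
    score += max(0.0, 70 - a.quiz_average) / 100.0      # weak early grades
    return score

def outreach_message(score: float) -> str:
    """Escalate the tone of the nudge as estimated risk rises."""
    if score < 0.2:
        return "Keep up the good work."
    if score < 0.5:
        return "You may benefit from attending office hours this week."
    return "Your early results are a concern -- please contact your instructor."

student = StudentActivity(logins_per_week=1.5, office_hours_visits=0, quiz_average=58)
print(outreach_message(risk_score(student)))
```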

The results so far -- statistically significant, though drawn from a limited sample -- are that students at a moderate level of risk improved the most, showing more "help-seeking" behavior, said Kim Arnold, an educational assessment and evaluation specialist at Purdue. There were more B's and C's and fewer D's and F's, she added, but also more course drops and withdrawals, suggesting that some students decided that perhaps they weren't prepared for the work after all.
