
The Voluntary System of Accountability was born six years ago as a defensive act, seeking to demonstrate to politicians and critics that colleges and universities were willing to show the public how they were performing in key ways.

But the two college groups that created the VSA, the Association of Public and Land-grant Universities and the American Association of State Colleges and Universities, are now revising it, in large part to encourage more public universities to participate.
 
The biggest change expands the ways in which institutions can report their student learning outcomes, to try to make it more useful for campuses themselves. But that adjustment seems unlikely to be significant enough to overcome the objections that have led scores of colleges to opt out of the voluntary reporting system, primarily because they did not like the system's dependence on standardized measures that allow for comparability across colleges.
 
And the discussion about the accountability system says a lot about what has changed -- and what has not -- since the debate about student learning outcomes roared into public view at the prodding of Education Secretary Margaret Spellings in the mid-2000s.

While campuses are engaged in significantly more activity around assessing how much their students are learning, the fundamental tension remains: Is measuring student learning important primarily to help professors teach better and students learn more, or to give students and parents more information about which institutions are performing better?  
 
The VSA Is Born

M. Peter McPherson was just three months into the presidency of the land-grant association when he told Spellings' Commission on the Future of Higher Education that public colleges would band together to create a way of informing the public about their performance in terms of graduation rates, graduates' plans, and the like. The Spellings Commission had been beating up colleges and universities for their perceived unwillingness to be more precise and public about how well they were educating their students and otherwise fulfilling their obligations, and the panel appeared -- to some -- hell-bent on establishing a federally mandated system to force them to do so.

McPherson's proposal -- and the Voluntary System of Accountability's formal release a year later in the form of the College Portraits website -- seemed to take the steam out of the drive for a compulsory reporting scheme.

But the approach also received significant pushback from campus leaders and faculty groups that criticized its requirement that participating campuses use one of three standardized tests -- the Collegiate Assessment of Academic Proficiency (from ACT), the Measure of Academic Proficiency and Progress (from the Educational Testing Service, now called the ETS Proficiency Profile) or the Collegiate Learning Assessment -- to report their progress in student learning.

In selecting those three tests and requiring participants to report the difference in how freshmen and seniors performed -- to try to measure the "value added" contribution that institutions made to their students -- critics said the associations were buying into the argument that student learning had to be measured in ways that were comparable across institutions to be useful to policy makers and the public.

"The university has concluded that using standardized tests on an institutional level as measures of student learning fails to recognize the diversity, breadth, and depth of discipline-specific knowledge and learning that takes place in colleges and universities today," the University of California’s then-president, Robert C. Dynes, wrote to the organizers of the accountability effort in 2007.

UC and its 10 campuses were among the highest-profile non-participants, but they were far from alone. About two-thirds of the roughly 500 members of APLU and AASCU participated in the voluntary accountability system at all, and of those, only half reported scores using one of the three measures. The country's most prestigious public universities were disproportionately represented among those that opted out.

“The VSA continues to be unique among higher ed accountability systems in seeking to get substantial reporting on educational outcomes,” says McPherson, whose vow to the Spellings Commission helped him gain a reputation for political astuteness early in his tenure as a higher ed association leader. “But we want a larger group to report” their learning outcomes, he adds.

Why were relatively few institutions doing so? A 2012 analysis of the VSA's four-year pilot project on student learning outcomes, conducted by the National Institute on Learning Outcomes Assessment, cited two potential reasons: the data presented were not particularly useful to users, and there was a lack of support within higher education for the measures used in the VSA.

"We ... found that the standardized tests of student learning originally approved for inclusion in the pilot lack credibility and acceptance within a broad sweep of the higher education community which, in turn, serves to undermine institutional participation in the VSA," the report said. "Institutions participating in the VSA and other non-participating institutions would like to expand the number and nature of the student learning measures in order to more accurately portray student attainment and provide more useful and meaningful information for multiple audiences."

Stanley O. Ikenberry, a co-author of the report and former president of the University of Illinois and the American Council on Education, says the VSA had a huge political impact, in terms of showing that colleges and universities were not afraid of reporting on their performance. He also credits APLU and AASCU with "jumpstart[ing] the assessment movement across the broad scope of higher education."

Colleges and universities, and their faculties, are engaged in much more experimentation in understanding and measuring their students' learning now than they were just a few years ago, and are much more sophisticated about it; the VSA, along with the push from the Spellings Commission, played a major role in that change.

But Ikenberry said the original vision of the VSA also embraced what he called an "overly simplistic" view (favored by many members of the Spellings Commission) that "we could find a single test score that could be comparable across the incredible diversity of institutions and students [in higher education] that would be meaningful.

"The evidence we've found since then is clear that that’s not a very realistic vision of what the field ought to be trying to accomplish," Ikenberry said.

Much better, the NILOA report argued, would be for the VSA to "expand the range of accepted assessment tools and approaches," such as portfolios and the Association of American Colleges & Universities' VALUE rubric, and to allow colleges to report data on student learning at the program level rather than solely at the institutional level, since that sort of information may be most helpful to students and to the colleges' faculty members themselves.

"[R]estricting the reporting of student learning outcomes to a test score may have led campuses to ignore the many other relevant indicators of student learning that might have been shared," the NILOA report said. "The design of the next version of the College Portrait template should serve an educational and advocacy role for alternative assessment methods that use authentic student work to make judgments about the quality of student learning."

Rebooting the VSA

In reworking the VSA to make it more useful to colleges and students, the public college groups have taken the advice from the NILOA report to heart -- to a point.

Beginning this year, public colleges and universities will have more options for fulfilling the VSA's student learning outcomes reporting requirement. They can continue to report "value added" scores for the three standardized measures (CLA, CAAP and the ETS Proficiency Profile), and they can also report senior-only scores for those three exams, as long as they compare the results to peer institutions.

In addition, they can, for the first time, report institution-level results for the AAC&U's VALUE rubrics for critical thinking or written communication, either comparing their own freshmen and seniors or reporting seniors only (benchmarked to peer institutions). Unlike the other tools included in the VSA, the VALUE rubrics are not standardized, though the results can be reported using a numerical scheme.
 
But that's as far as the groups were willing to go, says Christine M. Keller, executive director of the VSA and associate vice president for academic affairs at the Association of Public and Land-grant Universities. They stopped short of accommodating those who want to use measures (such as electronic portfolios or home-grown tools created by individual professors or departments) that focus on what and how much students learn within a particular program or college alone, without some way of comparing those results beyond the institution's own borders.

"We know a lot of institutions are focused on setting forth a set of learning objectives for their own institutions and designing measures to meet their own internal objectives, and that's entirely valid," Keller said. "But I don’t buy the argument that you can’t use outcomes measures for institutional accountability and for internal purposes, and we decided that going down to the program level, or allowing measures that could not be benchmarked against other institutions, would be going too far afield from the original intent" of VSA.

"Everyone we spoke with felt the VSA just can't go there -- it needs some kind of comparison or benchmarking," she said.

Some Won Over

VSA officials are hopeful that the inclusion of the VALUE rubrics will, as the NILOA report predicted, entice more colleges and universities to participate (or participate more fully) in the accountability system.
 
It is likely to do so in the University of Kansas' case, says Paul Klute, special assistant to the senior vice provost there. Kansas has experimented with the Collegiate Learning Assessment and ETS's Proficiency Profile, but found that neither of the metrics was "useful for describing student learning at the University of Kansas," mainly because of concerns about how representative the data were, Klute said. So while the university has participated in the VSA, it has not reported data on student learning during the pilot project, and would be unlikely to do so given the current requirements.

But the university has been pleased so far by its experimentation with the VALUE rubric for written communication, and in its own pilot project, "every department that teaches undergraduates used them in some way," he said. A majority used the rubric in its original form, some adapted it in minor ways, and the rest "threw it out and created their own."

Kansas officials plan to "roll up" to the institutional level the scores that instructors in its various departments assign to students based on how well they fulfill the requirements of the VALUE rubric in written communication. And those institutional scores, presumably, will be comparable to the institutional VALUE rubric scores that other participants report to the VSA -- allowing for the kind of comparability that APLU desires, but in a way that Kansas faculty can live with.

"There's a level of comfort that's there with this [form of assessment] that wasn't there with the others," said Klute. Among other things, he said, "there's less of an imposition that somebody's going to come in and commandeer your class to give this exam" that isn't necessarily connected to the material.
 
Progress, But Maybe Not Enough
 
Although the University of California has been a high-profile holdout from the VSA, there are many things about the accountability system that the university likes -- so much so that it has copied many of them in its own accountability system, said Hilary Baxter, interim director for academic planning, programs and coordination for the UC system.

UC's main objection all along has been the attempt to standardize something -- the quality and degree of student learning -- that varies enormously from student to student and program to program, let alone university to university, Baxter said.  

Adding the VALUE rubric is a definite step in the right direction for VSA, Baxter said, "because it's a little more flexible, and welcomes faculty prerogative" more than the other methods VSA recognizes.

But Baxter left the clear impression -- though she declined to say so flatly, since that's ultimately a faculty decision at UC -- that the change would not be sufficient to draw the university into the VSA, primarily because UC officials would be disinclined to mandate use of one particular measurement tool, or to standardize reporting on it to make it possible to create one single institutional number.

"The faculty position on standardized approaches hasn't changed," she said. "AAC&U rubrics can be a constructive tool, but having options for meaningfully assessing and reporting on student learning remains a prerogative that requires a certain amount of flexibility."

The Spellings View

Six years after her commission lit a fire under colleges on student learning outcomes, Margaret Spellings herself is pleased to see it still smoldering -- though she wouldn't mind if it were raging a bit more intensely.

"I'm glad to see VSA taking this step, and if it gets more people interested in participating, it's a good thing," the former education secretary said in an interview.

Spellings said she recognized that student learning is a complex endeavor, and that it would be a mistake to "insist on one and only one way to evaluate it." After all, she said, even in gauging the quality of restaurants, "we have Zagat's, and we have AAA, and people can use the one they prefer."

But ultimately, Spellings said, the goal has to be to develop "tools and metrics, multiple or otherwise, that are comprehensible and actionable for consumers and policy makers" and "to reassure them that institutions are doing what we need them to be doing."

"And we’re still," she added, "a good distance away from that."
