Listen to some critics of higher education, and you'll hear constant calls for more accountability and assessment. Colleges don't do enough to figure out whether students actually learn anything, the line goes, whether there is any "value added." Several hundred academic administrators from around the country heard a version of this from Education Secretary Margaret Spellings, when she told their meeting: "In higher education, we've invested tens of billions of taxpayer dollars over the years and basically just hoped for the best."
The Spellings text was largely a rehash of statements she and others have made about why accountability and assessment matter. Earlier in the day, however, attendees at the National Symposium on Postsecondary Student Success heard a very different presentation about assessment. Three administrators of public universities -- none of them flagships -- spoke about how assessment actually works on their campuses. While their experiences vary, they agreed that in key respects, the policy question of "do colleges need assessment?" misses the point. Their message: There's much more assessment going on than policy makers seem to realize; making assessment really matter is much harder than talking about why colleges should have it; and quantitative measures that purport to show "value added" may or may not do so.
"All these national calls would make you think nothing is happening," but that's not true, said Jon Young, senior associate vice chancellor for academic affairs at Fayetteville State University.
To illustrate the point that plenty of assessment already takes place, Howard Cohen, chancellor of Purdue University Calumet, described the major assessment efforts on his campus. The 9,300 students at Cohen's campus are in many pre-professional programs -- and those programs tend to have specialized accreditors, all of which have revamped standards in recent years to require evidence of student learning. At his institution, that means major studies on student learning in all of the top majors: engineering, management, nursing, education, technology and therapy.
Then there is Purdue's regional accreditor, the North Central Association of Colleges and Schools. North Central offers members the chance for an alternative accreditation review through its Academic Quality Improvement Program, which replaces periodic reviews with more regular sessions, all focused on showing improvements in outcomes. Purdue Calumet participates in that. Plus the university maintains 20 advisory boards of business and community leaders who employ graduates in various fields. Each advisory board is regularly surveyed about how the university's alumni perform. All in all, Cohen stressed, there is a whole lot of assessment going on.
At Fayetteville State, a historically black university in North Carolina, "assessment and value added have been part of our culture for a long time," said Young. He said that's frequently the case at black colleges, which take pride in admitting students who may not have received great high school educations, but who are able to develop and thrive. The value of black colleges is evident, Young said, not by things like average test scores coming in, but by the growth that takes place during students' time in college.
Among the assessment tools used at Fayetteville State -- most of them for some time, Young said -- are the National Survey of Student Engagement, institutional surveys of students and alumni, the Noel-Levitz Student Satisfaction Inventory, and a "rising junior exam" that students take at the end of their sophomore years to show that they have reached certain competency levels.
Young said that policy makers calling for the collection of more data need to slow down and realize how much data is already being collected. "We're good at collecting data," Young said. "But we can also collect data, nod our heads, and just move back to our business" as it was before.
That raised the issue that many said was far more crucial than whether one test or another was required: how to get buy-in, from students and faculty members, to take assessment seriously. The administrators argued for a mix of carrots and promises not to use certain sticks as the best way to do this.
John D. Welty, president of California State University at Fresno, said that his institution has been engaged in rigorous assessment since 1998, with the programs evolving along the way. At the beginning, professors were encouraged to participate with an offer from the administration: Five-year program reviews, which were "much hated" by professors, could be skipped if departments adopted systems to measure student learning. Faculty members also needed more assurances, he said. As a result, the administration pledged not to make the results of assessment measures part of the budgeting process, so departments didn't need to worry that an honest (and potentially harsh) look at a department would just lead to a program being killed.
Looking back, Welty said this was a good move to make -- and that departments are generally taking assessment seriously. "We allowed faculty to embrace assessment," he said, rather than forcing it on them.
Now, Fresno is looking at whether students have the right motivations. For instance, the university wants to expand use of the Collegiate Learning Assessment, but also wants to make sure students take it seriously. So Fresno is looking at paying students to take the test, as some other campuses do, Welty said.
Assuming faculty and students are on board, there's also the question of whether measuring student learning necessarily answers the questions people have about the value of higher education.
Young, for example, said that at Fayetteville State, the top conclusion of most assessment tools is "something we already knew," namely that many students arrive poorly prepared. Other assessment tools show that students are learning, so from a "value added" perspective, the university should be praised. But what he thinks Fayetteville State really deserves praise for are the many cases where it not only raises student knowledge levels, but raises them to genuinely appropriate levels, rather than merely clearing some value-added milestone.
"We cannot substitute value added for basic competencies," he said.
Purdue's Cohen said that there were other problems with "value added" as an assessment measure. If you have two sections of an introductory math course, populate them with similar students, and use different teaching techniques, you can probably draw conclusions (if the samples are large enough) about which technique is more effective. What Cohen worries about, however, is the idea that you can measure the "value added" of college generally, when so much of students' experience is beyond the control of colleges.
He noted, for example, that students transfer in and out -- are all of their knowledge gains presumed to be "added" at any one institution or at all of them? If a student at your university takes one or more courses online from another institution, how is that impact measured? If -- as is the case at Purdue Calumet -- 80 percent of students work at jobs at least 20 hours a week, with many of those jobs related to their education, don't students learn something at those jobs?
Cohen stressed that he wasn't saying that colleges shouldn't think about what they do, or what students learn. But unless "the scope is sufficiently narrow," he said, he doubted the assessments currently in use would really tell people what value is added in college.