When researchers talk about college access, there's no question which issues they're discussing, nor is there any shortage of data to draw on. But when it comes to a related topic -- success once students make it to college -- it's far less clear what policy makers are referring to, or even whether the term itself is meaningful.

What is “college success”? A group of presidents and administrators grappled with the question on Monday at a panel assembled to coincide with the release of a new book, College Success: What It Means and How to Make It Happen, published this week by the College Board. Juan Williams of National Public Radio moderated the discussion, which featured the anthology’s editors, Michael S. McPherson and Morton Owen Schapiro; the president of Miami Dade College’s Homestead campus, Jeanne Jacobs; and Linda M. Clement, vice president for student affairs at the University of Maryland at College Park.

Defining college success is a problem in the first place, the book's authors agree, because institutions of higher learning are so diverse -- not only in size, mission and location, but also in the types of students who enroll and in their goals once they graduate. And unlike college access issues, studying success in and after college requires statistics that in many cases don't exist -- or metrics whose utility may be disputed.

“What college success means depends so much on what [kind of] college you’re talking about and what students you’re talking about,” said McPherson, who is president of the Spencer Foundation and former president of Macalester College. He suggested that the best measures of college success would be specific ones tailored to individual institutions. As for what that means in practice, “each one can answer that question for themselves,” he said.

The panel was conspicuously divided into two halves: on one side sat McPherson and Schapiro, the president of Williams College; on the other were two representatives of public institutions whose students are much more likely to come from disadvantaged backgrounds and rely on financial aid. Juan Williams, the moderator, didn’t hesitate to point out that the administrators from Miami Dade and the University of Maryland seemed more willing to embrace strict accountability measures and the data collection that approach requires.

Schapiro, also an economist, suggested that there might be “some appetite” among faculty for more in-house accountability measures, but explained that much of the resistance stems from a fear that increased empiricism could lead to a one-size-fits-all testing regime -- like a No Child Left Behind for higher education.

He stressed the need to more rigorously link what colleges do to their students’ professional and other outcomes after they graduate. Otherwise, it’s impossible to tell which teaching methods work and which don’t. Schapiro brought up a hypothetical proposal to compare students’ incoming SAT scores with their outgoing GRE scores to determine whether students improved (and, presumably, to correlate those gains with majors and other factors in the college experience).

“I would do that, but then again, I’m an empirical economist,” he said. Professors in the English department, he imagined, would view it as “heresy.”

When colleges experiment with different ways to teach critical thinking skills, as Williams does, Schapiro said, they should then empirically test which approach worked best. Higher education is “horribly bad at this,” McPherson said -- colleges tinker with class sizes all the time, to take one example -- but they “never, ever look at the results.”

“Even at Williams, there’s not as much of an appetite as there should be,” Schapiro said.

On the other side of the table, a different story was being told. At Miami Dade, which enrolls some 165,000 students on eight campuses, a “culture of assessment” has taken root, Jacobs said. With less than half of its students speaking English as their native language, 58 percent coming from low-income families and 87 percent belonging to minority groups, “success” can be a much more straightforward concept. Are students graduating? Are they being hired? Are employers happy with the graduates the college produces?

At Miami Dade, that means not only evaluating students’ critical thinking, quantitative and other skills, but also working with employers and other community stakeholders to assess how well the college’s graduates fare after leaving.

But even diligently gathered data doesn’t necessarily answer why some students are more likely to graduate than others, and why gaps in achievement persist between various groups. In one of the book’s chapters, by Sarah E. Turner, a professor of education and economics at the University of Virginia, the data are presented as “relatively uncontroversial, if somewhat sobering.” Survey data she cites, for example, find that the percentage of students completing a degree has decreased, while among those who do earn a B.A., a greater proportion is taking more than four years to graduate.

Still, Turner writes, “the social science task of explaining differences among individuals and across collegiate institutions is a daunting challenge.” Are the differences due to the kinds of students attracted to certain institutions, or the institutions’ resources themselves?

The chapter doesn’t come to a conclusion and, like much of the book, it reiterates the need for further research. Like the members of the panel, however, Turner notes that “there is much to be said for transparency and accountability in outcome measures…. Such information holds the promise of not only serving as an important research tool, but also as a tool to improve the choices of students and their families.”
