Arguably no concept in higher education policy today is more popular, or commands broader consensus, than institutional “skin in the game”: the idea that colleges should be on some sort of financial hook when their students don’t succeed.
Students and families are spending near-record amounts on postsecondary training, yet students are dropping out and defaulting on loans at disturbingly high rates. Mix in high-profile collapses like Corinthian Colleges and near-daily stories of college graduates struggling to find employment, and policy makers have come to the disheartening conclusion that our higher education institutions are incapable of doing the very thing we expect of them (producing capable graduates) unless threatened with financial sanctions.
Yet is this really the case? Colleges spend heavily to recruit and retain students, and every student who leaves without completing represents lost time, money and effort, requiring still more recruitment and retention dollars to replace. Students who don’t finish, or who complete but struggle to find employment, create negative reputational outcomes that institutions must spend time and resources to counteract. And when those same students leave with loan debt and struggle to repay it, the institution may spend yet more money and effort on default prevention services.
Put it all together and it’s pretty clear that when students fall off a successful education path, institutions pay a very real financial price. But this is exactly what having skin in the game entails. So why are we pushing for policy and regulation to accomplish what’s already taking place?
Making colleges pay a second time for poor outcomes doesn’t make much sense, although critics will counter that market-driven financial penalties simply aren’t doing enough to change institutional behavior. To believe that, however, we must believe that institutions, as producers, actually prefer to see some of their educational outputs fail.
That’s awfully strange. If institutions could control how much students learn, then why would they consciously choose to send unprepared graduates into the labor market where they struggle to find and keep employment? And if they could control who graduates and who doesn’t, what economic rationale do they have for producing a mix of graduates and dropouts? If they really had a choice, why would they ever produce anything other than graduates?
Colleges today face a continuous barrage of criticism over whether they provide value for money, so we’re left to ask under what circumstances colleges that capably control learning, degree completion and postgraduate employment outcomes would actually opt to produce substandard products. Does a business model that thrives on threats of greater regulatory scrutiny exist? Does a “student failure” model that brings additional enrollment management, default prevention and reputational costs make operational sense?
It’s pretty obvious that if institutions could control the outcomes that skin-in-the-game proposals want to see improved, they’d already be doing so. What college or university wouldn’t benefit from high graduation rates, stellar job placement statistics and graduates who earn enough money to comfortably pay off their student loans?
That’s also why the argument that the financial costs institutions already face just aren’t harsh enough doesn’t hold up. It’s like suggesting that my dog doesn’t speak English because I’m not spending enough time teaching him: the outcome and the process we’re trying to link simply aren’t connected the way we assume.
What’s missing from the equation is that academic success is a two-way street: students’ academic preparation, motivation and effort do as much to shape the outcomes we care about as the resources institutions provide them. Ignore that reality, and the obvious consequence of policies that hold only colleges accountable for outcomes they merely share control over is that institutions will put their effort into the thing they can control: picking the students they think are most likely to succeed.
All of this means that the losers from skin-in-the-game proposals end up being students who have less academic preparation and who come from underresourced school districts. We actually end up creating undermatching by putting greater pressure on colleges to pick “winners” and discouraging them from taking chances on individuals who may benefit the most from the type of education they offer.
Skin-in-the-game policies are also likely to hurt institutions that maintain open admissions policies and enroll larger percentages of minority and nontraditional students. Community colleges, with their limited state budgets and high transfer rates, would suffer most, but so would any college drawing large numbers of students from disadvantaged communities. In the long run, those institutions could face unsustainable financial and reputational costs.
There’s certainly a place for risk sharing in higher education; indeed, institutions already bear the real financial costs I described earlier. But if what we care about is making institutions more responsive to students’ long-run needs and expectations, then the solution lies in policies and practices that make those goals their focus.
Income-share agreements (ISAs), whereby colleges finance their students’ education in return for a fractional share of those students’ future income, are a good example. They create not only financial penalties but also financial rewards for institutions that help students achieve long-term, sustained success. Driving more institutional revenue through ISA-style agreements would also discourage the kinds of deceptive marketing practices that policy makers believe institutions engage in, since colleges and universities would, over time, have to absorb the financial costs of misrepresenting their programs’ job placement prospects.
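To see the incentive mechanics, consider a purely hypothetical ISA (the figures here are illustrative, not drawn from any actual program): a college covers $10,000 of a student’s costs in exchange for 4 percent of income over five years. A graduate earning $50,000 a year would remit $2,000 annually, or $10,000 over the term, roughly returning the college’s investment; a graduate who struggles to find work would pay little or nothing, and the college would absorb the loss. The institution’s return rises and falls with its students’ actual labor-market success.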
It’s easy to think that simply imposing penalties on bad actors will fix the problem, but the logic has to be there to justify the approach, and the basis on which today’s risk-sharing proposals are being crafted doesn’t meet the standards of sound policy. We owe it to both colleges and students to craft policies that work toward, not against, the system’s overall objectives.