A new correspondent writes:
I'm a math instructor at a community college in California, and recent legislation is forcing our hand with curriculum design and placement. We will soon be required to place all students into a transfer-level math class, and we will only be able to require developmental coursework if we can provide evidence that the student would be "highly unlikely to succeed" in the transfer-level class. Now, I'm all for providing shortened pathways and using multiple measures in placement to place students in higher-level classes if they're likely to succeed in them, but I cannot pretend to believe that most students who can't do basic arithmetic would succeed if placed directly into Statistics or Pre-Calculus, even under a corequisite model.
Studies have shown that many students would do better if placed directly into transfer-level courses, and I'm happy to do that. However, they also show that a significant portion of the population (sometimes nearly 50%) still does not succeed under those circumstances. I would like us to continue to provide two- or three-semester paths for the sorts of students who would drown if thrown directly into the deep end. But with this new legislation, we may be asked to justify their existence.
Do you or your wise and worldly readers have experience with this? When is shoving students into a more advanced class just too much?
This may seem roundabout, but I promise I’m going somewhere with this.
Alternatives I’ve seen to traditional remediation:
Multi-Factor Placement. I’m a big fan of this one, which involves looking at indicators beyond just the Accuplacer score. Selective colleges have known for years that four years of high school performance will give you a better indication of future performance than one day of a standardized test; I don’t know why the same logic wouldn’t apply here. The trick, for many community colleges, is that we’ve never developed the infrastructure to evaluate transcripts en masse. We never had to. When every high school in the county calculates grades differently, there’s a bit of a learning curve. Still, once the implementation details get nailed down, I see a lot of upside to this one. (For one way such a rule might be written down, see the sketch after this list.)
Co-Requisite. This is the ALP model in English. The idea is to rethink remediation as just-in-time support for a college-level class. It has worked well in English, even though the small class sizes make it hideously expensive.
Self-Report. I saw John Hetts present on this a year or two ago. Some California schools use high school GPA for placement, but they rely on students to self-report their GPAs. It struck me as a bit...trusting...but apparently the schools that have tried it have had good results.
Non-Credit. Some colleges have moved lower levels of remediation out of the curriculum entirely, handing it over to the non-credit side of the college. That’s different from just saying that the credits don’t count; it actually removes them from the semester schedule and from financial aid. The idea is to allow for more flexible schedules -- why do 16 weeks when 6 would do? -- and to conserve students’ Pell allocations. I’ll admit being intrigued by this.
Self-Paced. This is sometimes called the “emporium” model. It’s mostly used in math. The idea is to use technology to allow student self-pacing, with faculty present as resources to help students when they get stuck. My previous college did this for a while, with mixed results. Some students sped up and got through more quickly, as intended. Some took more or less the same amount of time they otherwise would have. But a plurality of them actually slowed down. It’s unclear, at this point, whether that was because they were actually shoring up their skills or just postponing the inevitable.
Biology/Chemistry. In a couple of settings, I’ve seen remedial classes in lab sciences, such as biology and chemistry. The idea, I think, is to acquaint students who somehow missed it in high school with basic lab skills and the scientific method. The one time I saw it tried, it did not go well. Given what we now know about remedial courses and their effect on degree completion, I’m skeptical of this one. With all good intentions, it strikes me as likely to do more harm than good.
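Since I mentioned multi-factor placement above, here is a minimal sketch, in Python, of what a multiple-measures placement rule could look like on paper. Every measure and threshold in it is invented for illustration; none of it is drawn from an actual college's rubric or from California's rules.

```python
# Hypothetical multiple-measures placement rule. All thresholds are invented
# for illustration; real rubrics differ by college and by state.

def place_student(hs_gpa: float, accuplacer_score: int, years_since_hs: int) -> str:
    """Recommend a placement from several measures instead of a single test score."""
    # Recent high school performance is treated as the strongest single signal.
    if years_since_hs <= 5 and hs_gpa >= 3.0:
        return "transfer-level"
    # A strong test score can also place a student directly.
    if accuplacer_score >= 85:
        return "transfer-level"
    # Middling indicators: transfer-level with corequisite support.
    if hs_gpa >= 2.3 or accuplacer_score >= 60:
        return "transfer-level with corequisite support"
    # Otherwise, a developmental sequence.
    return "developmental sequence"

print(place_student(hs_gpa=3.4, accuplacer_score=55, years_since_hs=2))   # transfer-level
print(place_student(hs_gpa=2.0, accuplacer_score=45, years_since_hs=12))  # developmental sequence
```

The point of writing it out isn't the particular cutoffs; it's that any such rule has to be explicit and applied consistently across high schools that calculate grades differently.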
The core of the question is about more precise placement, and other than self-reporting, I don’t know that anybody has cracked that nut.
I’d be wary of classroom-level anecdote, though, and here’s why. (I’ll use made-up numbers to illustrate the point.) Compare the traditional system to a waiver-based system:
Traditional system:
100 students start in 1st level remedial math.
70 pass. 60 come back and take 2nd level remedial.
45 pass. 40 return and take college-level.
30 pass that.
Waiver-based system:
100 students start at college-level.
60 pass.
From the perspective of the instructor of the college-level class, the waiver-based system is an obvious failure; the pass rate in her class went from 75 percent (30 out of 40) down to 60 percent. Probably, some of those additional students who failed were badly overmatched. What could the administration possibly be thinking?
From the perspective of the institution, though, the waiver-based system is a raging success: the share of the original cohort that passed a college-level math class went from 30 percent to 60 percent, and students used less financial aid to do it. Why would anyone oppose such an obvious good?
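To make that disconnect concrete, here is a minimal sketch, in Python, that runs the same made-up numbers from the illustration above through both systems. Nothing here is real data; the figures are the hypothetical ones in the lists above.

```python
# Hypothetical cohort from the illustration above -- made-up numbers, not real data.

def traditional_pipeline():
    """100 students, two levels of remediation, then college-level math."""
    start = 100
    # 70 pass 1st-level remedial, 60 return; 45 pass 2nd-level, 40 return for college-level.
    return_college = 40
    pass_college = 30
    classroom_rate = pass_college / return_college  # what the instructor sees
    cohort_rate = pass_college / start              # what the institution sees
    return classroom_rate, cohort_rate

def waiver_pipeline():
    """Same 100 students, placed straight into college-level math."""
    start = 100
    pass_college = 60
    rate = pass_college / start
    return rate, rate  # classroom view and cohort view coincide

trad_classroom, trad_cohort = traditional_pipeline()
waiver_classroom, waiver_cohort = waiver_pipeline()

print(f"Classroom pass rate: {trad_classroom:.0%} -> {waiver_classroom:.0%}")  # 75% -> 60%
print(f"Cohort completion:   {trad_cohort:.0%} -> {waiver_cohort:.0%}")        # 30% -> 60%
```

Same students, two denominators: the instructor divides by the 40 who reached her class, while the institution divides by the 100 who started.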
That disconnect leads to mistrust and frustration on both sides. Some faculty wonder why the administration is so out-of-touch as to put unprepared students in a class. Some administrators wonder why the faculty are so intransigent about defending a system that fails so many students. Depending on your starting point, both are kind of right.
That’s why I’m reluctant to defer entirely to classroom anecdote. Even with the purest of motives and the best of faith, that angle misses a key part of what’s going on. A professor who complains that the college-level class has been watered down is right, as far as that goes, but she misses the fact that more students got through. The 30 who passed in the second system and not the first are materially better off than they otherwise would have been. That difference may be invisible at the level of the individual classroom, but it’s real, and it matters.
My sense of it, for what it’s worth, is that we need to be willing to admit that the standard model that emerged over the last few decades isn’t terribly effective, and that we don’t yet know what the ideal model would be. To my mind, that calls for widespread experimentation. (Some states are helping, even if they don’t mean to, via legislative micromanaging.) In other words, I don’t know the answer, but I’m glad we’re finally asking the right questions.
Good luck managing a legislative mandate. Whatever else happens, I prefer to avoid those.
Wise and worldly readers, what do you think? Is there a surefire way to aim remediation only at the students who actually need it?
Have a question? Ask the Administrator at deandad (at) gmail (dot) com.