There is an informed debate taking place in living rooms, courtrooms and newsrooms about the value of a college education, the utility of standardized tests and higher education’s mission to serve the public good. Unfortunately, in his article “The Misguided War on the SAT,” New York Times columnist David Leonhardt opted not to participate in this discussion. Instead, drawing on research from Harvard University’s Opportunity Insights group, Leonhardt published a loving ode to elitism and the SAT disguised as informed reporting.
Myriad technical, logical and practical problems in Leonhardt’s article have been addressed by experts, including Jake Vigdor, a professor of public policy and governance at the University of Washington; Jon Boeckenstedt, vice provost of enrollment management at Oregon State University; and Jesse Rothstein, a professor of public policy and economics at the University of California, Berkeley. But it’s still worth addressing a few of the larger issues the article brings to light.
What Is Optional?
Leonhardt’s argument against test-optional policies suggests that these policies prevent students from benefiting from a strong score, but this isn’t true. The only colleges that refuse to consider test scores are those with test-free policies, yet the 3 to 4 percent of colleges with test-free policies have not been the target of ire from advocates for standardized testing.
The attention of those decrying the “war on the SAT” has been squarely focused on test-optional colleges. Yet, paradoxically, their arguments frame test optional as restricting a student’s ability to capitalize on a good score. This is either intellectually dishonest or a gross misinterpretation.
Test optional is exactly what it says—optional. A test-optional policy means the college allows each applicant to choose whether or not to submit a score. It gives applicants the option of whether to participate in test prep, whether to take a standardized test and whether to submit a score. At each step along the way, test-optional policies empower students to make informed choices about how to spend their time and how to display their abilities. If a student is a strong test taker or particularly proud of their test performance, they can submit that score.
At Bowdoin College, which has been test optional since 1969, 36 percent of students who enrolled in 2022 submitted an SAT score, and another 21 percent submitted an ACT score (a combined total of 57 percent); the combined figure was 62 percent at Williams College, 76 percent at Rice University, 19 percent at Trinity College in Connecticut and 44 percent at Northeastern University. Clearly, none of those institutions stopped students from taking advantage of a score they were proud of. Suggesting otherwise is suggesting that universities are lying about either their practices or the data they have submitted to the federal government.
All the benefits of testing continue to exist in a test-optional environment, though critics of the policy desperately want to pretend they do not. A test-optional policy simply reduces the importance of testing and puts it on equal footing with other optional elements in applications, like Advanced Placement classes, essays, interviews, extracurricular activities, donating a building or claiming genetic affiliation with the institution.
The ire directed at optional testing but not at other optional elements of the application process should raise questions.
A Narrow Definition of Success
Does creating winners and losers serve the country? Or does it merely perpetuate inequality and exclusion?
A core assumption that seems to undergird many of the arguments against test-optional policies—and against diversity, equity and inclusion; race-conscious admissions; and affirmative action—is that the purpose of college is to rank and sort members of society. And the tools for ranking and sorting should exist unquestioned in perpetuity.
Those dedicated to this sorting tend to believe in standardized tests. They tend to look not at whom the tests hurt or what the tests miss but instead at their sorting power. Research from both the College Board and Harvard’s Opportunity Insights group takes this approach, correlating SAT performance to measures of success without establishing whether those measures are meaningful or good.
The College Board says that a below-average SAT score (less than 1000) predicts a B first-year college GPA (3.19) and a perfect score predicts a GPA 0.65 points better (3.84). The Opportunity Insights paper discussed in Leonhardt’s article reports that at the 12 Ivy-plus colleges, students with perfect SAT scores earned a GPA 0.43 points better than those who scored a 1200.
But even if there is a meaningful reason for being concerned with who gets a B or A in the first year of college, framing this as success is disingenuous at best. Even if some colleges find it useful to engage in this sorting, why would this be in the interest of all colleges? Why would colleges whose mission is to serve the public good follow this model?
This seems more of an exercise in unnecessary sorting than any meaningful measure of potential for success. It’s only by very narrowly defining success that you can claim that the practice of extreme ranking and sorting has value.
Leonhardt makes just such an argument, writing that these colleges “want to identify and educate the students most likely to excel. These students, in turn, can produce cutting-edge scientific research that will cure diseases and accelerate the world’s transition to clean energy.” But there isn’t evidence that attending these institutions creates such students. In fact, some evidence suggests that attending these exclusive colleges is more likely to quash the motivation to engage in cutting-edge research and encourage the singular pursuit of personal wealth.
Even the research that Leonhardt cites—finding that SAT/ACT scores are predictive of postcollege success—defines success as a circular replication of inherited wealth and status. That Opportunity Insights research defines success for students coming from Ivy-plus colleges in terms of whether they go on to attend an Ivy-plus or other “elite” graduate school or work for an employer that hires a large percentage of Ivy-plus graduates.
This would seem to be more in line with the 1800s model of higher education, in which colleges were effectively finishing schools for the sons of wealth and privilege.
A Propaganda Battle
Unfortunately, debates over the value of standardized tests are frequently characterized by cherry-picking of data, misquoting and hyperbolizing of the opposing position, and generalizing based on anecdotes.
For example, claiming that colleges are remaining test optional “for diversity” ignores that almost every university that has changed its testing policy has cited multiple reasons for the change.
Leonhardt goes further, questioning the motives of admissions professionals and their understanding of the data. To make his arguments, he conveniently ignores data and statements from colleges he apparently deems unimportant.
When the University of Denver announced its test-optional policy in 2019, Todd Rinehart, vice chancellor for enrollment, wrote, “We want to place our focus on curriculum and performance in school and provide students the choice as to how their academic record is presented.”
Similarly, Scott Friedhoff, then the vice president for enrollment at the College of Wooster, wrote, in 2020, “Tests only provide a very small amount of additional information on student readiness for Wooster. For nearly all students, high school grades and coursework provide our admissions team all we need on academic preparedness. A team of our own students from Wooster’s Applied Methods and Research Experience (AMRE) confirmed this in a recent validity study.”
A 2022 report from the University of Tennessee concluded, in part, that the “ACT only adds predictive value in the top few [high school GPA] deciles, which is largely unhelpful for admissions decisions.” Data from other public colleges provides further evidence that testing offers them little value: for example, the Iowa Board of Regents concluded in 2022 that “the likelihood of graduating in four years was fairly consistent based on GPA, irrespective of the ACT score level.”
But test advocates tend to be uninterested in considering data from the colleges that serve the most students.
The research report that Leonhardt relies on so heavily even cautions that “our analysis applies only to Ivy-plus applicants, and the predictive power of test scores and GPAs may differ in other settings.”
Despite this caution, Leonhardt implies that policies from these colleges should be applied broadly, writing, “Given the data, why haven’t colleges reinstated their test requirements?” A chart illustrating the article, citing the Opportunity Insights research, proclaims, “Test scores are strong predictors of student outcomes after college.”
It’s only in the 37th paragraph that Leonhardt acknowledges the research caution, writing, “The SAT debate really comes down to dozens of elite colleges, like Harvard, M.I.T., Williams, Carleton, U.C.L.A. and the University of Michigan.”
Any full conversation about test scores and admissions policies should not only mention the Massachusetts Institute of Technology reinstating a test requirement (as Leonhardt does), but also the California Institute of Technology remaining test free (as Leonhardt doesn’t). It should mention Georgetown University, which not only requires standardized tests but requires applicants to submit all test scores, as well as New York University, which was test flexible prior to the pandemic and has been test optional since.
And if you’re going to write about Opportunity Insights’ research, why not mention the research group’s important findings on the influence of wealth on admission at selective colleges, which have been covered extensively in the Times?
Ignoring this context seems intentional and prejudicial.
The issue with standardized tests isn’t whether they measure some academic skills or provide a distinction between two candidates, but whether the skills and differences are statistically meaningful and worth the cost of the testing regime.
Addressing complex educational problems and questions requires recognizing that numbers are not data, data is not understanding and understanding is not knowledge.
Those arguing for reinstatement of narrow definitions of merit, readiness and ability seem dedicated to stripping higher education of its variety and restricting student choice.
In this moment of educational and political turmoil, we’re best served by raising the level of discourse, not repeating talking points and putting more power in the hands of nontransparent, unelected, unregulated test publishing corporations and colleges with billion-dollar-plus endowments.