A new study has found that two of the four main parts of the ACT -- science and reading -- have "little or no" ability to help colleges predict whether applicants will succeed.

The analysis also found that the other two parts -- English and mathematics -- are "highly predictive" of college success. But because most colleges rely on the composite ACT score rather than individual subject scores, the study questions the value of the entire exam.

"By introducing noise that obscures the predictive validity of the ACT exam, the reading and science tests cause students to be inefficiently matched to schools, admitted to schools that may be too demanding -- or too easy -- for their levels of ability," says the paper released Monday by the National Bureau of Economic Research (abstract available here).

ACT officials said that they were still studying the paper, of which they were unaware until Monday. But they defended the value of all parts of the test.

The authors of the paper are Eric P. Bettinger, associate professor of education at Stanford University; Brent J. Evans, a doctoral student in higher education at Stanford; and Devin G. Pope, an assistant professor at the business school of the University of Chicago. At a time when the ACT has grown to roughly equal market share with the SAT, the authors write that misuse of ACT data could hinder efforts to raise college completion rates.

The research is based on a database with information about every student who enrolled at a four-year public university in Ohio in 1999. The authors obtained information about high school and college grades -- and found their results consistent for students of different skill levels and for those who enrolled in colleges with different levels of difficulty in winning admission. (For comparative purposes, the authors also used data on students who enrolled in a private Western institution, Brigham Young University, and found the same patterns.)

The authors note that because colleges receive the score breakdowns as well as the composite scores, there is nothing to prevent admissions officers from considering only some parts of the ACT, or even weighting the different parts of the test in different ways. But they found that, overwhelmingly, colleges fail to do so and instead rely on a composite score that the authors find anything but reliable. As part of their study, the researchers compared students who earned the same composite scores but different subscores in different sections, and found that similar composite scores don't reflect similar chances of college success -- their value depends on the subjects in which students scored well.
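The kind of comparison the authors describe can be illustrated with a small regression sketch. The example below is purely hypothetical: it uses synthetic data and ordinary least squares rather than the paper's data or methods, and simply shows how a model built on the four subscores can predict first-year GPA better than one built on the composite alone when, as the study reports, only English and math carry the signal.

```python
# Illustrative sketch only: synthetic data, not the NBER paper's dataset or model.
# Compares two simple regressions of first-year college GPA:
#   (1) on the ACT composite alone, and
#   (2) on the four subject subscores separately.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical subscores on the ACT's 1-36 scale.
english = rng.normal(21, 5, n).clip(1, 36)
math    = rng.normal(21, 5, n).clip(1, 36)
reading = rng.normal(21, 5, n).clip(1, 36)
science = rng.normal(21, 5, n).clip(1, 36)
composite = (english + math + reading + science) / 4  # simplified, unrounded composite

# Assumption mirroring the study's finding: GPA depends on English and math,
# while reading and science contribute only noise.
gpa = (1.0 + 0.05 * english + 0.05 * math + rng.normal(0, 0.4, n)).clip(0, 4)

for name, X in [("composite only", composite.reshape(-1, 1)),
                ("four subscores", np.column_stack([english, math, reading, science]))]:
    model = sm.OLS(gpa, sm.add_constant(X)).fit()
    print(f"{name}: R^2 = {model.rsquared:.3f}")
```

Under these assumptions, the subscore model fits noticeably better because the composite dilutes the two informative sections with two uninformative ones.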

So why don't colleges use just parts of the ACT, or pay attention to the parts rather than the composite score? "The answer is not clear," the authors write. "Personal conversations suggest that most admission officers are simply unaware of the difference in predictive validity across the tests and have limited time and resources to analyze the predictive power of its various components at their institution. An alternative explanation is that schools have a strong incentive -- perhaps due to highly publicized external rankings such as those compiled by U.S. News & World Report, which incorporate students’ entrance exam scores -- to admit students with a high ACT composite score, even if this score turns out to be unhelpful."

Late Monday, the ACT released a statement on the study: "ACT has decades of research supporting the predictive validity and application of the four ACT subject test scores and the composite score in college enrollment, performance and retention. We were not aware of the study in question until this morning, and we are in the process of reviewing its methodology and findings."

Jon Erickson, interim president of ACT's Education Division, made several points via an e-mail. He noted that the ACT is used "for multiple goals and purposes beyond just admissions or predicting overall student success." For example, it is used in course placement, and he said that the ACT has been "quite accurate" in that function.

Further, he said that "all four subject areas are important in college," so the ACT appropriately includes them.

He also defended the use of composite scores. "We believe the composite score represents the best overall picture of the student and perhaps is most easily accessible and useable by institutions." In addition, he said that colleges are correct not to weight different parts of the ACT, adding that "prediction models are more reliable at the composite level than the individual scale score level."

Robert Schaeffer, public education director of the National Center for Fair and Open Testing, a group that questions the use of standardized testing, said that the NBER paper "makes an interesting, technical point about how to weight ACT subscores optimally," but he said that he wished the analysis had gone further.

The scholars should have compared the validity of the ACT scores to the validity of relying on high school grades in college preparatory courses, "as is done at an ever-growing number of institutions," he said. Schaeffer said he would have preferred an analysis that looked "at much more fundamental questions about how to use standardized exams."

Validity questions have also been raised about the SAT. In 2008, after the College Board adopted a series of major changes in the SAT, it conducted studies of whether the new test was any more accurate at predicting college success. The board found no difference in predictive value, and found that differing confidence levels persisted in predicting the success of different racial and ethnic groups.
