
A new study suggests that grade inflation is on the rise in American high schools, and the news media has grabbed hold of the story. The same breed of people who think third graders getting participation trophies is a genuine problem are latching on to the report as more evidence of the decline of American education, with headlines like "Everyone Is Special" and "Is Our Children Learning?"

There are, however, good reasons to question the study written by Michael Hurwitz, a researcher at the College Board, and Jason Lee, a graduate student at the University of Georgia. Worse yet, some of the blame for the grade inflation they’re calling out should probably be placed on the College Board itself.

The study says that the average high school GPA has risen 0.11 points since 1998, and more than twice that at private, nonreligious schools. The percentage of seniors claiming to have an A average has also risen, from 39 percent to 47 percent. During the same time period, the average SAT score has dropped 24 points.

Those findings could suggest that colleges might not be able to rely on high school grades to distinguish between students or predict future performance as well as they have in the past -- not if almost half the students have A averages.

You don’t have to be an A student to guess the College Board’s solution to grade inflation.

The fact that the College Board stands to benefit from these findings is not reason enough to dismiss the claims. While one critic of standardized testing likened the findings to “the tobacco institute doing research on healthy lifestyles,” we need not engage in conspiracy theories in order to raise questions about the findings. In a story on the study, Hurwitz and Lee claimed the numbers speak for themselves, and a conversation with Hurwitz gave no reason to question how he and Lee carried out their work on the data. The problem is the data.

Hurwitz and Lee used two resources to track grades over the past two decades: high school transcripts collected by the U.S. Department of Education and surveys filled out by students (or their parents) during SAT registration.

The federal data, from the Secondary Longitudinal Studies Program, comprise around 15,000 transcripts from 2004 and 23,000 from 2013. Given the more than three million high school students who graduate each year, the data sets are not large, but a good sample could mitigate that concern. Unfortunately, the authors of the study that included 2013 transcripts themselves caution that “the representativeness of the school sample is lost after the base year (2009) as students disperse and some schools close or merge and new schools open.”

Luckily, millions more students are in the College Board's database of self-reported grades and GPAs. This sample may not be as representative as the government's, however, since the College Board collects this information only from students who take the SAT. According to the online portal the College Board created for high school counselors, 48 percent of the Class of 2017 took the SAT. That means almost two million students did not take the SAT and are left out of the study (many of them probably took the ACT instead). Add to that the regional nature of SAT versus ACT taking, and the connections among wealth, race and taking college entrance exams, and you get a report that cannot claim to describe all American high schools or students.

Still, among those students the College Board surveyed, the number of A’s and the average GPA rose, even as SAT scores dropped, and both the change and the discrepancy need to be accounted for.

With respect to the discrepancy, it is strange that Hurwitz and Lee use the SAT as a benchmark, given that the College Board dramatically overhauled the test in 2016 in order, as CEO David Coleman insisted, to make the test mirror the actual work students do in high school and beyond. (Scores from the new test were left out of Hurwitz and Lee’s study.) If the College Board itself acknowledged that the old test was inadequately aligned with schoolwork, why put so much trust in its ability to evaluate the grades received for that work? Would we confirm NCAA football rankings by having the teams play each other in basketball?

What about the 24-point drop in the average SAT score? To start with, it is relatively small. Even a 30-point drop on the old version of the exam would be equivalent to getting three to five more questions wrong out of the 121 math and reading questions. More significant, the growth in the number of test takers likely drove much of the decline in the average. Between 1998 and 2015, the number of SAT test takers grew by 35 percent. Much of that growth was likely among low-income and first-generation students who traditionally had not taken the exam but have increasingly done so, in part thanks to the efforts of Coleman and the College Board to expand opportunity. The growth in the number of people finishing high school in the past two decades also means more students at the academic margin could be taking the exam. Ironically, the drop in the SAT average is a good sign -- it means more students are graduating from high school and thinking about college.
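
To see the arithmetic behind that composition effect, here is a minimal sketch with made-up numbers -- they are illustrative only, not the College Board's -- showing how a pool that grows by 35 percent and adds somewhat lower-scoring newcomers ends up with a lower average even though no group scores worse than before.

    # Hypothetical figures only: how an expanding pool can lower the average
    # score even if no individual group performs worse than before.
    original_takers = 1_000_000   # earlier cohort (hypothetical)
    original_mean = 1020          # its mean on the old 1600-point scale (hypothetical)

    new_takers = 350_000          # a 35 percent larger pool
    new_takers_mean = 950         # mean for the added students (hypothetical)

    combined_mean = (original_takers * original_mean + new_takers * new_takers_mean) / (
        original_takers + new_takers
    )
    print(round(combined_mean))   # ~1002: an 18-point drop with no one scoring lower

The exact figures do not matter; whatever the real numbers are, a broader pool dilutes the average.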

You might expect more students at the margin taking the SAT to contribute lower grades and GPAs to the College Board data, but if they are doing so, those lower grades are not counterbalancing the higher ones. That assumes, however, that the self-reported grades and GPAs are accurate. Here, again, there are concerns.

When students or their parents register for the SAT, they are asked to provide additional information about themselves. It's optional, but it doesn't sound like it: they're told that creating a profile is part of the registration, that it can help them connect to colleges and find financial aid options, and that the communications they receive from colleges or scholarship organizations will be based on what they provide. According to Zach Goldberg, senior director of media relations at the College Board, about 10 percent of students do not report their grades or GPA. In the survey, students are asked to select their grade point average from a drop-down menu that pairs letter grades with a hundred-point scale (97 to 100 equals A-plus, 93 to 96 equals A, etc.) and then to "select your average course grade" throughout high school. The second scale excludes minus and plus grades and designates the letter grades in 10-point increments, so that a B-minus is worth exactly the same as a B-plus.

It is not hard to imagine that many of those who are less confident in their grades will skip the survey, nor is it hard to imagine students or parents, unclear both about what their grades are and about what happens with what they report, reporting their grades "aspirationally" to get more attention from those colleges and scholarship organizations they are told they'll be connecting to.

The College Board has no way to confirm the accuracy of the self-reported grades and GPAs in its survey, which means that the data always exist under a cloud of uncertainty. Aspirational grade reporting and uncertainty about grades should be conditions that held throughout the time period, however, and so shouldn't lead to the observed gradual rise in grades -- unless people became less honest or accurate in the past 20 years.

The thing is that grade reporting might in fact have become less accurate in the past two decades, because grades have become fuzzier, thanks in large part to the College Board’s AP program.

GPAs are conventionally calculated on a 4.0 scale (4.0 equals A, 3.7 equals A-minus, 3.3 equals B-plus, etc.). Since the 1960s, however, some high schools have given honors and AP course grades an extra bump through weighting, so that a B in AP psychology would be calculated as a 4.0 rather than a 3.0. One study has suggested that, because practices are inconsistent across the country, weighted GPAs are less predictive of college success than traditional GPAs. Many colleges unweight grades when they consider applications.
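
To make the mechanics concrete, here is a small sketch using one common but hypothetical weighting rule -- add a full point for AP or honors courses -- applied to an invented four-course transcript; actual school policies vary widely.

    # Hypothetical transcript showing how an "add one point for AP/honors"
    # weighting rule changes a GPA. Real weighting policies vary by school.
    GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7}

    transcript = [            # (course, grade, is_ap_or_honors)
        ("English 11",    "A",  False),
        ("AP Psychology", "B",  True),
        ("AP US History", "B+", True),
        ("Algebra II",    "A-", False),
    ]

    def gpa(courses, weighted=False):
        points = [GRADE_POINTS[grade] + (1.0 if weighted and ap else 0.0)
                  for _, grade, ap in courses]
        return sum(points) / len(points)

    print(f"unweighted GPA: {gpa(transcript):.2f}")                  # 3.50
    print(f"weighted GPA:   {gpa(transcript, weighted=True):.2f}")   # 4.00

Under this rule, the same transcript reads as a 3.50 unweighted and a 4.00 weighted -- two very different-looking students on paper.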

Weighting AP grades isn't new; what has changed is that the number of students enrolled in Advanced Placement classes has exploded in the last two decades. In 1998, the year with which Hurwitz and Lee began, more than 600,000 students took at least one AP exam. In 2015, around 2.6 million students did. With four times as many students getting weighted grades, is it any wonder that the average GPA has risen? This may not be the gradual change that has driven GPAs and A's up, but it is a gradual change with the power to do so, especially among a population limited to students who take the SAT. According to the study, both the weighted and the unweighted GPAs have increased over time.

Weighted grades could create a false inflation effect, because the College Board student questionnaire gives no clear direction on how to report them. It is quite possible, even likely, that a significant number of students who received a weighted B in an AP class report that grade as an A, or report their weighted rather than unweighted GPA, so that a student with an unweighted B average shows up in the data as an A student.

What this means is that, like those pharmaceutical companies selling laxatives to people addicted to the drugs they make, the College Board is holding out the SAT as an answer to a problem it helped create.

And then there is the question whether grade inflation matters, even if it is real. Elaine Allensworth, the Lewis-Sebring Director of the University of Chicago Consortium on School Research, said via email, “Grades can change without changes in test scores because they don't measure the same thing. Grades are much more comprehensive measures of achievement.” A 2013 paper on high school grade inflation suggested that, while average GPA has indeed risen in high schools, its “signaling power” to indicate academic achievement has not been diminished over the past two decades; otherwise we should have seen a decline in college completion rates among students with high GPAs. And yet highly selective colleges continue to admit and graduate, at very high rates, students from wealthy private schools with “inflated” GPAs. Additionally, plenty of evidence shows that high school GPA alone is a strong indicator of first-year college GPA.

None of this is to say that grade inflation is a fiction. If you speak to admissions officers at highly selective institutions, they are likely to tell you that they believe grade inflation is real, but those are the same admissions officers who, in concert with their administrations, have pushed admission standards higher and higher each year. With acceptance rates below 10 percent for most of the Ivy League, is it any wonder that the people applying there have high GPAs in high school and go on to have them in college? Nor is it a surprise that students at elite private schools -- which, unlike public schools, can pick their classes, and which, like elite colleges, have grown more selective over the past 20 years -- are getting more A's.

Until we have data we can trust, the question of grade inflation remains precisely that -- a question. And if the College Board wants to do something about it, it should look inward first.
