
On Monday, April 14, the American Association of University Professors released our annual report on faculty salaries. As anyone who has examined our report over the years knows, we provide literally thousands of pieces of data each year, from the specific (such as the average salary for a female associate professor at one specific institution among the 1,400 colleges and universities around the country that send us data) to the general (such as the average salary for all full professors nationally). Monday’s Inside Higher Ed article on the report raised a question regarding one of the most general figures. We reported that the increase in the overall average salary for a full-time faculty member for 2007-08 as compared to 2006-07 was 3.8 percent, and noted that this increase was less than the annual rate of inflation (4.1 percent). The controversy -- if it can be called that -- emerged when IHE asked us how the change in average salary for each of four reported ranks could be higher than the change in the overall average. We provided a detailed response, which IHE forwarded to the American Council on Education for comment. Although generally very complimentary about our annual report, ACE pronounced this particular conjunction of statistics “a curious result that stems from a flawed methodology.” Since in the world of quantitative analysis that’s a serious charge, I appreciate the opportunity to answer it.

In fact, our number is reliable and our methodology is not flawed. I’ll explain briefly why that is so (and post more detail here). But I also think it’s important to explain the difference between the two basic measures of faculty salary we report. For the calculation in question, we include only institutions that supplied data last year as well as this year. We calculate the average salary for each year and for each rank, and then the change between the averages. The percent change figures for each rank are independent of each other; the overall change is not simply the average of the figures for each rank. It reflects the fact that, although the set of institutions is held constant, the faculty mix changes. The number of full professors or assistant professors is not constant from year to year, and the proportions in each rank change. We found that the distribution of faculty across ranks shifted toward the lower-salaried ranks from one year to the next, which is one reason why the change in the overall average was less than the changes in averages for specific ranks. However, I think the fundamental issue here is not statistical methodology; it’s which questions are being asked and how best to answer them.
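
To make the arithmetic concrete, here is a minimal sketch of the composition effect, using invented numbers rather than our survey data. With just two ranks, every rank’s average can rise exactly 4.0 percent while the overall average barely moves, because the headcount shifts toward the lower-paid rank:

```python
# Hypothetical illustration of the composition effect. The salaries and
# headcounts below are invented for this example, not AAUP survey data.

def overall_average(counts, salaries):
    """Headcount-weighted average salary across ranks."""
    total_pay = sum(n * s for n, s in zip(counts, salaries))
    return total_pay / sum(counts)

# Year 1: 100 full professors and 100 assistant professors.
counts_y1, salaries_y1 = [100, 100], [100_000, 60_000]

# Year 2: each rank's average salary rises exactly 4.0 percent,
# but the mix shifts toward the lower-salaried rank.
counts_y2, salaries_y2 = [90, 120], [104_000, 62_400]

avg_y1 = overall_average(counts_y1, salaries_y1)  # 80,000.00
avg_y2 = overall_average(counts_y2, salaries_y2)  # ~80,228.57

print("Change within each rank: +4.0%")
print(f"Change in overall average: {100 * (avg_y2 / avg_y1 - 1):+.1f}%")  # +0.3%
```

Note that a simple “average of averages” across the two ranks would report +4.0 percent here, even though the overall average paid rose only 0.3 percent; the two calculations answer different questions.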

In our annual report, we provide answers to two basic questions. (That’s an oversimplification, but useful for understanding the present issue.) One question is “What was the change from last year in the average salary paid to full-time faculty members?” (That’s the 3.8 percent number.) The other question is “What was the average change in salary received by full-time faculty members who continued at their institution from the previous year?” (That number, overall, turns out to be 5.1 percent.) It may seem to some readers that the difference between the two questions is “just semantics,” but that’s not the case at all. Answering the two questions requires collecting different kinds of data, and provides different information for different audiences.
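
A toy example may make the distinction concrete; the salaries below are invented for illustration, not survey data. Every continuing faculty member can receive a 5 percent raise while the average salary paid actually falls, simply because a retiring senior colleague is replaced by a junior hire:

```python
# Hypothetical three-person department; all figures are invented.

# Year 1 roster: one senior member and two continuing colleagues.
year1 = {"senior": 120_000, "mid": 80_000, "junior": 70_000}

# Year 2: the senior member retires and is replaced by a new hire at
# $70,000, while both continuing members receive a 5 percent raise.
year2 = {"new_hire": 70_000, "mid": 84_000, "junior": 73_500}

# Question 1: change in the average salary paid (cost to the institution).
avg1 = sum(year1.values()) / len(year1)  # 90,000.00
avg2 = sum(year2.values()) / len(year2)  # ~75,833.33
print(f"Change in average salary paid: {100 * (avg2 / avg1 - 1):+.1f}%")  # -15.7%

# Question 2: average raise received by continuing faculty members.
continuing = ["mid", "junior"]
raises = [year2[name] / year1[name] - 1 for name in continuing]
print(f"Average continuing-faculty increase: {100 * sum(raises) / len(raises):+.1f}%")  # +5.0%
```

The turnover in this toy department is far larger than anything in the national data, but the direction of the effect is the same: the first measure tracks what institutions pay, the second tracks what continuing individuals receive.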

The controversy is over answers we provided to the first question, the change in average salary. Essentially, this is a question about cost to the institution: On average, across different types of institutions, what is the salary paid to a faculty member and how has that changed from last year? (We also provide the answer for specific categories of institutions, and for individual ranks.) The answer is important because it contributes to the ongoing discussion on how colleges and universities spend their money and what type of educational experience students have as a result. That was, in fact, the theme for this year’s report: “Where Are The Priorities?” Based on the evidence we assembled this year, spending priorities for higher education institutions are shifting away from employing tenured and tenure-track faculty, and we think that indicates a fundamental shift in the nature of higher education. The rapidly increasing amounts spent on salaries for presidents or football coaches may still be mostly symbolic when measured against a college or university’s entire budget. But when institutions simultaneously argue that there is no money for faculty raises, or meet increased enrollment by hiring more part-time or temporary faculty and requiring more teaching from graduate students and postdocs, the symbolism indicates priorities that are out of line with the core mission of teaching and research.

The answer to the second basic question in our report provides a different kind of information for a different audience. It’s for the individual faculty member who wants to be able to judge how he or she fared on compensation when compared to faculty members at other institutions. For this purpose, we collect and publish data on the “continuing faculty increase.” This is a unique feature of the AAUP report, not available anywhere else, and it requires us to collect different data from the institutions. The “continuing faculty increase” tells the faculty member who remained employed at the same institution what kind of raises similarly situated faculty members received for this year. (The figures do include the effect of both salary increases and promotions.) We provide this information by faculty rank, for individual institutions as well as in aggregate form, although not all institutions supply these data. And of course, we also provide average salaries by rank in our institutional appendices, so that individual faculty members can compare what they earn with what faculty at other specific colleges and universities earn. Our data are not perfect; they do not show every nuance or answer every specific question. (Many readers may not realize that we do not receive individual-level data on faculty members, but rather aggregate figures by rank and gender for each institution.) But they do provide answers to numerous questions about faculty compensation and where that fits in the spending priorities of the institutions.

Now, how does this relate to the controversy over this year’s report, and the question of “flawed methodology”? I believe that the issue arose because commentators were trying to answer the second question with figures designed to answer the first. We reported (in the condensed formulation of our press release): “Overall average salaries for full-time faculty rose 3.8 percent this year, the same as the increase reported last year. But with inflation at 4.1 percent for the year, the purchasing power of faculty salaries has declined for the third time in four years.” This is a statement about the change in the overall salary level; the comparison to the inflation rate indicates that faculty salaries are not rising as fast as other cost factors in higher education, a subject on which we have commented regularly in our annual reports. Unfortunately, it seems that some readers mistook this statement to mean that the average faculty member received a 3.8 percent raise this year, which is not the case. From what I have seen of the specific criticism leveled by ACE officials, their attempt to make our overall percent change figure “weighted” grew out of this mistaken interpretation, since their argument was that the average for any given faculty member should reflect the range of the averages for the various ranks. In the process, they calculated an “average of averages” that produced a higher figure but is not appropriate for this analysis.

As Saranna Thornton suggested in the Inside Higher Ed article, this apparent methodological dispute actually points to a legitimate (and important) research question. (Terrific! A topic for next year’s report already!) The result for this year, as in previous years, reflects a change in the overall composition of the faculty. The shift for this one-year period is relatively small, but it is part of a national trend we’ve been tracking for years: the increasing use of non-tenure-track appointments, even within the full-time faculty. We may also be seeing the ongoing consequence of the wave of faculty retirements we’ve been expecting for quite a while now, as more junior colleagues replace those who retire. It would be interesting to see how this has affected salaries over a longer time period.

As one IHE commenter pointed out, the compensation data we collect each year do not include pay rates for part-time faculty. As we have documented on numerous occasions, part-time faculty members receive wages that are not even close to proportional to those of their full-time counterparts, and the number of part-time faculty members continues to grow. Unfortunately, the collection of comprehensive data on part-time faculty pay would require a different survey process, since many institutions do not have accurate centralized records on part-time faculty pay. Even so, as part of our annual economic status report, we have analyzed available data on part-time faculty pay from the U.S. Department of Education’s National Study of Postsecondary Faculty (see our 2006 report). We also released separately the Contingent Faculty Index 2006, which provided the first-ever listing of counts of full-time and part-time contingent faculty (and graduate student employees) for individual institutions, by name. We hope to update that report later this year. Also in 2006, we released an updated set of our Recommended Institutional Regulations, including one that specifically calls for due process protections for part-time faculty in the hiring and renewal process. (It’s Number 13, available on our Web site.) And our 2003 policy statement “Contingent Appointments and the Academic Profession” builds on three decades of policy work in this area. I really do hope that we can find a way to collect and publish useful data on part-time faculty pay, since there is no more central issue in higher education today than the consequences of the increasing use of contingent faculty appointments. The AAUP has responded forcefully on this issue and will continue to do so.

Another IHE commenter argued that because AAUP is an advocacy organization, our research must be biased. Yes, we are an advocacy organization. We advocate for academic freedom as essential to the functioning of higher education in a democratic society. To ensure academic freedom, we advocate for tenure and “a sufficient degree of economic security to make the profession attractive to men and women of ability.” (The words are from 1940, but the principle remains vital today.) We advocate on the basis of principles, and we advocate for the profession as a whole. Yet the AAUP has recognized for decades the importance of collecting and presenting data objectively, since only in that way can we be assured that the arguments we advance rest on a sound empirical foundation. Our approach has been to collect data on the basis of specific, consistent criteria, and to publish them in sufficient detail to allow our members and all those interested in higher education to draw their own conclusions. In our annual reports, we provide context by describing aspects of the broader economic situation, of which faculty salaries are only one part. We do not create rankings and we do not make lists of “the best” institutions. We pretty much let the data speak for themselves.

We welcome questions, and we try to provide answers. And we look forward to continuing the urgent conversation about the fundamental transformation of the academic profession -- and what we can do about it.
