
Top Ph.D. students from the highest-ranked economics departments tend to be extremely productive researchers six years out of their programs. The rest of their cohorts? Not so much. Those are the findings of a report published in the Journal of Economic Perspectives.

“Only a small percentage of economics Ph.D.s manage to produce a creditable number of publications by their sixth year after graduation,” say John P. Conley, professor of economics at Vanderbilt University, and Ali Sina Önder, a lecturer at the University of Bayreuth in Germany and an affiliated researcher at Sweden’s Uppsala Center for Fiscal Studies, in their article, “The Research Productivity of New Ph.D.s in Economics: The Surprisingly High Non-Success of the Successful.”

“Even at the top five departments, it would be hard to argue that the bottom half of their students are successful in terms of academic research,” they say. “If the objective of graduate training in top-ranked departments is to produce successful research economists, then these graduate programs are largely failing.”

Among the top 10 economics departments, 60 percent of graduates fail to publish even 0.1 highest-level journal papers, roughly equivalent to one paper in a second-tier journal, within six years of graduation, according to the study. Some 70 percent of Ph.D. graduates in the top 30 departments fail to achieve that standard.

“This record would not be enough to count as ‘research-active’ in most departments, much less to result in tenure,” the authors say. “Even from the high-ranked departments, very few graduates prove to be stars.”

Conley and Önder base their conclusions on American Economic Association data for 14,299 economics Ph.D. recipients who graduated between 1986 and 2000 from 154 institutions in the U.S. and Canada. They cross-referenced that information with the EconLit database of 368,672 papers published in more than 1,000 peer-reviewed journals between 1985 and 2006 (this second window extends six years past the last graduating cohort, to reflect the approximate time it would take Ph.D.s, who presumably are working as professors or in other research capacities, to earn tenure).

Next, the authors took the top 30 economics departments, based on an existing scholarly productivity ranking, and combined all their graduates from 1986 to 2000 into a single sample, to look at total research productivity by the end of the sixth year after graduation. They did the same for graduates of all other departments as one combined group.

Because merely quantitative measures of productivity don’t reflect the quality of scholarship, Conley and Önder used existing journal quality indexes to convert each “raw” publication number into an “equivalent” number of publications in the highly ranked American Economic Review. For example, they say, 1.5 articles in the Journal of Political Economy, or two papers in The Review of Economic Studies -- all the way down to six to 10 papers in “high quality field journals” -- equal one Review paper. They used a similar equivalency system for co-authored papers.
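The quality-weighting scheme described above can be sketched as a simple lookup-and-sum. The weights below are illustrative only, inferred from the equivalencies quoted in the article (the authors' actual index covers more than 1,000 journals), and the per-author discount for co-authored papers is an assumption standing in for their similar, unspecified adjustment:

```python
# AER-equivalent value of one paper in each journal.
# Hypothetical subset; weights inferred from the article's examples.
JOURNAL_WEIGHTS = {
    "American Economic Review": 1.0,
    "Journal of Political Economy": 1 / 1.5,  # 1.5 JPE papers ~ 1 AER paper
    "Review of Economic Studies": 1 / 2.0,    # 2 RES papers ~ 1 AER paper
    "high-quality field journal": 1 / 8.0,    # 6-10 field papers ~ 1 AER paper (midpoint)
}

def aer_equivalents(publications):
    """Sum AER-equivalent credit over (journal, n_authors) records.

    Co-authored papers are split evenly among authors -- an assumed
    discount, not necessarily the authors' exact rule.
    """
    total = 0.0
    for journal, n_authors in publications:
        total += JOURNAL_WEIGHTS.get(journal, 0.0) / max(n_authors, 1)
    return total

# Example: one solo JPE article plus one two-author field-journal paper
record = [("Journal of Political Economy", 1),
          ("high-quality field journal", 2)]
print(round(aer_equivalents(record), 3))  # 1/1.5 + (1/8)/2
```

Under these assumed weights, a graduate's entire six-year record collapses to a single number comparable across departments, which is how figures like "0.04 highest-ranking journal papers" for the median Harvard graduate arise.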

In the end, they say, research productivity of Ph.D. graduates “drops off” rapidly, regardless of department rank, as class rank worsens. At Harvard University, for example, a student had to be in at least the 85th percentile of his or her class to publish even one Review-level journal article in six years. The median Harvard graduate publishes 0.04 highest-ranking journal papers. Going down in department rankings, a 95th-percentile graduate of a non-top-30 department had a stronger publication record than a 70th-percentile graduate of Harvard or several other top-tier programs.

The authors say their earlier research shows that research productivity in economics is “highly concentrated”: the top 1 percent of publishing research economists produce 13 percent of all research, adjusted for quality, and the top 20 percent produce 80 percent of it. But what’s most interesting about this recent analysis, they say, is that the same pattern appears to hold even within departments. So even though the top five or 10 programs “have their pick of applicants each year, they still produce only a few winners in the research game,” Conley and Önder conclude.

Over all, 80 percent of economics Ph.D.s publish about 0.2 highest-level-journal-equivalent papers within six years of graduation, and about 90 percent of Ph.D.s do not even publish half a paper.

Comparing departments by tier, the authors say that 20 percent of Ph.D.s from the top-tier (top 5) departments have at least one top-level paper six years out. By the third tier, though (the top 11-20 departments), the ratio drops to 10 percent. It drops to 1-2 percent for departments outside the top 20. And about 40 to 60 percent of Ph.D.s in each tier do not have any Review-level publications.

As far as “superstars” go, the authors say, just the top one or two graduates from the Massachusetts Institute of Technology or Harvard will typically meet the highest standard of 2.5 or more American Economic Review-level papers. Just the No. 1 graduates from other top programs will meet that standard each year, or every other year, the authors say.

Conley and Önder say that it’s possible for a researcher to become much more productive later in his or her career. But they say the data offer important lessons nonetheless.

For graduate students, they say, the message is that “becoming a successful research economist is difficult.” They advise those considering Ph.D. programs in economics to have a “Plan B for every stage of the journey.” The good news, though, is that going to a top department isn’t necessary to become a top researcher. Conley and Önder also say alternative academic careers are fulfilling, and students discover “they actually prefer these sort of jobs to the academic life.”

For current academics, especially those sitting on graduate admissions committees, the data pose important questions, the paper says. If Ph.D.s from competitive programs “did all the right things” before and during graduate school (or at least enough to graduate), why do they have “such unimpressive careers as researchers?” the authors ask. “Are we failing the students or are they failing us?”

Conley and Önder discuss several possible answers, such as that grades and tests aren't good predictors of ultimate research ability. Another theory is the “virtuous circle” in professional success: that if a new graduate gets a good job with good mentoring along the way, success will likely follow.

A third possibility -- one the authors say is supported by the precipitous drop-off in publication rates among non-top students -- is that students and professors are playing a “positional game” within departments. In this scenario, the paper says, “faculty will attempt to identify the top students in an entering class, give them more time and attention, and suggest better projects to them. In turn, the students identified in this way work harder to maintain their position.”

In any case, Conley and Önder say the data suggest that hiring committees should look outside top departments for new faculty, but only at top graduates of lower-ranked programs. And, they say, there appears to be “substantial room to improve either our profession’s mechanism for selecting who enters Ph.D. programs in economics, or our method of training economics Ph.D.s, or both.”

In an email interview, Önder said he’d gotten a variety of reactions to the paper since it was first published this summer (it was given new life this month by coverage in The Economist, under the title “Lazy Graduate Students?”). But two responses were most common. First, “I heard from a lot of people that they have been expecting this kind of results,” he said. “That is, our paper seems to provide an empirical proof for what people have been suspecting based on anecdotal evidence so far.”

A second, more critical reaction is that the paper “only focus on academic publications, but there are a lot of Ph.D.s that take careers where they are not asked to publish and yet perform very high skill intensive tasks (and earn far much more than academics!),” he said. “In that respect, some people argue that we could have alternative measures for ‘success’ after Ph.D.”

Önder said he agreed with that point, but saw academic publications as providing “the most straightforward measure” of success, and said he believed the results would hold up across a variety of disciplines.

“I believe that Ph.D. education is almost exclusively targeted at training researchers,” he added. “Since Ph.D. programs are run with this motivation, I think it is only fair to measure success of a Ph.D. program (not of graduates, but of the institution that trains those Ph.D.s!) by how many of its graduates turn into scholars.”

Önder also highlighted the finding that hiring committees would do well to look beyond the top-tier programs for new faculty hires.

“Many places, including European institutions hiring from the U.S. market, are interested in getting Ph.D.s from top departments, because they probably have ‘marketing’ purposes, or they just don’t know how the game works,” he said. “We show that a good hiring strategy (if it is really the academic productivity that employers care about) is to hire the top of the class from any department -- not anyone from a top department.”

Karen Kelsky, a former academic who advises graduate students and Ph.D.s on the academic job market and runs the blog The Professor Is In, said a major fault of the study is that it doesn’t consider the poor academic job market as a reason why some Ph.D.s never publish.

“Basically this seems to be a function of Ph.D. employment," she said. "First off, in a world of 75 percent adjunctification, Ph.D.s are not getting the elite tenure-track positions that permit them the time and funding to do peer-reviewed publishing at a high rate of speed. And then many economics Ph.D.s go into industry, where they also are less likely to publish research. In either case, to judge productivity based on peer-reviewed publishing without reference to employment tracks makes no sense."

David Autor, associate head of MIT's economics department and editor of Journal of Economic Perspectives, in which the study first appeared, also said the research didn't take into account the many economics Ph.D.s who willingly seek work outside academe -- in consulting, government or elsewhere (despite, in many instances, their advisers' attempts to dissuade them, he added). Still, he said, there seemed something familiar and intriguing in the primary finding of the study: that it's nearly impossible to tell which graduate students will become truly successful researchers. 

"The knack for research is really different than the knack for being a really good student," Autor said. "One is to a large extent about following directions -- you're given assignments and you have to manage your time well. The other is about developing original research questions and finding a way to answer them, and then packaging and presenting it all in a way that is of interest to others." 

Even though MIT admits only the most stellar applicants each year, Autor said, they're all still relatively "homogenous," and only time tells how they'll develop as researchers. It's kind of like Hollywood, he added, in the sense that the "only way to know if somebody's going to be a movie star is to do a movie with them."

Önder said he understood concerns that comparing research output of Ph.D.s in different jobs is like comparing “apples and bananas.” But, he said, “there’s probably no way to provide a perfect control for that anyways.” He said focusing on the top 20 percentile of students across all departments might provide a subsample of Ph.D.s “that have more or less comparable academic jobs.”

Number of American Economic Review-Equivalent Publications of Graduating Cohorts from 1986 to 2000, Six Years After Earning Ph.D.

| Top 10 Departments / Class Rank (Percentile) | 99th | 95th | 90th | 85th | 80th | 75th | 70th | 60th |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Harvard | 4.31 | 2.36 | 1.47 | 1.04 | 0.71 | 0.41 | 0.30 | 0.12 |
| U. Chicago | 2.88 | 1.71 | 1.04 | 0.72 | 0.51 | 0.33 | 0.19 | 0.06 |
| U. Penn | 3.17 | 1.52 | 1.01 | 0.60 | 0.40 | 0.27 | 0.22 | 0.06 |
| Stanford | 3.43 | 1.58 | 1.02 | 0.67 | 0.50 | 0.33 | 0.23 | 0.08 |
| MIT | 4.73 | 2.87 | 1.66 | 1.24 | 0.83 | 0.64 | 0.48 | 0.20 |
| UC Berkeley | 2.37 | 1.08 | 0.55 | 0.35 | 0.20 | 0.13 | 0.08 | 0.04 |
| Northwestern | 2.96 | 1.92 | 1.15 | 0.93 | 0.61 | 0.47 | 0.30 | 0.14 |
| Yale | 3.78 | 2.15 | 1.22 | 0.83 | 0.57 | 0.39 | 0.19 | 0.08 |
| U. Michigan (Ann Arbor) | 1.85 | 0.77 | 0.48 | 0.29 | 0.17 | 0.09 | 0.05 | 0.02 |
| Columbia | 2.90 | 1.15 | 0.62 | 0.34 | 0.17 | 0.10 | 0.06 | 0.01 |
| Non-top-30 departments (for comparison) | 1.05 | 0.31 | 0.12 | 0.06 | 0.04 | 0.02 | 0.01 | 0 |
