
Is social science research trustworthy? The economist Noah Smith raises this question in a recent piece entitled “How many of our ‘facts’ about society, health, and the economy are fake?”

As he points out, many “facts” uncovered by social science research have been called into question, including recent claims about the rise in maternal mortality, the fall in geographic mobility and the rise in teen suicide.

It turns out, for example, that the supposed surge in maternal death rates resulted from the adoption of a new way of collecting the data. In fact, maternal mortality has been falling, not rising.

As Smith observes: “When some researchers found that the supposed rise in U.S. maternal mortality was fake, a top OBGYN blasted them for revealing their results, because the truth could result in less money and attention for his field.”

Another claim, that Americans’ geographical mobility is declining precipitously, is likely false. Tax records indicate that annual rates of interstate migration may actually have increased over time.

How about the claim that teenagers’ mental health is deteriorating? There are reasons to think that changes in measurement may be responsible for the reported increase in unhappiness, suicide, suicide attempts, suicidal ideation, reported loneliness, and diagnosed depression. This echoes the earlier discovery that the supposedly sharp increase in childhood obesity was partly an artifact of changes in the definition of body-mass index.

As Smith puts this: “We’re finding out that some of the social science ‘facts’ that we ‘knew’ in the 2010s were actually fiction. And even more alarmingly, we’re finding that some experts were fine with that.”

I should stress: Smith does not attribute this problem primarily to dishonesty, though there are, of course, a disturbing number of instances of academic fraud. The roots of the problem lie elsewhere: in a distorted system of incentives, in groupthink, in inadequacies within the system of peer review, and in the complexities of social behavior. Unless we address these problems, the credibility of social science research will suffer severely.

There are, certainly, a significant number of recent examples of irreproducible or disproven social science claims. These include the “power pose” research, which claimed that adopting high-power poses could increase testosterone, decrease cortisol and improve performance. Subsequent studies failed to replicate these findings, casting doubt on the robustness of the original results.

Then, there’s the theory of ego depletion, which posits that self-control is a limited resource that can be exhausted. This theory has faced significant scrutiny. Large-scale replication attempts have produced mixed results, suggesting that the effect may not be as significant as initially thought.

Perhaps the most visible public controversy surrounds the Stanford Prison Experiment, which concluded that situational factors heavily influence behavior. Not only has this experiment been criticized for methodological flaws and ethical concerns, but recent analyses and revelations about the extent of participant manipulation further undermined its credibility.

Two relatively recent social science claims that have had enormous impact on public thinking are also turning out to be more problematic than press coverage would suggest.

Claims about a sharp increase in deaths due to despair—resulting from suicide, drug overdoses and alcohol-related liver disease—have garnered a great deal of public attention and have shaped policy debates. However, these claims have also faced serious criticisms.

First of all, the term “deaths of despair” is somewhat ambiguous and lacks a precise definition, making it difficult to consistently categorize and measure these deaths across different studies and regions. There are also concerns about inconsistencies in how deaths of despair are measured and reported. Differences in data collection methods and reporting standards across states and over time can affect the reliability and comparability of the data. Nor is the long-term trajectory of these deaths clear. Without reliable longitudinal studies, short-term fluctuations may well exaggerate their increasing prevalence.

The underlying cause of these deaths, too, is subject to much debate. What is the relative weight of economic distress, such as job loss or economic inequality, versus sociocultural and psychological factors, including family instability, social isolation, fraying community ties and shifting societal attitudes toward substance use? Also, are these deaths concentrated in certain regions or among certain demographics or generations? Generalizing the findings to the entire country or to all demographic groups can be misleading.

Similar concerns have been raised about Robert Putnam’s claims in Bowling Alone, e.g., the decline in social capital, the eclipse of community, and the drift away from social and civic groups to a more individualistic, more privatized way of life. One objection is that Professor Putnam’s declensionist perspective is riddled with nostalgia—overidealizing the social cohesion of the mid-20th century, which was, in part, a reaction to highly specific circumstances, including the suffering wrought by the Great Depression and the upheavals of World War II—while neglecting that era’s inequalities and exclusions.

Also, Professor Putnam’s heavy reliance on quantitative data, such as surveys and statistics, might well overlook qualitative aspects of social capital, such as the quality and depth of social interactions, and underestimate the structural and cultural factors that shape behavior.

Then, too, traditional measures of social capital, such as membership in civic organizations, may not capture new forms of social engagement that have emerged, particularly those facilitated by digital technologies and social media. Putnam’s critics contend that social capital may well have evolved rather than declined. Nor does his approach sufficiently consider whether trends in social capital vary across different demographic groups and socio-economic classes. By overlooking these variables, his conclusions may well be overstated.

It’s no surprise that social science findings might turn out to be less reliable than those in the natural sciences. After all, human behavior is highly complex and influenced by a multitude of factors, making it difficult to control for all variables in social science research. This complexity can lead to less consistent and replicable findings compared to natural sciences, where variables can often be more tightly controlled.

Also, social science research often relies on self-reported data, surveys and observational studies, which can introduce biases and errors. Natural sciences, in contrast, frequently use more controlled experimental methods.

Then, there’s publication bias, or the tendency to publish positive findings over null or negative results. This issue is prevalent in both social and natural sciences, but may have a more pronounced impact in social sciences due to smaller effect sizes and greater variability. This risk is compounded by media coverage that often oversimplifies or exaggerates findings to attract attention.

Social science research may also be more prone to political bias due to the nature of research topics, which often intersect with current political issues and public debates. This can lead to the framing of research questions and interpretation of results in ways that align with specific ideological perspectives.

Ironically, among the best ways to combat errors in social science research is to draw upon a key social science insight: the study of cognitive biases, the perceptual distortions that can warp judgments, interpretations and reasoning and produce inaccurate or subjective research results.

Common types of cognitive biases include:

  • Anchoring bias: The tendency to rely too heavily on the first piece of information encountered (the “anchor”) when making decisions.
  • Availability heuristic: Overestimating the importance of information that is readily available, often because it is recent or emotionally charged.
  • Bandwagon effect: The tendency to adopt certain behaviors or beliefs because many others do the same.
  • Confirmation bias: The tendency to search for, interpret and remember information in a way that confirms one’s preconceptions.
  • Hindsight bias: The tendency to see events as having been predictable after they have already occurred.
  • Overconfidence bias: The tendency to overestimate one’s abilities, knowledge or predictions.
  • Publication bias: The tendency for journals to publish positive findings over null or negative results, leading to a skewed representation of research outcomes in the literature.
  • Selection bias: The bias introduced by the selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved.
  • Self-serving bias: The tendency to attribute positive outcomes to one’s own actions and negative outcomes to external factors.
  • Social desirability bias: The tendency of respondents to answer questions in a manner that will be viewed favorably by others.
  • Sunk cost fallacy: The inclination to continue an endeavor once an investment in money, effort or time has been made, even if it no longer seems viable.
  • Survivorship bias: The logical error of concentrating on the people or things that “survived” some process and overlooking those that did not because of their lack of visibility.

To this list, I’d add a few others:

  1. Nostalgia for an Idealized or Stereotyped Past That Never Was: Romanticizing or idealizing a past era, ignoring the complexities and problems of that time.
  2. Politics and Policy Preferences: Allowing one’s political beliefs and policy preferences to shape one’s studies, consciously or unconsciously influencing hypotheses, methodologies and interpretations.
  3. The Academic Penchant for Novelty and Originality: This is the incentive to publish novel, sensational, groundbreaking and original findings.
  4. Incentives to Exaggerate: Overstating the significance or implications of findings to gain attention, funding or career advancement.
  5. The Quest for a Monocausal Explanation: The tendency toward reductionism, simplifying complex social phenomena by attributing them to a single cause, overlooking the various economic, psychological and social factors that contribute to a particular outcome.

To mitigate these biases, researchers can adopt several strategies:

  • Blind and double-blind studies, where neither participants nor researchers know the group assignments, can reduce biases related to expectations and treatment effects.
  • Diverse teams with varied perspectives can help identify and challenge biases.
  • Open science practices, including sharing data, methodologies and results transparently, can facilitate scrutiny and independent verification of findings.
  • Pre-registration of studies, including hypotheses, methodologies and analysis plans, can help prevent selective reporting and confirmation bias.
  • Replication studies can validate findings and reduce the impact of anomalous results.

Only by acknowledging and addressing these biases can social science research enhance its credibility and reliability.

Another way to combat errors in social-science research is to apply critical-thinking strategies that are equally applicable to humanities research.

One such strategy involves contextualization: Understanding the context in which data is collected and interpreted is crucial. Ask yourself which factors—historical, cultural, social and economic—might influence the research subject and the research conclusions.

Another critical-thinking strategy is to ensure clear and precise definitions of key terms and concepts to avoid ambiguity. For example, in studies of poverty, a researcher must make clear whether they are referring to absolute poverty, relative poverty or a specific poverty threshold.

Evaluating the quality, source and reliability of data is yet another critical-thinking strategy. Specifically, this means to …

  • scrutinize the nature and sources of the evidence and the data collection methods for biases or inaccuracies;
  • ensure that measurement tools and instruments are valid and reliable; and
  • look closely at such factors as sample size, control groups, and randomization.

Also, be sure to distinguish between correlation and causation; be cautious about making broad generalizations; and recognize the limits of a study and the specific contexts in which the results apply.

Above all, strive for objectivity by considering multiple perspectives, acknowledging potential biases and being open to alternative explanations.

Let me conclude with a plea: Social scientists will have much more influence by being right than by being on the right side.

We mustn’t confuse serious scholarly research with legal briefs, which serve a very different purpose and adhere to very different standards of evidence.

The purpose of scholarly research is to advance knowledge, explore theories, and contribute to academic discourse. Its goal is to provide in-depth analysis, critique or synthesis of a particular topic, with an aim of presenting new insights or perspectives. A legal brief, in contrast, seeks to persuade. It presents arguments and interpretations in ways designed to support a particular position. In academic terms, it’s thesis-driven, intended to convince its readers of the merits of its argument and the correctness of its position.

Serious scholarship employs rigorous research methods, including empirical data collection, qualitative analysis and theoretical exploration, and generally rests upon an extensive critical analysis of existing research. Its approach is objective, balanced and methodical, while recognizing that its conclusions are provisional and subject to revision. Scholars are expected to critically evaluate all evidence, acknowledge counterarguments and limitations of their work, and consider multiple perspectives. Explicit acknowledgment of potential biases and limitations is encouraged.

Let me encourage you, as scholars, to seek truth in a world of advocacy. Unlike a legal brief, which presents arguments in the most compelling way possible for the benefit of a client or cause, emphasizing favorable facts while downplaying unfavorable ones, the academic’s creed is to pursue truth, understanding and insight, wherever that leads.

Even as social scientists recognize that their scholarship has policy implications, remember that the basic goal is to advance understanding. That’s a higher aim than the more limited goal of advocacy. Factual accuracy and unbiased research are more important than allegiance to any group or cause.

Pursue insight, not influence.

Steven Mintz is professor of history at the University of Texas at Austin and the author, most recently, of The Learning-Centered University: Making College a More Developmental, Transformational, and Equitable Experience.
