While I spend much of my time on administration, my scholarly field of public opinion shifts so radically, and in such interesting ways, that it still holds my rapt attention. A new generation of opinion scholars is asking the big questions about public discourse and democratic representation, and having grown up with the Internet gives them the sensibility to understand political communication in a new age. One thing they rarely address, however, is how changes in American public opinion measurement and expression might inform analysis of public opinion on their own campuses. Does the evolving nature of public opinion, and our academic approach to it, have anything to do with the assessment of campus or student opinion? To my mind, much of what public opinion scholars have learned has direct implications for campus citizens and leaders.

Public opinion, as a concept, was born in the ancient Greek democracies, with an acceleration of interest in the heady days of the Enlightenment. While citizens have always had opinions, recognition that those opinions might matter in affairs of state became vital with the French and American revolutions on the horizon. In the 18th century, particularly in France, an abundance of tools for expressing and assessing public opinion (political tracts, oratory, theatrical productions, legal cases, interpersonal networks for gossip) was recognized by kings and then by our founding fathers. All of these mechanisms were useful in measuring something, although what the broader, underlying sentiment might be remained in the eye of the beholder. History tells us, eventually, who was right about public opinion in a place and time, who was wrong, and where it didn't matter at all. But in any given moment, political actors and leaders do their best, since their lives and agendas depend on it.

With the rise of positivism, the penny press, and competitive general elections in the American 19th century, straw polls began to appear with great regularity in newspapers and political speeches. The quantification of public opinion was a natural outcome of a growing scientific culture, although early polls were largely unsystematic and infused with partisanship. From the 1830s until the 1930s, newspapers would often publish what I have called "people's polls," in which a quilting bee, union, or fraternity would poll itself and send the results to papers like the Chicago Tribune or The New York Times. Journalists conducted their own straw polls as well, canvassing people in taverns and train cars as they traveled a vast new nation, often asking for a simple show of hands: "How many for Lincoln and how many for Douglas?" (Interestingly, women were included in published straw polls, even though they could not vote in presidential elections. I surmise that they were included because women were, as they still are, the primary consumers in a household; newspapers needed women to read the paper so that they would see product advertising.)

The 20th-century advent of polling and survey research is well studied: a rocky but largely linear progression toward the dominance of quantitative polling. There were some embarrassments for pollsters, notably the presidential mis-predictions of Landon over Roosevelt in 1936 and then Dewey over Truman in 1948. But the survey industry, so closely tied to market research, became a wildly successful one. So successful, in fact, that journalists and policymakers came to see polling as the primary source of "scientific" opinion measurement. Public opinion became synonymous with polls, and journalists in particular bandied poll results about with an authority that made (and still makes) most social scientists cringe.

But things have changed, and abruptly so, over the past few years. While many social scientists are reluctant to give up the survey as a means for understanding public opinion, we are reaching the point where the survey looks like a bizarre dinosaur. We have wonderful new means of understanding public opinion, far more nuanced than anything a poll could capture. People write blogs, tweet, post comments to news sites, produce videos for YouTube, and use an incredibly diverse and complex set of communication tools to make their opinions known. Are these expressions scientific? No, but neither were most polls. Surveys were always at the mercy of anonymity and question framing. If people told a pollster something in private, were they ever willing to act on that opinion? How much did they care? Was that the opinion of a moment or a closely held belief?

Thousands of researchers and articles explore these topics in the general field of public opinion research, and we will continue to cling to polls to answer some questions. But to believe that polls are superior to the wealth of complex public opinion expression now available whenever we turn on a computer is to be blind to our communication universe. We may not be able to measure with great accuracy the influence of Daily Kos or Fox News on public opinion, or vice versa. But the days of relying on a poll of a thousand random people, and their responses to a short set of multiple-choice questions, for anything particularly useful have slipped away.

What does this profound change in the technological and cultural environments for public opinion expression and measurement mean for campuses? First, we should probably stop leaning on surveys of students, faculty, or staff. I have designed many such surveys, on many subjects, from faculty exit interviews to student opinion polls about civility and tolerance. These have ranged from small, targeted surveys to large random ones, but in any case, they seem primitive to me at this point. They are less costly than ever, thanks to tools like Survey Monkey and cheap computing, but that doesn't mean they are worthwhile.

Surveys constrain opinion to what you ask, and open-ended queries tend to be difficult to work with. If I survey women faculty about gender discrimination on campus, and the results turn up just a few comments in the open-ended sections, with some heart-wrenching stories of harassment or discrimination, what have I found? I have been in this position more times than I can count: the quantitative results reveal no systematic problems, but the qualitative data hint at something very bad.

This doesn't mean you can dismiss all surveying; just keep doing the surveys you can actually triangulate with contextual data. For example, while those with an ax to grind will use end-of-semester teacher evaluations to trash a professor unfairly, these are still important surveys. We just need to balance them with our other evaluation techniques: observation, review of syllabuses and assignments, learning outcomes, and the like. Of all these techniques, I find student surveys the least instructive, but they sometimes yield valuable data.

If not surveys of students, faculty, and staff, then what? How should we navigate the flood of opinion on any issue, and locate "public opinion," or at least the general popular sentiment?

Introduce more sophisticated notions of "data." I wish we could outlaw the bizarre and worthless phrase "data-driven": it has actually become dangerous, as it values quantitative data (no matter how bad) over other forms of data. And, as any sociology major knows, all data are social constructions in any case, formed by the biases of collectors and the questions asked. "Data-driven" is an odd legitimation of all things quantitative, when we all know darned well that some data are better than others. I have seen tremendous damage done by the broadcast of lousy data, just because they were numbers and some soul bothered to collect them. In fact, bad data are rampant on campuses, because a lot of people are in a hurry.

Amid the severe budget-cutting of late, the most criminal reduction has been to institutional research offices, where you actually find the people who can discern good, useful campus data from dubious, incomplete data. I hope when the economy improves, these offices can be re-staffed. But in the meantime, beware survey numbers and other deceptively easy quantitative measures of public opinion (e.g., the number of hits on a website or visits to the library). If you believe in proper, meaningful data, then figure out how not to lose the terrific data all around you: alumni letters, conversation at faculty and staff meetings, insights from your campus police, depositions in lawsuits, the nature of the departures of key faculty, attendance at events, the types of events being held, and so on. These data, messy as they are, are often far richer in useful information than typical systematic quantitative data.

Newspaper coverage is not synonymous with public opinion. One of the most difficult psychological challenges for people who love a campus is the negativity of their local press. It did not start with the economic downturn, but the downturn has worsened it, as universities look like wealthy enclaves relative to the broader community. You may have faculty departing, staff reductions in critical areas, programs cut, and scary deferred maintenance, but a campus still looks like a happy park to most people. And it is the case that colleges are upbeat places, no matter the economy, because young people lift the spirit of any organization. But even if morale on your campus is high, and people are going about the business of teaching, learning, and research, inevitably there are cynical or angry people who find local journalists and editorialists, and vice versa.

It is a way of life in public higher education, and we need to do a better job of separating the reality of campus opinion from the partial view of the media covering us. Bad news, or isolated problems on a large campus, may sell traditional papers, a function of the challenge facing journalism more generally as the profession searches for a new paradigm. Until American journalists figure out how to improve their business model (which could take decades, should they even survive) and hire full-time higher education reporters, take the measurement of public opinion into your own hands.

The Internet, on the other hand, is likely closer to meaningful public opinion. Even if it is overwhelming in its complexity, and less systematic than the results of a survey, the opinions found on the Internet, anonymous and attributed alike, are more interesting and important. They are, in most cases, more organic than anything you could collect with a survey. Your campus is portrayed in wonderful ways, sometimes orchestrated by your PR department, sometimes not, as your students and faculty do amazing things. They discover, create artwork, put on performances, give compelling lectures. But there are also less benign links going viral, or at least garnering a lot of attention, and monitoring these is a way to learn who is unhappy and whether that unhappiness has any validity. For example, I highly recommend putting your college's name into YouTube every few days to see what is being communicated and why. In the best case, you learn about fabulous people and initiatives, or discover concerns about the community. But whatever is out there, you should probably have a look. It's an expensive use of time for your vice presidents, deans, and senior faculty, but a fine task for student assistants, who know how to find things better than we do in any case.
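A student assistant could even automate those periodic searches. Below is one minimal sketch in Python, assuming you have a key for Google's YouTube Data API v3; the institution name and key shown are hypothetical placeholders, and the script simply lists the newest matching videos rather than judging their tone.

```python
import requests

API_KEY = "YOUR_API_KEY"            # hypothetical placeholder; supply your own key
QUERY = "Example State University"  # hypothetical institution name

def recent_videos(query, max_results=10):
    """Return the newest YouTube videos matching the query."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={
            "part": "snippet",
            "q": query,
            "type": "video",
            "order": "date",        # newest first
            "maxResults": max_results,
            "key": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    videos = []
    for item in resp.json().get("items", []):
        snippet = item["snippet"]
        videos.append({
            "title": snippet["title"],
            "channel": snippet["channelTitle"],
            "published": snippet["publishedAt"],
            "url": "https://www.youtube.com/watch?v=" + item["id"]["videoId"],
        })
    return videos

if __name__ == "__main__":
    for v in recent_videos(QUERY):
        print(v["published"], v["channel"], "-", v["title"])
        print("  " + v["url"])
```

Run every few days (by hand or on a schedule), a script like this turns the "search YouTube for your name" habit into a standing digest; the human judgment about what the videos mean, of course, remains the hard part.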

Focus Groups. In my field, as opposed to market research, we largely dismiss focus groups. Political scientists make very little use of tools like this, since participants are not chosen at random, are paid, and the groups are seen as too small for generalization. On a campus, however, your "population" is far smaller than the U.S. population, so conducting a decent number of focus groups can be very instructive, and the results can in fact represent huge swaths of your campus. You collect textured, rich data, as compared with numbers that are hard to parse (I personally have never heard data "speak"!). Instead of yet more committee work, ask one of your experienced marketing professors to train a few other faculty, trusted staff, and strong students to run focus groups, and have them "on call." That way, when an issue arises, you have a group of excellent facilitators who can be asked to lead some discussions and report the feedback to you. I find that most faculty, staff, and students enjoy this and will be happy to help, as long as it does not become too arduous. Just train enough people that you can spread the workload around.

***

Public opinion expression and assessment are among the most glorious and the most challenging aspects of democratic practice, in America or anywhere else. The "Arab Spring," for example, with its incredible urge toward self-governance and participatory citizenship, has been deeply moving. But as the inspiration and emotion fade, new democracies face the same challenge we have on any campus: What do people think, how strongly, and why? These measurement issues do not defy social science. Yet if we fail to call upon all of the tools and talented people in our midst, we'll never get a handle on the real opinions of our colleagues or our students.
