Faculty members tend to be skeptical about attempts to go beyond grading with standardized definitions and measures of what students should learn -- the so-called student learning outcomes accreditors require colleges to collect.

The wariness of professors is often well founded, the authors of the influential book Academically Adrift argue in a new book, because faculty members often haven’t been at the table when these measures and related assessments are being developed.

“There’s good reason for a lot of the skepticism and discontent,” said Richard Arum, a professor of sociology and education at New York University and director of the Social Science Research Council’s Education Research Program. He said many faculty members view learning outcomes as a form of “sham compliance” for colleges with accreditors.

So does Bob Shireman, a senior fellow at the Century Foundation and former official at the U.S. Department of Education. In a recent essay, Shireman called learning outcomes “worthless bean-counting and cataloging exercises that give faculty members every reason to ignore or reject the approach.”

The Measuring College Learning project, which Arum has helped lead, seeks to change that dynamic by putting faculty members in charge of determining how to measure learning in six academic disciplines. After more than two years of work, the project has defined the “fundamental concepts and competencies society demands from today's college graduates” in biology, business, communication, economics, history and sociology.

The project’s initial results are included in a newly released book by Arum, Josipa Roksa, an associate professor of sociology and education at the University of Virginia, and Amanda Cook, a program manager at the American Association of State Colleges and Universities.

The Social Science Research Council has overseen the Measuring College Learning project. (Arum and Roksa were Academically Adrift’s coauthors and Cook previously worked for the council.) The Bill & Melinda Gates Foundation and the Teagle Foundation provided funding.

Besides being controversial with professors and some faculty unions, learning outcomes are hard to devise, the authors write in their introduction to the discipline-specific white papers that are featured in the book, even compared to other common metrics aimed at holding colleges and academic programs accountable.

“Measuring graduation rates and early-career earnings, though not without challenges, is much easier than measuring student learning, given the absence of agreed upon measures,” according to the book, which is titled Improving Quality in American Higher Education: Learning Outcomes and Assessments for the 21st Century. Jossey-Bass is the publisher.

And those challenges can only be tackled by the professoriate, say Arum and his coauthors.

“The fact that conversations about higher education outcomes and how to measure them are fraught with difficulties makes it that much more important for higher education faculty to contribute, and indeed, lead the way, especially when it comes to defining and measuring what students should be learning,” they write.

Six Frameworks

To come up with learning outcomes in the selected six disciplines, which collectively account for more than 35 percent of undergraduate student majors in the U.S., the Measuring College Learning project began by contacting disciplinary associations in each field. Those groups helped select 10 to 15 faculty members to lead the work -- a total of 70 professors participated.

New America, a D.C.-based think tank, earlier this year released a report on assessment by Fredrik DeBoer, a writing instructor at Purdue University. DeBoer said he likes the project’s faculty-led approach.

“Going through professional associations is the right way to do it,” he said.

The project sought to have each panel of experts represent a broad range of colleges, geographic locations and sub-disciplines. The majority work at four-year institutions, but some are at community colleges or academic associations. And most have worked on other faculty-led efforts to measure learning.

“They are people who are doing this work,” said Arum, “and have been for decades.”

The faculty panels tried to identify “essential concepts,” meaning complex ideas, theoretical understandings and ways of thinking central to each discipline. They also came up with “essential competencies,” which the book said are “disciplinary practices and skills necessary to engage effectively in the discipline.”

The resulting concepts and competencies are not intended to be fixed, universal or comprehensive, the book said, calling them a “reasonable and productive framework.”

In sociology, for example, one of the five essential concepts is the “sociological eye,” which means students “will recognize key theoretical frameworks and assumptions upon which the discipline is founded and differentiated from other social sciences.” That underpinning, the book said, includes founding theoretical traditions (Marx, Weber, Durkheim, Mead), a critique of rationality as the explanation for human behavior, and an understanding of how social forces affect individuals.

Socialization is another essential concept, which is defined as students understanding the relationship between self and society, and how the self is socially constructed and maintained at multiple levels.

On the competency side, the panel said undergraduates in sociology should be able to apply scientific principles to understand the social world, evaluate the quality of social scientific data and use sociological knowledge to inform policy debates and promote understanding, among other essential competencies (there are six total).

Examples of the Project's Essential Concepts and Competencies

Economics: Essential Competency 3

Students should be able to:

Work with mathematical formalizations of economic models (e.g., graphs, equations) and perform mathematical operations (e.g., basic calculus)

Recognize that an observed correlation is not evidence of causation and explain why

Explain the design and results of laboratory and field experiments (i.e., randomized controlled trials)

Explain the conduct, results and limitations of basic econometrics (e.g., hypothesis testing, interpreting ordinary least squares estimates, omitted variable, included variable and selection biases).

Communication: Concept 2

A communications graduate should know and understand:

Relationality

Communication is inherently transactional and collaborative; as a human behavior, to communicate is to engage with others, share meaning, make arguments, speak and listen, and transact together in a state of consubstantiality. A fundamental concept, then, of communication is relationality, or how and why relationships form and are developed among communicating individuals, groups and audiences.

Biology: Competency 6, Concept 4

Students should be able to:

Appreciate and apply the interdisciplinary nature of science. Under the essential concept of pathways and transformations of energy and matter, this means being able to explain the transformations of energy from the plucking of a note on a guitar to the moment a singer registers the note played.

Each discipline’s resulting framework is different, of course. For example, the book said some fields tend to feature standard introductory courses while others don’t. (Summaries of the six panels' defined outcomes are available here.) Yet the authors said the project shows it is possible to reach a general consensus on how to measure learning.

“It may be difficult to list everything students should know and be able to do,” the book said, “but when faculty are asked to focus on essential elements they are quite ready, willing and able to define priorities for student learning in their disciplines.”

One of the project’s goals is for the white papers to be used for the creation of tests, or assessments, that colleges can use in a standardized way. However, those possible assessments must be voluntary, the book said, and based on multiple measures rather than a simple box-checking, multiple choice test.

In economics, for example, the book said an assessment could include an open-ended data analysis simulation. And a history assessment could feature a digital archive of documents the test taker was asked to sift through and interpret.

Existing, discipline-specific assessments are not high-quality, said Arum. The project’s leaders have been in touch with assessment firms and possible funders about creating the new tests. Arum said the goal is for the assessments to be publicly available tools in three to five years.

“Why not attempt to provide faculty with other tools?” he said.

Mixed Reviews

The book said the Measuring College Learning project seeks to complement existing efforts to identify and measure student learning. Those include the Association of American Colleges and Universities’ VALUE rubrics and LEAP learning outcomes, as well as the Lumina Foundation’s Degree Qualifications Profile (DQP) and Tuning projects.

However, Measuring College Learning (MCL) advances that knowledge, according to the book. For example, Arum said while Tuning is aimed at a more individualized form of assessment design, this project aims to create tools at a national level -- to measure what Tuning articulates.

The book includes essays by top experts who have worked on learning outcomes. While largely positive, the essays also identify challenges. Peter Ewell, the president of the National Center for Higher Education Management Systems, for example, said the project bucks the trend of moving away from test-based assessment in higher education.

“Admittedly, the current lack of use may be due to a dearth of suitable instruments -- a condition MCL hopes to alleviate,” he wrote. “But the fact remains that its stance on assessment technology puts the project out of the mainstream of current assessment practice at the program level.”

Carol Geary Schneider, AAC&U’s president, praised the project in her essay. But she wrote that her group’s VALUE rubrics also could be adapted for disciplines.

In a written statement, Schneider said the real challenge confronting higher education is not just documenting student learning, it’s the need to “significantly raise the level of students’ knowledge, intellectual skills and meaningful accomplishment.” And that takes resources and money, she said, which is a problem in a time of “scandalous” disinvestment by many states in public higher education.

However, Schneider praised the project for putting faculty at the helm.

In addition to the strong faculty role, DeBoer said the project’s panels appeared to be on the right track with their frameworks. “These broad outlines are largely in keeping with what I’ve been calling for myself.”

Shireman was less impressed. He said the book is correct to emphasize “what students can do” rather than requiring them to know “lists of facts.” But that evidence already exists in the everyday assignments students do in college, he said, and does not require standardized assessments.

Furthermore, Shireman said the project's identified concepts and competencies are not defined specifically and fail to provide the “roadmap” the book promised.

“They say little to nothing about learning or student performance,” he said via email. “They do not provide any guidance as to the level of understanding that would connote mastery.”

Part of the problem, according to Shireman, is that it’s futile to attempt to summarize learning with “supposedly pithy” statements.

“This book should put a nail in that coffin,” he said, “even though that’s apparently not what the authors intended.”

For his part, Arum said he’s hopeful the majority of faculty members will welcome the project’s first draft of learning outcomes. That’s because the goal is to give them responsibility and ownership to drive the work “in a way that’s helpful to them.”

Even so, professors might have no other choice, the book argues, because policymakers and the general public will continue to pressure colleges to demonstrate value, including through some form of standardized assessment of student learning.

As Arum said, faculty “can’t just be reactionary and resist the use of measurements. That’s a non-sensible position.”
