
Researchers are sabotaging their own efforts to convince the public that their work delivers value for the billions in public money invested in it, a trial assessment in Australia has found.

Muddy thinking and poor communication about how research benefits the nation economically, socially and environmentally damage perceptions of its relevance, according to the panels that reviewed 162 research case studies from 12 universities.

However, Jeanette Hacket, chairwoman of the Australian Technology Network, the coalition of universities that conducted the trial with the Group of Eight, said it established that research impact could be measured. That is a first step toward creating a companion piece to Excellence in Research for Australia (ERA), the government's massive audit of the quality of scientific studies.

The federal government is also due to report on its own research impact feasibility study.

Impact is already part of audits conducted by other governments, including through Britain's Research Excellence Framework. Big money is at stake: if the government decides to formally assess research impact, the results would very likely influence funding, as ERA performance already does.

"Taxpayers put a lot of investment into university research and the community deserves to know the impact that research is having on their lives and in a form they understand," Hacket said.

She conceded that demonstrating benefits had been a tougher ask of the humanities than of the sciences, but said, "We don't see it as a barrier to developing good policy.

"Because it's more difficult doesn't mean we should be put off trying to address it. It's not just science and engineering which is going to make a difference in the community."

Hacket said the impact trial had imposed new demands on academics. "We haven't asked people to articulate and communicate in a common language the impact of research."

Seven expert panels, comprising 75 volunteers, examined benefits evident between January 2007 and May of this year arising from research up to 20 years old. Assessors from business or industry made up 70 percent of the panel membership; the rest came from universities.

They reported many case studies "did not adequately describe the link between the underpinning research and impact."

"A number of case studies did not reflect what had been asked of them -- that they be impact-based. They did not communicate effectively to a general audience, missing the central message of what was done, why it was done, what difference it made and how the research made it happen."

Nevertheless, 87 percent of the submissions were judged to have considerable or better impact. Submissions were classified according to four pre-existing Australian Bureau of Statistics socioeconomic objective codes: defense, economic development, society and the environment.

These were considered more relevant than the discipline codes used in ERA and grant research programs. The panels warned that, given it took up to an hour to assess each case study, a national version of the exercise could require a substantial investment of time, people and money. The report did not address who might run or fund such an audit.

Panels assessed the reach and significance of the research and whether each submission satisfactorily linked the research to its impact, demonstrated the nature and extent of that impact, and provided enough supporting material for the impact claims to be verified.

Well-known success stories included Ian Frazer's work on the cervical cancer vaccine Gardasil. Less well known but also impressive, the panels said, were mineral flotation work at the University of South Australia's Ian Wark Research Institute and work on online legal information systems carried out by the University of Technology, Sydney, jointly with the University of New South Wales.
