Thursday, October 22, 2009

On measuring the "impact" of research

There have been various debates, such as here, and petitions, such as here, concerning the role of "impact" in the forthcoming Research Excellence Framework (REF) in the UK. British departments (or "units of assessment") are assessed every few years to measure their research quality. The better a department's score, the greater the funding it earns via the funding councils. Thus, research excellence pays, and highly respected work means ££ in UK higher education circles.

In the past, these evaluations have been somewhat different. For example, the last evaluation was a "Research Assessment Exercise" (RAE) that measured research quality from January 1, 2001 through December 31, 2007, with results announced in 2008. Departments were measured by the quality of their publications, research environment, and "esteem" --- and given a grade point average out of a possible perfect 4.0. Before RAE 2008, departments were instead ranked in categories of "5" or lower.

The current REF plans look likely to assess departments from January 1, 2008 through December 31, 2012, with the results announced in 2013. Publications and environment will remain as categories, but "esteem" will be replaced by "impact" --- and impact will carry a greater weight than esteem did. The current proposal is publications 60%, impact 25%, environment 15%.

These plans may look far from unreasonable. After all, why shouldn't impact have some importance? There are several important concerns worth raising:

1. We don't know what "impact" is. What counts as "impact"? Is a popular blog impact? This has not yet been defined.

2. We don't know how "impact" will be measured. It is proposed that each unit will offer a "narrative" in the form of an "impact case study" for roughly every five department members: how do we know one narrative is better than another? This is left very unclear.

3. "Impact" in the short term is bad for UK research in the long term. The impact period is 2008-12. One problem with this is that departments will do whatever they can to demonstrate impact within this period, because only impact within this period will benefit their scores. This may be counterproductive to producing long-term impact.

4. There are doubts that this is genuinely about justifying higher education funding to the public. For one thing, we may doubt whether a department's "impact case study" narrative submitted in REF 2013 will sway the public. One reason for this doubt: information concerning RAE 2008 submissions is already readily available, yet most members of the public I speak with don't know of the relevant sites.

For another thing, the Times Higher reports today this story:

"[. . .] Academics have reacted angrily to an internal research council document that says that the Government - not the academic community or the public - is the "primary audience" for its campaign to improve the economic and social impact of the research it funds.

The document, leaked to Times Higher Education, outlines the contents of a presentation made to senior staff at the Engineering and Physical Sciences Research Council. It says: "We need to show (the Government) the importance and relevance of the research we fund to current and future global challenges in order to secure future funding."

Academics are named a "secondary audience" with the public given third priority.

The document says that the campaign "is deliberately blatant in its attempt to address the immediate demand" from the Government to demonstrate impact.

Researchers said the leaked document showed that the research councils were playing politics instead of protecting the interests of scholarship. [. . .]"

5. The REF is already under way! Despite these major concerns, there is a pilot study on impact now in progress . . . that may not be concluded until next year. Thus, it may not be until REF 2013 is halfway finished that we will have any knowledge of how up to 25% of our final scores will be assessed . . . including the last few years! Today, what we do or do not do may or may not count as "impact": we will be assessed all the same, according to criteria to be announced later and applied retrospectively to work that has already happened.

What is the solution? Well, one way forward would be to postpone the measurement of "impact" until after REF 2013. It does not make much sense to use an indicator that is neither defined nor tested . . . and won't be until literally a year or two before the end of the current assessment period. It is important to get these decisions right, and that will not happen by rushing this proposal through.

NOTE: There are echoes here of the government's previous push for bibliometric measures to assess all departments, only to withdraw them. I would not be surprised if "impact" met the same fate.
