The Research Assessment Exercise (RAE) was an exercise undertaken approximately every five years on behalf of the four UK higher education funding councils (HEFCE, SHEFC, HEFCW, DELNI) to evaluate the quality of research undertaken by British higher education institutions. Submissions from each subject area (or unit of assessment) were given a rating by a subject-specialist peer review panel. The ratings were used to inform the allocation of quality-weighted research funding (QR) that each higher education institution received from its national funding council. Previous RAEs took place in 1986, 1989, 1992, 1996 and 2001. The most recent results were published in December 2008. The RAE was replaced by the Research Excellence Framework (REF) in 2014.
Various media have produced league tables of institutions and disciplines based on the 2008 RAE results. Different methodologies lead to similar but non-identical rankings.
The first exercise assessing research in UK higher education took place in 1986 under the Margaret Thatcher government. It was conducted by the University Grants Committee under the chairmanship of the Cambridge mathematician Peter Swinnerton-Dyer. The purpose of the exercise was to determine the allocation of funding to UK universities at a time of tight budgetary restrictions. The committee received submissions of research statements from 37 subject areas ("cost centres") within universities, along with five selected research outputs. It issued quality rankings labelled "outstanding", "above average", "average" or "below average". The research funding allocated to universities (called "quality-related" funding) depended on the quality ratings of the subject areas. According to Swinnerton-Dyer, the objective was to bring a measure of transparency to the allocation of funding at a time of declining budgets.
A subsequent research assessment was conducted in 1989 under the name "research selectivity exercise" by the Universities Funding Council. Responding to the complaint of the universities that they were not allowed to submit their "full strength", Swinnerton-Dyer allowed the submission of two research outputs for every member of staff. The evaluation was also expanded to 152 subject areas ("units of assessment"). According to Roger Brown and Helen Carasso, only about 40 per cent of the research-related funding was allocated based on the assessment of the submissions. The rest was allocated based on staff and student numbers and research grant income.
In 1992, the distinction between universities and polytechnics was abolished. The Universities Funding Council was replaced by regional funding councils such as HEFCE. Behram Bekhradnia, the director of policy at HEFCE, concluded that the research assessment needed to become "much more robust and rigorous." This led to the institution of the Research Assessment Exercise in 1992. The 1992 results were nevertheless challenged in court by the Institute of Dental Surgery, and the judge warned that the system had to become more transparent. The assessment panels in subsequent exercises had to be much more explicit about their evaluation criteria and working methods. In 1996, all volume-based evaluation was removed, addressing the criticism that volume rather than quality was being rewarded.
The 1992 exercise also stipulated that the staff submitted for assessment had to be in post by a specific date (the "census date"), countering the criticism that staff who had moved on were still being counted in the assessment. This led to the phenomenon of "poaching" of highly qualified staff by other universities ahead of the census date. In the 2001 exercise, the credit for staff who moved institutions in the middle of the cycle could be shared between the two institutions; in the 2008 exercise, this was abolished.
The 2008 assessment also brought in a major change. Instead of a single grade for an entire subject area ("unit of assessment"), a grade was assigned to each research output. This was done to counter the criticism that large departments were able to hide a "very long tail" of lesser work and still receive high ratings and, conversely, that excellent staff in low-graded departments were unable to receive adequate funding. The single grades for units of assessment were thus replaced by "quality profiles", which indicated the proportion of each department's research falling into each quality category.
The 2008 RAE used a four-point quality scale (plus an unclassified category) and returned a profile, rather than a single aggregate quality score, for each unit. The quality levels, based on assessment of research outputs, research environment and indicators of esteem, were defined as follows:
4*: Quality that is world-leading in terms of originality, significance and rigour.
3*: Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence.
2*: Quality that is recognised internationally in terms of originality, significance and rigour.
1*: Quality that is recognised nationally in terms of originality, significance and rigour.
Unclassified: Quality that falls below the standard of nationally recognised work, or work which does not meet the published definition of research for the purposes of this assessment.
Each unit of assessment was given a quality profile – a five-column histogram – indicating the proportion of the research that meets each of four quality levels or is unclassified.
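To make the quality-profile mechanism concrete, the sketch below computes a single summary score from a profile by weighting each quality level. This is an illustration only: the numeric weights and the example proportions are hypothetical, not the official funding weights used by the funding councils.

```python
# Hypothetical weights for each RAE 2008 quality level; the real
# funding-council weightings differed and changed over time.
WEIGHTS = {"4*": 4, "3*": 3, "2*": 2, "1*": 1, "unclassified": 0}

def profile_score(profile):
    """Weighted average of a quality profile.

    profile maps each quality level to the proportion of the unit's
    research judged to be at that level (proportions sum to 1).
    """
    assert abs(sum(profile.values()) - 1.0) < 1e-9, "proportions must sum to 1"
    return sum(WEIGHTS[level] * share for level, share in profile.items())

# Invented example: a unit with 25% world-leading and 40%
# internationally excellent work.
example = {"4*": 0.25, "3*": 0.40, "2*": 0.25, "1*": 0.10, "unclassified": 0.0}
print(profile_score(example))  # roughly 2.8
```

A single weighted score like this is how many post-2008 league tables collapsed profiles into a "grade point average", though the choice of weights materially affects the resulting rankings.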
In 1992, 1996 and 2001, the following rating scale was used. The 2001 and 1996 rating is listed first, with the corresponding 1992 rating in parentheses; where the 1992 definition differed, it is given after the description.

5* (1992: 5*): Research quality that equates to attainable levels of international excellence in more than half of the research activity submitted and attainable levels of national excellence in the remainder.
5 (1992: 5): Research quality that equates to attainable levels of international excellence in up to half of the research activity submitted and to attainable levels of national excellence in virtually all of the remainder.
4 (1992: 4): Research quality that equates to attainable levels of national excellence in virtually all of the research activity submitted, showing some evidence of international excellence.
3a (1992: 3): Research quality that equates to attainable levels of national excellence in over two-thirds of the research activity submitted, possibly showing evidence of international excellence. (1992 definition: research quality that equates to attainable levels of national excellence in a majority of the sub-areas of activity, or to international level in some.)
3b (1992: 3): Research quality that equates to attainable levels of national excellence in more than half of the research activity submitted. (1992 definition: as for 3a.)
2 (1992: 2): Research quality that equates to attainable levels of national excellence in up to half of the research activity submitted.
1 (1992: 1): Research quality that equates to attainable levels of national excellence in none, or virtually none, of the research activity submitted.
These ratings were applied to "units of assessment", such as French or Chemistry, which often broadly equate to university departments. Various unofficial league tables of university research capability have been created by aggregating the results from units of assessment. Compiling league tables of universities based on the RAE is problematic, as volume and quality are both significant factors.
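The volume-versus-quality problem can be illustrated with a minimal aggregation sketch. The scores and staff numbers below are invented, and volume-weighting by submitted staff is just one of several methodologies that league-table compilers have used.

```python
# Hypothetical sketch: aggregating per-unit quality scores into one
# institutional score, weighted by volume (staff submitted).
# All figures are invented for illustration.

def institution_score(units):
    """Volume-weighted mean over (quality_score, staff_submitted) pairs."""
    total_staff = sum(staff for _, staff in units)
    return sum(score * staff for score, staff in units) / total_staff

chemistry = (3.1, 40)  # large, highly rated unit
french = (2.4, 10)     # smaller, lower-rated unit

print(institution_score([chemistry, french]))  # roughly 2.96
```

Because the large Chemistry unit dominates the weighted mean, the same institution would rank quite differently under an unweighted average of unit scores, which is one reason different compilers' tables diverge.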
The assessment process for the RAE focused on the quality of research outputs (which usually means papers published in academic journals and conference proceedings), research environment, and indicators of esteem. Each subject panel determined precise rules within general guidance. For RAE 2008, institutions were invited to submit four research outputs, published between January 2001 and December 2007, for each full-time member of staff selected for inclusion.
In response to criticism of earlier assessments, and to developments in employment law, the 2008 RAE did more to take account of part-time staff and of those who had only recently reached a sufficient level of seniority to be included in the process.
The RAE has not been without its critics. In its different iterations, it divided opinion among researchers, managers and policy makers. Among the criticisms is the fact that it explicitly ignored the publications of most full-time researchers in the UK, on the grounds that they were employed on fixed-term contracts. According to the RAE 2008 guidelines, most research assistants were "not eligible to be listed as research active staff". Publications by researchers on fixed-term contracts were excluded from the exercise unless those publications could be credited to a member of staff who was eligible for the RAE. This applied even if the eligible member of staff had made only a minor contribution to the article. The opposite phenomenon also occurred: non-research-active staff on permanent contracts, such as lecturers primarily responsible for teaching, found themselves placed under greater contractual pressure by their employing universities to produce research output. Another criticism is that it is doubtful whether panels of experts have the necessary expertise to evaluate the quality of research outputs, since experts perform much less well as soon as they are outside their particular area of specialisation.
Critics have further charged that the RAE had a disastrous impact on the UK higher education system: that it led to the closure of departments with strong research profiles and healthy student recruitment, and that it was responsible for job losses, discriminatory practices, widespread demoralisation of staff, the narrowing of research opportunities through the over-concentration of funding and the undermining of the relationship between teaching and research.
The official Review of Research Assessment, the 2003 "Roberts Report" commissioned by the UK funding bodies, recommended changes to research assessment, partly in response to such criticisms.
The House of Commons Science and Technology Select Committee considered the Roberts report and took a more optimistic view, asserting that "the RAE had had positive effects: it had stimulated universities into managing their research and had ensured that funds were targeted at areas of research excellence", and concluding that "there had been a marked improvement in universities' research performance". Nevertheless, it argued that "the RAE in its present form had had its day" and proposed a reformed RAE, largely based on Roberts' recommendations.
It was announced in the 2006 Budget that, after the 2008 exercise, a system of metrics would be developed to inform future allocations of QR funding. Following initial consultation with the higher education sector, it was thought that the Higher Education Funding Councils would introduce a metrics-based system of assessment for subjects in science, technology, engineering and medicine. A process of peer review was likely to remain for mathematics, statistics, arts, humanities and social studies subjects.