For what it’s worth

Eric Duval raises some interesting issues about how work is cited, and asks us to consider the following set of questions (a rough sketch of how some of these might be computed from citation data follows the list):
• which of your papers has been cited most often?
• who has cited you most often?
• which papers cited a particular publication of yours?
• whether more or fewer people are citing you over the years?
• whose citing behavior is close to yours?
• which conference or journal contains most citations of your papers?
• which conference or journal do you cite most often?
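
Purely for illustration, here is a minimal Python sketch of how a few of these questions might be answered mechanically, assuming you had a flat list of citation records to hand. The records, field names and venues below are entirely made up; a real analysis would pull its data from a citation database or service rather than a hand-typed list.

```python
# Hypothetical citation records: (your_paper, citing_author, citing_venue, year).
# The data is invented purely to illustrate the kinds of questions above.
from collections import Counter

citations = [
    ("Paper A", "Smith", "Journal of X", 2004),
    ("Paper A", "Jones", "Conference Y", 2005),
    ("Paper B", "Smith", "Journal of X", 2005),
    ("Paper A", "Lee",   "Conference Y", 2006),
    ("Paper B", "Smith", "Conference Y", 2006),
]

# Which of your papers has been cited most often?
by_paper = Counter(paper for paper, _, _, _ in citations)
print("Most cited paper:", by_paper.most_common(1))

# Who has cited you most often?
by_author = Counter(author for _, author, _, _ in citations)
print("Most frequent citer:", by_author.most_common(1))

# Are more or fewer people citing you over the years?
by_year = Counter(year for _, _, _, year in citations)
print("Citations per year:", sorted(by_year.items()))

# Which conference or journal contains most citations of your papers?
by_venue = Counter(venue for _, _, venue, _ in citations)
print("Top citing venue:", by_venue.most_common(1))
```

The mechanics are trivial; the hard part, as the rest of this post suggests, is deciding what such counts actually tell you about the worth of the work.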

Stevan Harnad asks similar questions: “How much is my research read, used, cited, and built upon in further research and applications? How much of a difference does it make that I did it at all?” He goes on to argue for open and digitally accessible research outputs: “what is needed is that all university research output should be accessible and hence assessable online — and not only the references cited but the full text.”

Eric’s post got me thinking about the broader question of ‘what does it mean to be an academic nowadays?’, ‘What counts?’, or more controversially ‘Who’s good and who isn’t?’ Of course in the UK at the moment these kinds of questions are uppermost in the minds of those involved in research as the next Research Assessment Exercise (RAE) is nearly upon us. In a recent editorial for ALT-J I reflected on these and other issues and in particular looked at what the RAE offers us in terms of positioning our own research discipline and how it might help us to answer the question – what constitutes good e-learning research?

Extract from ALT-J: An overview of the RAE

The RAE is intended to provide a periodic review of the quality of research in Higher Education institutions. It is important not only because it essentially results in a ‘research league table’, but also because significant funding is attached to the process for those areas deemed to be excellent in research. A newspaper article last year (Guardian Unlimited 2006) highlighted the central role that the RAE has played in the direction and focus of UK research activities and the associated culture:

“Over two decades the RAE has become an obsession for British academics and the ratings - from one to five-star - have made or broken the reputations of university departments. On the basis of the RAE judgments made at roughly five-year intervals depend billions of pounds of funding from the Higher Education Funding Council for England and its equivalents in Scotland, Wales and Northern Ireland.”

The exercise is designed to be peer reviewed by a panel of experts drawn from the relevant research community. There are 67 units of assessment (UoA). Each academic submitted is judged on a number of criteria. The most important of these is the four publications (journal articles, books, etc.) published during the review period that they choose to submit as evidence of their academic standing. Gradings in the new system range from unclassified to four star (table one). In addition, each person submits an ‘indicator of esteem’, which includes information on their success in research funding, involvement as referees or journal editors, invited talks and keynotes, and other recognised evidence of academic worth and ‘international standing’.

• Four star: Quality that is world-leading in terms of originality, significance and rigour.
• Three star: Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence.
• Two star: Quality that is recognised internationally in terms of originality, significance and rigour.
• One star: Quality that is recognised nationally in terms of originality, significance and rigour.
• Unclassified: Quality that falls below the standard of nationally recognised work, or work which does not meet the published definition of research for the purposes of this assessment.

(Table one. Taken from ‘RAE 2008: Guidelines on submissions’, June 2005.)

    Criticisms of the exercise

Despite the fact that the exercise is peer reviewed, there have been significant concerns voiced over its validity (see, for example, a recent article in the Independent). Firstly, many argue that the quality across the panels varies significantly, with wide disputes and differences in what is deemed ‘high-quality research’. Others feel aggrieved that, in their opinion, money-rich disciplines fare much better; the new post-92 universities make similar arguments, and it is true that they have fared less well overall. And it is widely known that one of the purposes of the introduction of the exercise was to make research funding more selective, with money biased towards the university elite. Many also argue that the over-concentration on the RAE and its importance has further deepened the divide between research and teaching, with teaching now more than ever suffering as the poor sister. For individuals in research-intensive institutions, deciding where to concentrate their efforts is simple; research wins every time… if you want to get promoted, that is. The Guardian ran an interesting piece recently, discussing a publication by the British Academy defending the value of peer review.

    Lessons from the RAE

This is the third RAE exercise I will have been involved with. The first was in my original academic area, Chemistry. In a sense submission then was fairly straightforward for me – I published in the standard recognised journals for my area, and I was lucky enough to be collaborating with a good range of internationally recognised researchers. When I moved into the area of e-learning things became less clear: as a relatively young research area, should I be publishing in new e-learning-focused journals or in more mainstream, well-established education journals? What should the balance be between standard empirically based submissions and riskier, but perhaps more innovative, approaches?

I have been involved in helping with the RAE mock exercises at my current institution (the Open University) and my previous institution, Southampton University, both in terms of developing research group narratives and peer reviewing exercises. It has been interesting and educational, but it reinforces for me the extremely subjective nature of the process. Here is a list of some of my thoughts when reading and re-reading articles. I know from talking to other colleagues involved in the process that many echo similar concerns.

1. Below is a list of some potential ‘indicators of worth’ for a paper:

  • It covers an important and topical area
  • It is something which is likely to be cited a lot by others
• It is a key position paper or review which gives a definition of an area
  • It provides a critically reflective piece which provides new insight/ways of thinking
  • It is something which will have impact – on policy makers or practitioners
  • It provides the development of a new theory, framework or model
  • It includes good solid empirical studies which provide interesting results and add to the area
  • It has a good grounding in the literature and evidence of knowledge of key issues
  • There is evidence of novel, new thinking, new approaches
  • There is evidence that the findings are having impact beyond one institution
  • There is a clear articulation of the methodology and a critique of the approach adopted
  • The submission demonstrates evidence of linking to higher agendas of the day – policy directives or evidence of ideas being embedded in or aligned with funding council programmes
  • A retrospective piece showing how the work builds on or provides a foundation for other work that followed.
  • The paper provides clarity and insight into a well recognised problem.
2. On reading the papers, a raft of unresolved issues came to mind:

  • In such a fast moving area as e-learning, is there an issue with papers becoming dated; what papers ‘age’ well and what are their characteristics?
• Academics submit four artefacts and these are supposed to be reviewed separately, but how much importance will the panel actually place on the coherence of the set, in terms of the breadth of expertise demonstrated by the researcher across the four papers chosen? Will overlapping submissions be viewed unfavourably?
  • What issues are there around individual contributions and multiple authorships? Is collaboration good or bad? Do international co-authors add benefit?
  • How will the panels really view other outputs of research – for example a web site of resources or a technological artefact?
• How is the issue of comparability going to be addressed? One submission may represent five years of work, another a small-scale study.
• One could argue, if you take a very strict reading of the criteria in table one, that very rarely is work ‘seminal’, i.e. 4* – but what does this mean in terms of the message it sends about the perceived worth of UK research?
• How a paper is viewed inevitably depends on who is reading it. An expert in the field may be more critical (because they are very aware of the subtleties of the work and/or may have a different take on the area) or more interested (because it aligns with their own work), compared with a researcher from another area, who might rate a review paper highly because it provides an overview/summary of the area, or might be disinclined towards a paper because it adopts a radically different epistemological position from their own.
• The RAE is a mammoth exercise, which has absorbed enormous amounts of time and resources, for questionable benefit. RAE 2008 will be the last exercise carried out in this painstaking and complex way. The Government has announced that it will be replaced by a much lighter-weight, metrics-based system. However, it is not clear what this will involve, and no doubt a barrage of complaints will surround its introduction as well.

It seems it is impossible to get the balance right. Whilst I don’t think many in academia would argue against some form of peer assessment and recognition of worth (after all, surely this is at the core of how we work anyway, through the peer-review funding and publication mechanisms), somehow the way it has been done to date does not seem to have been thought through enough. I would advocate a more interactive, formative and constructive approach, with timely feedback to academics to help them maximise their research potential, whilst balancing this against ensuring high-quality teaching, with strong synergies between the two.

    One Response to “For what it’s worth”

    1. Sarah Stewart Says:

Hello Gráinne, I was very interested to read your post. As we do with a lot of things, we are following the UK in the education sector in New Zealand, with a similar research assessment to the RAE (the PBRF – performance-based research fund). As a ‘beginning’ researcher, I have just gone through the review and found it to be a very time-consuming process. It is interesting how I have changed my thinking about my outputs - everything done now is done with the PBRF in mind. However, there can be a real tension between what is good for the PBRF and what is good for my profession (I am a midwifery lecturer), e.g. I do some research involving NZ midwives and feel that I should publish in the national midwifery journal (I have an ethical responsibility to do this). However, our national journal has no real academic ‘kudos’ - so where do I publish: the national professional journal and ‘waste’ my publication, or the international academic journal that will not be accessed much by midwives? The PBRF brings up some interesting dilemmas. Best wishes, Sarah
