EASSH position on Evaluation in H2020 Societal Challenges

In this paper we set out observations based on the experiences of EASSH evaluators and reviewers in the first rounds of calls in Horizon 2020, and ask whether the evaluation process in the Societal Challenges should form part of the upcoming mid-term review of Horizon 2020.

EASSH members believe that the evaluation of proposals is more complex in H2020’s third pillar, the Societal Challenges, than in the other two pillars. It requires assessment of both research excellence and the potential to lead to innovation. Furthermore, it is based on themes and topics that require contributions from a wide range of disciplinary perspectives. The Commission faces a problem of great complexity: designing large-scale programmes that bring multidisciplinary approaches to bear on the big societal questions, while also giving applicants a reasonably clear understanding of what is expected of successful applications. This problem is often passed on to the evaluation process and to the evaluators, placing extraordinary stress and responsibility on them. It is therefore essential to establish what the best evaluation process is and who can best evaluate these proposals.

In H2020, the Commission is to be applauded for attempting a radical change to an evaluation approach that had been criticised in the past for (a) ‘selection’ bias, because evaluators were drawn only from a narrow pool of leading academic researchers, and (b) evaluators’ insufficient interdisciplinary skills relative to the multidisciplinary profile of the research. H2020, which combines academic research with user/stakeholder engagement, must maximise real-world influence while taking full account of the multidisciplinary scientific challenge.

EASSH questions whether the current evaluation process is fit for purpose. In particular, we raise two questions and propose that three aspects of the evaluation process be reviewed. We question:

  • whether the process to create a pool of experts with the correct blend and depth of expertise needs to be reconsidered, and 
  • whether the conditions have been created to identify the best multidisciplinary projects to deliver the overall aims of Horizon 2020.

 

The Commission has changed the mechanism for selecting experts for the collaborative programmes in H2020. In FP7, project officers selected experts one by one on the basis of their knowledge of the field. In H2020, the Commission opened the experts’ database to a process of self-nomination: self-declared experts populate the database with their CVs and self-identified areas of expertise (using keywords), and from this database experts should be identifiable for any given proposal. EASSH questions whether, by leaving the construction of the evaluator pool to self-nomination, the Commission can draw on expertise of sufficient breadth and depth to undertake the evaluations as required.

In preliminary work conducted by EASSH, we estimate that a little over half of the experts listed on the database for Societal Challenge 1 (SC1) and Societal Challenge 6 (SC6) are employed by universities or research institutes. In SC1, more than a third of the evaluators come from industry and governmental organisations; in SC6, the proportion of experts from the private sector and government exceeds 40%. The latest report on H2020 participation suggests that in SC6 more than 80% of submitted projects are led by researchers from universities and research organisations.[1] There appears to be a mismatch between the proposals received, which are mostly written for peers with a research background, and the pool of experts evaluating them, whose expertise spans research, policy and industry. Combined with the sometimes opaque requirements of the call texts and evaluators’ diverse interpretations of the evaluation criteria, this may explain why around 50% of applications fail to meet the minimum quality thresholds.[2]

This brings us to the second issue: interdisciplinarity. A truly interdisciplinary approach to solving our challenges is still difficult to achieve, particularly when working across academic disciplines and with teams from different sectors. Given that experts tend to have very specific areas of expertise and competence, evaluating broad multidisciplinary projects quickly takes them outside their core expertise. EASSH is concerned that the evaluation of interdisciplinary projects may work against robust participation by the social sciences and humanities (SSH) in successful proposals. This is troubling: SSH researchers are significant contributors to innovation and provide insights into the contexts in which innovation takes place, so it is a matter of concern that such research does not play a more significant role in successful proposals.

Preliminary work by EASSH has looked closely at the expertise of the evaluators in SC1, which should naturally call for collaboration between SSH and medical/clinical fields. Our initial estimates suggest that less than 6% of expert evaluators have a strong background in SSH. We will continue this analysis for the other Societal Challenges, but we are concerned that such low levels of SSH expertise in the evaluation process will undermine the chances of successfully embedding and integrating SSH.

We suggest that, during the upcoming mid-term review, the Commission examine and report specifically on the following points:

  1. Review how best to enlarge the pool of evaluators to draw in more current and previous grant-holders, so that evaluators have the disciplinary expertise and broader experience needed to identify research excellence and innovation potential, as well as direct experience of successfully implementing research proposals.
  2. Consider options to give experts a much better understanding of the key aspects of the evaluation framework before they take up contracts as evaluators, for example careful induction into the complex matrix of criteria for evaluating and selecting projects, alongside the written guidance provided by the Commission.
  3. Strengthen the induction and ongoing training of experts. With few exceptions, multidisciplinary and multisector evaluations are not standard practice in research evaluation across Member States. It is also important to train DG Research and Innovation and/or REA personnel to moderate mixed panels of experts and to ensure consistency of assessment across all the challenges.