Engaging in “Impact and Evaluation”

Plenary session, Monday, 23rd September 2013, 16:00-17:30, I-201 Conference Hall, 2nd floor

One criticism raised within the fields of the Social Sciences and Humanities (SSH) concerns the transfer of evaluation standards between fields without due consideration of their specificities when it comes to “measuring” research productivity. Such complaints have valid points: the main publication output may be journal articles or monographs, team sizes can be smaller or bigger, and scholars may work and publish alone or with others. Another concern is the social impact of SSH research and how to evaluate SSH fields in their role as societal stakeholders.

Many SSH communities have started to develop or apply their own methods, going beyond traditional bibliometrics, for evaluating the various kinds of output and results of SSH research. However, there are no standard reference databases for publications in the SSH domain that account for the vast diversity of fields and especially for their multilingualism, and social indicators and other assessment tools are either in their infancy or very unevenly distributed. Finally, SSH fields and disciplines behave rather conservatively when it comes to adopting Open Access. The session will discuss the pros and cons of different ways of evaluating SSH and will suggest concrete measures to be taken and developed from an SSH perspective.

Keynote: Johannes Angermüller, University of Warwick, United Kingdom
Comment: Thed van Leeuwen, Leiden University, Netherlands
Comment: Rūta Petrauskaitė, Research Council of Lithuania
Moderator: Paul Boyle, Science Europe, United Kingdom

Johannes Angermüller, University of Warwick, UK
Doing Research and Evaluation in the Social Sciences and Humanities
Throughout their careers, academic researchers are subject to formal and informal, institutional and non-institutional practices through which they are classified and categorized and come to occupy their positions (e.g. as a “specialist of W”, a “disciple of X”, a “member of the scientific council of the foundation Y”, an “enemy of Z”, a “professor”, etc.). If some end up in more, better and higher positions than others, their success rests on the practical skill of articulating creative responses, with whatever resources they can use, to contradictory constraints and new situations. It is this capacity that constitutes excellent researchers and research from the researchers’ perspective: they need to carve out a place in a world where there is not one common standard of excellence but many different, even incommensurable, ways of doing good research. Against this background, the professional evaluation of researchers and their products can turn out to be an important part of the practice of research if it relates to the many non-professional practices of assessing the quality of research within the research community. My contribution will discuss a few examples from my research on the ways in which academic excellence is practically achieved in different fields and systems. In conclusion, I will point out the considerable practical expertise that actors inside as well as outside the scientific community need to mobilise in order to account for the heterogeneity of academic practices, especially in the Social Sciences and Humanities, with their long traditions of investigating the complexity of meaning, culture and knowledge.

Thed van Leeuwen, Leiden University, NL
Multiple Perspectives on Evaluation Practices in the SSH (and Law): Approaches from the Netherlands
In the Netherlands, we have a longstanding tradition of disciplinary research assessments. In the natural, life, and medical sciences, these were often accompanied by quantitative analyses. In the last ten years, the Royal Netherlands Academy of Arts and Sciences (KNAW) has started a number of initiatives to develop more awareness among SSH scholars of research assessment and of the ways in which issues around research assessment could be tackled. In 2011 and 2012, two Advisory Councils of the KNAW delivered guidelines on how research assessment in the SSH domains could be designed.

Important elements in these guidelines, which take peer review as the overarching principle, were a strong focus on the various missions from which the activities of SSH scholars start, as well as a stronger emphasis on scientific productivity and on societal relevance and quality. Along these two dimensions, the criteria for assessment would be output, usage, and recognition.

Within our institute, CWTS, we have recently established a new research programme covering a wide range of topics related to bibliometrics and research assessment. Several working groups are focussing on the development of instruments to support research assessment in the social sciences, humanities, and law, on the development of indicators supporting analyses of societal quality, and on the way quantitative measures applied in assessment and evaluation situations influence the primary knowledge production processes.

In my comment I will show how we succeeded in linking the assessment criteria and indicators defined by the two Advisory Councils to the registered output of social scientists and humanities scholars, across a wide variety of scientific outputs and forms of usage and recognition. It is important to stress that this development is only in its early stages and that its outcomes need to be discussed with the researchers and research administrators involved.

Rūta Petrauskaitė, Research Council of Lithuania
Insights from the SSH Communities
SSH impact and evaluation are themes that occur frequently in researchers’ responses to our request for consultation. One focal point is certainly the discussion of obstacles arising from research assessment strategies naively applied to SSH. The dominant evaluation models remain oriented towards monodisciplinary research outputs, and there is an urgent need for new interdisciplinary models of evaluation. The problem needs to be tackled from a wider perspective, recognising that European universities still function in highly restrictive disciplinary settings and therefore constantly re-instantiate disciplinary boundaries. Furthermore, publication models that persist in the tradition of the printed press and of monolingual English academic discourse need to be replaced by new approaches advocating an open, interdisciplinary and multilingual publication culture. Another important issue is the lack of guidelines on the evaluation of such research outcomes. With increasing standardization in favour of measurements and indicators stemming from scientometric analysis of the sciences, the SSH should involve themselves in developing not only methods but also guidelines on how to evaluate and valuate their research. With regard to the practicalities of embedding SSH in the “Horizon 2020” framework programme, this implies that SSH researchers should be involved in the early steps of defining work programmes, evaluating project proposals and reviewing the degree of interdisciplinarity. Individual experts and evaluators with interdisciplinary careers, as well as mixed teams of monodisciplinary experts, are of paramount importance. In general, it is suggested that research on research be strengthened, especially on the specificity and plurality of evaluation models.