Research Evaluation: an Exploratory Study of its Intellectual Structure

  • Isabel Pinho
  • Cláudia Pinho
  • Maria João Rosa

Abstract

Carol Weiss defines evaluation as “the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy” (Weiss, 1998, p. 4). In the context of science, evaluation arises naturally as a mechanism for the certification and control of research quality. As such, research evaluation has also been the object of intense study, driven by the need to improve the quality of research performance and the societal and innovation impact of science (Donovan, 2011; Lee & Bozeman, 2005; Mingers & Leydesdorff, 2015). Research evaluation is an umbrella concept that crosses diverse micro, meso and macro scales, serves many purposes and is performed with different approaches (Aksnes et al., 2017). This complexity can lead to a fragmented use of the concept or even to its misuse, a problem that calls for a clear mapping of the concept’s territory (Leydesdorff & Persson, 2010).

In this paper we start from the general research question “What is Research Evaluation?” and try to answer it by unfolding it into three more specific questions:

  1. What are the seminal, core, relevant and review documents dealing with the research evaluation topic?
  2. What are the structural properties of the scientific publications on this topic?
  3. What is the latent intellectual structure that drives Research Evaluation?

The aim of this paper is then to map the intellectual and cognitive structure of the Research Evaluation topic. To this end, we searched the Web of Science (WoS) database for the period 2006–2016, retrieving a total of 1483 publications. The choice of this time span is appropriate, since it covers not only the scientific production of these 10 years but also brings in the publications they cite, thus providing a historical perspective as well as the seminal papers on this topic. Through direct citation analysis of these works we were able to trace their origins and identify the seminal papers, and through the co-occurrence of words extracted from titles and abstracts the main lines of research were identified.
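As a purely illustrative sketch of the word co-occurrence step (not the pipeline actually used in the study), the Python fragment below assumes a hypothetical WoS export named records.csv with “Title” and “Abstract” columns and builds a simple term co-occurrence network with networkx; the field names, stop-word list and frequency threshold are assumptions for illustration only.

```python
# Illustrative sketch: term co-occurrence network from a hypothetical WoS
# export (records.csv with "Title" and "Abstract" columns). Field names,
# stop words and the threshold are assumptions, not the authors' setup.
import csv
import re
from collections import Counter
from itertools import combinations

import networkx as nx

STOP = {"the", "and", "for", "with", "that", "this", "are", "was", "from"}

def terms(text):
    """Lowercase alphabetic tokens of length >= 3, minus trivial stop words."""
    return {t for t in re.findall(r"[a-z]{3,}", text.lower()) if t not in STOP}

pair_counts = Counter()
with open("records.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        vocab = terms(row.get("Title", "") + " " + row.get("Abstract", ""))
        pair_counts.update(combinations(sorted(vocab), 2))

# Keep only term pairs that co-occur in at least 5 records (arbitrary cut-off).
G = nx.Graph()
for (u, v), weight in pair_counts.items():
    if weight >= 5:
        G.add_edge(u, v, weight=weight)

print(G.number_of_nodes(), "terms;", G.number_of_edges(), "co-occurrence links")
```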

The mixed methodology used highlights how the potential of both qualitative (content analysis) and quantitative (citation analysis) methods can be combined to integrate knowledge through a literature review. Using citation networks, we captured the latent building blocks and drivers of this theme. Data visualization and visual analytics helped us explore the literature background; by observing the citation networks and their clusters we identified the key papers (seminal and most cited papers) that support the different areas of Research Evaluation (van Eck & Waltman, 2017).
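The citation map itself was produced with the kind of visualization software described by van Eck & Waltman (2017); as a rough, hypothetical illustration of the underlying idea, the sketch below builds a direct citation network from an assumed edge list (citations.csv with citing/cited record pairs) and extracts communities with networkx’s modularity-based algorithm, which is not the tool used in the study.

```python
# Illustrative sketch: community detection on a direct citation network.
# "citations.csv" (citing_id, cited_id per row) and the modularity-based
# algorithm are assumptions for illustration; the study relies on citation
# maps produced with dedicated visualization software.
import csv

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()  # an undirected view is enough for modularity clustering
with open("citations.csv", newline="", encoding="utf-8") as fh:
    for citing, cited in csv.reader(fh):
        G.add_edge(citing, cited)

clusters = greedy_modularity_communities(G)
for i, cluster in enumerate(clusters[:6], start=1):
    # The most connected papers in each cluster are candidate key papers.
    top = sorted(cluster, key=G.degree, reverse=True)[:5]
    print(f"Cluster {i}: {len(cluster)} papers; most connected: {top}")
```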

The results of this study constitute a starting point for an in-depth literature review on Research Evaluation. Its main result is a citation map with six clusters, each giving a quick overview of the interlinked territories that provide an understanding of the different knowledge areas of Research Evaluation. This clear picture of the topic is useful for academic scholars, novice researchers and research managers interested in research evaluation policy or in the implementation of research evaluation and its management implications. The evolution of Research Evaluation can be divided into three stages: (1) traditional peer review; (2) the current focus on bibliometric analysis; and (3) an emerging focus on the scientific merit of research, combining qualitative and quantitative approaches at the project and network levels.

Next, we selected the 50 most relevant papers. These papers constitute the core sample of the literature review, which in turn was subjected to a content analysis supported by the WebQDA software (Fornari et al., 2019; Neri de Souza et al., 2016). To answer the specific questions we built an analysis framework organized into four dimensions: 1) Purpose of evaluation; 2) Context levels of analysis; 3) Metrics; and 4) Knowledge processes. An important finding from this exploratory study is the interdisciplinary nature of the Research Evaluation theme with respect to its background, with broad scientific interest and practical application. Another finding is that Research Evaluation can be related to performance as a measurable result; a performance-based research approach can bring positive consequences (accountability and transparency), but quantitative and qualitative participative approaches need to be combined in order to avoid perverse impacts.
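As a minimal, hypothetical illustration of how coded segments can be tallied against the four-dimension framework outside a dedicated tool such as WebQDA, the sketch below uses invented placeholder codings; only the dimension names come from the paper.

```python
# Illustrative sketch: tally coded segments against the four analysis
# dimensions. The dimension names come from the paper; the example codings
# are invented placeholders, not data from the study.
from collections import Counter

DIMENSIONS = (
    "Purpose of evaluation",
    "Context levels of analysis",
    "Metrics",
    "Knowledge processes",
)

# (paper_id, dimension) pairs as they might be exported from a coding tool.
codings = [
    ("P01", "Metrics"),
    ("P01", "Purpose of evaluation"),
    ("P02", "Knowledge processes"),
]

tally = Counter(dim for _, dim in codings if dim in DIMENSIONS)
for dim in DIMENSIONS:
    print(f"{dim}: {tally[dim]} coded segment(s)")
```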

This study also provides findings related to the implementation of Research Evaluation. Despite the controversies that research evaluation entails, there is a growing need for change, which calls for the wise use of models, tools and indicators in order to avoid perverse impacts.

This study is moreover intended to offer a useful conceptual framework to the Research Evaluation scientific community, revealing in one big picture the main research lines and landmark papers. The methodology used in this study can be replicated to update this literature review in the future.

Published
2019-10-04