RE: How to evaluate science, technology and innovation in a development context?
As one of the Science-Metrix coauthors of the Technical Note listed in the background documentation for this discussion, I wanted to provide a bit more context about the general orientation that my coauthor, Christina Zdawczyk, and I gave to this framework for the deployment of bibliometric strategies as part of CGIAR QoS evaluations.
You may notice that the ubiquitous publication counts and citation impact indicators were afforded only a small portion of our attention in this Technical Note. One of our intentions with this note was to showcase how bibliometrics now offers indicators for a much broader range of dimensions, including cross-disciplinarity, gender equity, preprinting as an open science practice, and the prevalence of complex multi-national collaborations.
That is, there is (in our opinion, often untapped) potential in using bibliometrics as indicators of relevance and legitimacy. Simultaneously, some of the bibliometrics we have suggested can also be used as process or even input indicators, rather than in their traditional role as output indicators of effectiveness. For instance, bibliometrics can be used to monitor whether cross-disciplinary research programs are indeed contributing to increased disciplinary integration in daily research practice, considering that project teams and funders often underestimate the complexity of such research proposals. Moreover, dedicated support is often required for such projects, at levels that are seldom properly planned for (Schneider et al., 2019). With the caveat that the output publications to be monitored only become available later in the project life cycle, findings on cross-disciplinarity can help modulate and readjust research programs and associated support instruments on a mid-term timeline.
As you can see, our view is very much one of using program evaluation tools, including bibliometrics, to improve research and innovation governance and support mechanisms for project teams, rather than to rank performances.
Hope you enjoyed or will enjoy the read.
Etienne
Senior analyst, Science-Metrix (Elsevier)
Reference
Schneider, F., Buser, T., Keller, R., Tribaldos, T., & Rist, S. (2019). Research funding programmes aiming for societal transformations: Ten key stages. Science and Public Policy, 46(3), 463–478. https://doi.org/10.1093/scipol/scy074