RE: How to evaluate science, technology and innovation in a development context? | Eval Forward

Dear all, 

It is great to see such rich and insightful contributions. It seems there is broad consensus on the importance of using mixed methods for evaluating science and of relying on the QoR4D frame of reference. I really enjoyed reading your opinions and learning what you found challenging in your experiences. 

With an additional day for discussion (through tomorrow, April 13th), we still hope for further contributions. The points below may offer insight and guidance to those who have not yet shared their views and experiences, especially outside the CGIAR context.  

There is an interesting potential debate between what funders find important to evaluate and the priorities of Southern researchers. My understanding is that funders are largely interested in results (outputs, outcomes, impacts), and that the OECD DAC evaluation criteria of “impact” and “efficiency” are particularly relevant for accountability and transparency, demonstrating that taxpayer money is used wisely. Southern researchers, however, prioritize the need for research to be relevant to topical concerns, to the users of research, and to the communities where change is sought. The importance of the relevance dimension was highlighted in several contributions; it relates chiefly to the importance, significance, and usefulness of the research objectives, processes, and findings to the problem context and to society.  

How could relevance be measured in a way that makes the needs of Southern researchers and funders converge? When discussing impacts, Raphael Nawrotzki noted that impact within a scientific field is still best measured by the number of citations an article or book chapter receives.  

The question for the audience is: “How can impact within a scientific field reflect the importance of a given research project and its impact on, or contribution to, the society where change is sought?” There seems to be consensus on the importance of the ‘relevance’ component and on the relationship between the research output, the original Theory of Change, and the process followed in its development. Can this be aligned with measuring ‘relevance’ in a way that funders would also consider solid and credible?  

And last but not least, what about practice: “Have you seen monitoring, evaluation and learning (MEL) practices that could facilitate evaluations of science, technology and innovation?” 

We look forward to further sharing as we close off this discussion, and to identifying opportunities for further engagement.