RE: How to evaluate science, technology and innovation in a development context? | Eval Forward

I share a few observations and learnings from my more than 30 years as an evaluation practitioner from the Global South. (1) These views also echo some of the entries, reflections and blogs already published in this thread. Although we, whether as evaluators or as commissioners of evaluations, believe that the key evaluation questions should guide the choice of methods and approaches, in reality the evaluation choices (whether we explicitly state it or not) are privileged. The privilege may stem from the power of expertise, position or resources. Robert Chambers asks pertinently about “who–whose” reality, who decides, and whose change (2) matters more than others’. Answering any of these questions leads us to ask who has more power to decide – power that may be visible, invisible or hidden. While there is no question about the importance of the quality of science, innovation and technology, or of rigour in evaluations, one may also want to ask whether this necessary condition is a sufficient one.

When we frame the discussion around evaluation as a transformative process and the importance of accepting different world views, we also acknowledge the value of indigenous knowledge and the need for decolonization (3). Unfortunately, we often design evaluations at a distance, and although we may use participatory methods and tools, unless the findings are owned and used by the people in the programme – especially those with little voice – we cannot claim that evaluations have served the public good or benefited all peoples. Central to this argument is the question of what role values play in our evaluations. With our greater understanding of racism, gender inequities and various social cleavages, it would be difficult to accept quality of science bereft of any values related to human rights, inclusion and equity.

Feminist thought recommends methods and tools that can capture the intersectionality of vulnerability, since lived experiences may vary dramatically within any intervention depending on one’s standpoint and intersecting inequities. It is possible that an emphasis on quality of science (QoS) and innovation could address these concerns, but one needs to be particularly vigilant about it: perhaps instead of asking “Are we doing things right?” we must ask, “Are we doing the right things?” A case in point is an example (4) from India, where mango producers had lost 40% of their produce in transit, prompting scientists to introduce a suite of nanomaterials that could be sprayed on the fruit to extend its shelf life. The example shows a pressing societal challenge that was fast-tracked – not necessarily at the expense of the quality of the intervention, but perhaps unconventional in the timeliness and manner in which the solutions were rolled out. The solutions were also context-specific, highlighting that evaluation must measure what is important to communities and peoples, rather than elegantly present evidence that may never be used (5). Southern researchers feel quite strongly about the need for research to be *relevant* (italics are mine) to topical concerns, to the users of research and to the communities where change is sought. (6) In a development context, uptake and influence are as critically important as the quality of the research itself.

Even in seemingly open forums such as the internet, where one is theoretically free to participate without boundaries, research and related innovations have shown that in a seemingly free and open networked society, power is linked to who controls and makes use of communication pathways and their products. Who decides what information and knowledge are produced, shared and used, and by whom – and, finally, whose values are represented – shapes the nature of the knowledge (artefacts) produced. Access is not the same as participation: access refers to the ability to make use of the information and resources provided. One may have access to resources but no control over them, which limits participation, particularly in decision making. It is likely that those who are silent and have the least privilege may actually hold the insight and knowledge that is most valuable. Women, as traditional bearers of local and indigenous knowledge, find themselves cut off from the networked society, where information, communication and knowledge are ‘tradeable goods’ (7).

In summary, if we are unable to address the underlying power asymmetries, then the rigour of our science, research and evaluations, though fulfilling an important purpose, will fall short of addressing the complex demands of our times, of finding solutions to intractable problems, and of keeping our values firmly entrenched in social justice.


  1. Zaveri, Sonal (2019). “Making evaluation matter: Capturing multiple realities and voices for sustainable development”. Contribution to World Development – Symposium on RCTs in Development and Poverty Alleviation.
  2. Chambers, R. (1997). Whose Reality Counts? Putting the First Last. London: IT Publications; Chambers, R. (2017). Can We Know Better? Rugby: Practical Action Publishing.
  3. Zaveri, Sonal, with Silvia Mulder and P. Bilella (2021). “To Be or Not to Be an Evaluator for Transformational Change: Perspectives from the Global South”. In Transformational Evaluation: For the Global Crisis of Our Times, edited by Rob van den Berg, Cristina Magro and Marie-Hélène Adrien…
  4. Lebel, Jean and McLean, Robert (2018). “A better measure of research from the global south”. Nature, Vol. 559, July 2018.
  5. Ofir, Z., T. Schwandt, C. Duggan and R. McLean (2016). RQ+ Research Quality Plus: A Holistic Approach to Evaluating Research. Ottawa: International Development Research Centre (IDRC).
  6. Singh, S., Dubey, P., Rastogi, A. and Vail, D. (2013). Excellence in the context of use-inspired research: Perspectives of the global South…
  7. Zaveri, Sonal (2020). “Gender and Equity in Openness: Forgotten Spaces”. In Making Open Development Inclusive: Lessons from IDRC Research, edited by Matthew L. Smith, Ruhiya Kristine Seward and Robin Mansell. Cambridge, MA: The MIT Press.