Thank you all for your insights and contributions. The discussion brought together different experiences and views, but most participants seem to agree on the core principles of the question.
Before going any further, I will explain my perspective:
Even in laboratory experiments where all the conditions are controlled, scientists allow themselves a margin of error, which they try to keep as small as possible. Therefore, I am not talking about 100% certainty in the results of interventions involving human beings, who are complicated.
My takeaway from the discussion is that we all strive to be “objective and inclusive” as much as we can. The latter expresses our “confidence interval” and “degrees of freedom”.
The discussion brought in a wide array of subjects pertinent to independence/impartiality/neutrality of evaluation. From discussing the concepts to suggesting work methodologies, contributors enriched the discussion.
Different contributions brought up important factors that may influence the independence, neutrality and impartiality of evaluators. Mr. Jean de Dieu Bizimana and Mr. Abubakar Muhammad Moki raised the influence of the evaluation commissioner and of the terms of reference on these concepts. Dr Emile HOUNGBO brought in the financial dependence of the evaluator, especially when the organization/team financing the evaluation is also responsible for implementing the intervention. Mr Richard Tinsley observed that even when funds are available, evaluators may lack neutrality in order to secure future assignments. Mr Tinsley gave the example of farmer organizations that do not play their intended role but are still pushed on smallholders. From Mr Lasha Khonelidze's perspective, the “open-mindedness” of the evaluator is important for bringing in diverse points of view, but it is equally important to ensure that the evaluation is useful to end users (who are to be defined in the ToR).
Mr. Sébastien Galéa suggests working on norms/standards at the level of programme management in the field. He also brings in the importance of peer-to-peer information and experience exchange around the world (e.g. EvalForward). He graciously shared a document whose title clearly indicates that the aim of evaluation is better results in the future, either through subsequent interventions or through adjustments to the evaluated intervention. The paper also explains independence/impartiality from the ADB's perspective and how this organization has worked on these principles. In my view, Weiss' paper, shared by Mrs Umi Hanik, came in as a complement. Weiss' paper analyzes programme development, implementation and evaluation. Its main idea is that programmes are decided within a political environment, and since evaluation is meant to guide decision-making, it also shares the pressure from the political participants in the programme. Thus, for a programme participant, public acceptance is more important than programme relevance. This, I think, is where evaluation independence/impartiality/neutrality come into play.
Mr. Abubakar Muhammad Moki added that some companies recruit evaluators they know, which was confirmed by Mrs Isha Miranda, who added that this practice impacts the quality of evaluation, leading to a decrease in the quality of evaluation reports (an evidence-based argument) :) . Mr. Olivier Cossee added that recruited evaluators should be “reasonably neutral”. This, I believe, puts pressure on the evaluation commissioner to verify/check for “reasonably” neutral evaluators and introduces another variable: to what extent is the evaluator reasonably neutral? (Can we refer to behavioral studies?) For Mrs Siva Ferretti, the evaluator's individual choices are influenced by her/his culture, and it is therefore difficult to be “really” inclusive/neutral. The podcast shared by Mrs Una Carmel Murray gives an example of including all participants in order to dilute the subjectivity of the researcher. Mr Abado Ekpo suggests taking the time to integrate and understand the logic of the different actors in order to conduct an objective evaluation. In addition, Mr Steven Lam and Mr Richard Tinsley discuss the importance of the methodology in bringing in all participants' interests. Mr Lal Manavado summarized the reflection in terms of accountability to fund providers, politicians or social groups. My view is that we should be accountable to the project objectives: were they achieved or not? If not, why not? If achieved, for whom?
Mr. Khalid El Harizi added that the availability of data/information at the start of the evaluation, as well as the capability of evaluators to synthesize the data, are important. It should be noted, however, that even when data are available, they may not be easily accessible to evaluators. This is confirmed by Mr Ram Chandra Khanal, who brought up the issue that lack of time and limited access to information on stakeholders impact data collection.
This discussion clearly raised the issue of defining terms. As previously stated, end users need to be defined. Mrs. Svetlana Negroustoueva also asked for examples to contextualize the term “independence”. In addition, Mr. Thierno Diouf raised the importance of defining all the terms discussed from the perspective of all stakeholders and evaluators. These definitions should be made clear in guides, standards and norms.
Mr. Diagne Bassirou talks about a loss of quality and depth of analysis with “too much” objectivity, since the evaluator may not know the socio-demographic conditions of the area. In my perspective, and as Mr El Harizi stated, there are data/information available (or that should be available) at the start, and the commissioner should make these available to the evaluation team. My experience is that there is always an inception meeting where these issues are discussed and cleared up. The ability to analyze these data/information is a matter of the evaluator's competence, not of his/her independence or impartiality.
In summary, it is possible to achieve a relevant degree of impartiality/neutrality in evaluation, provided that the terms of reference are clear, data are available, and the independence of the evaluator is ensured through sufficient funding and administrative independence. The evaluator also needs to work on her/himself in terms of beliefs, culture and biases. Methodological approaches could help counter possible biases.
Programme funders, as well as programme managers and evaluators, are accountable for the changes brought about by interventions. Could we link this reflection to the costs and social benefits of development interventions?
Lastly, this is probably an “open-ended question”. Therefore, let's keep the discussion open.
Some exchanged links: