Gordon Wanzare

Monitoring, Evaluation, & Learning Expert

I am a Monitoring, Evaluation, and Learning (MEL) expert with over 15 years of related work experience gained in international and national Non-Governmental Organizations (NGOs) and Governments in East and West Africa. My MEL experience has been in the roles of management, advisory, consultancy, and volunteerism.

My contributions

    • Dear Jean and colleagues,

      Thanks for clarifying that the discussion is not limited to programs but also includes projects or any humanitarian or development intervention. A very informative and rich discussion; I am learning a lot in the process!

      When I say "when something is too complicated or complex, simplicity is the best strategy" in the context of evaluations, I mean we do not need to use an array of methodologies and data sources for an evaluation to be complexity-aware. We can keep the data, both quantitative and qualitative, lean and focused on the evaluation objectives and questions. For example, using complexity-aware evaluation approaches such as Outcome Harvesting, Process Tracing, Contribution Analysis, Social Network Analysis, etc. does not necessarily mean several quantitative and qualitative data collection methods have to be applied. In Outcome Harvesting (OH), for instance, you can use document review and key informant interviews (KIIs) to develop outcome descriptors, then do a survey and KIIs during substantiation. I have used SNA and KIIs to evaluate change in relationships among actors in a market system, and SNA followed by in-depth interviews in a social impact study of a rural youth entrepreneurship development program. In essence, you can keep the data collection methods to three (the three-legged stool, or triangle, concept) and still achieve your evaluation objectives with lean and sharp data. A lot has been written on overcoming complexity with simplicity in different spheres of life: management, leadership, etc.
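      To make the SNA idea above concrete, here is a minimal sketch of how one might quantify change in relationships among actors between baseline and endline, using degree centrality (the share of other actors each actor is directly linked to). The actor names and ties below are entirely hypothetical illustrations, not data from the studies mentioned.

```python
def degree_centrality(actors, ties):
    """Degree centrality: an actor's direct links divided by the
    (n - 1) links it could possibly have."""
    n = len(actors)
    counts = {a: 0 for a in actors}
    for a, b in ties:
        counts[a] += 1
        counts[b] += 1
    return {a: counts[a] / (n - 1) for a in actors}

# Hypothetical market-system actors and their relationships
actors = ["farmer_coop", "input_supplier", "trader", "bank", "extension_officer"]

baseline_ties = [("farmer_coop", "trader"),
                 ("farmer_coop", "extension_officer")]
endline_ties = baseline_ties + [("farmer_coop", "input_supplier"),
                                ("farmer_coop", "bank"),
                                ("input_supplier", "trader")]

before = degree_centrality(actors, baseline_ties)
after = degree_centrality(actors, endline_ties)

# Report the change per actor; KIIs would then explain *why* ties changed
for actor in actors:
    change = after[actor] - before[actor]
    print(f"{actor}: {before[actor]:.2f} -> {after[actor]:.2f} ({change:+.2f})")
```

      The quantitative change scores stay lean; the qualitative follow-up (KIIs or in-depth interviews) carries the explanatory load, which is the pairing described above.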

      On the issue of who decides on the methodology, the evaluator or the program team: from my experience, a MEL plan is very clear on the measurements and evaluation methods, and MEL plans are developed by the program team. Evaluators are asked to propose an evaluation methodology in their technical proposals to serve two purposes: to assess their technical competence and to identify the best fit with the evaluation plan. Typically, the evaluator and the program team will consultatively agree on the best-fit methodology during the inception phase of the evaluation, and this forms part of the inception report, which is normally signed off by the program team.

      My thoughts. 


    • Greetings to all!

      Great discussion question from Jean and very insightful contributions!

      First, I think Jean's question is very specific: how mixed methods are used not just in evaluations but in PROGRAM evaluations, right? We know that a program consists of two or more projects, i.e. a collection of projects. Therefore, programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know). The Oxford English Dictionary tells us that a method is a particular procedure for accomplishing or approaching something, and tools are used in procedures. I am from the school of thought that believes that when something is too complicated or complex, simplicity is the best strategy!

      Depending on the context, program design, program evaluation plan, and evaluation objectives and questions, the evaluator and program team can agree on the best method(s) to achieve the evaluation objectives and comprehensively answer the evaluation questions. I like what happens in the medical field: in hospitals, except in some emergency situations, a patient will go through triage, clinical assessment and history-taking by the doctor, laboratory examination, radiology, etc., and then the doctor triangulates these information sources to arrive at a diagnosis, prognosis, and treatment/management plan. Based on circumstances and resources, judgements are made as to whether all these information sources are essential.

      Mixed methods are great, but the extent and sequencing of their use should be based on program and evaluation circumstances; otherwise, instead of answering the evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once, e.g. open-ended surveys, KIIs (Key Informant Interviews), community reflection meetings, observations, document reviews, etc., in addition to quantitative methods, may not be that smart.

      In any case, perhaps the individual projects within the program have already been comprehensively evaluated and their contribution to program goals documented, and something simple, like a review, is all that is necessary at the program level.

      When complicated or complex, keep it simple. Lean data.

      My thoughts.



    • Dear members,

      This is an interesting discussion!

      Reporting is part of communication, and a report is one of the communication tools. Ideally, every project, program, or intervention should have a clear communication plan informed by a stakeholder analysis (one that clearly identifies roles, influence, and management strategy). A specific communication plan for evaluations can also be developed. A communication plan typically has activity and budget lines and responsibilities, and should be part of the overall project, program, or intervention budget. It may not be practical for the evaluator to assume all responsibilities in the evaluation communication plan, but they can take up some, particularly the primary ones, since communication may be a long-haul effort, especially if it targets policy influence or behaviour change, and, as we all know, evaluators are normally constrained by time. Secondary evaluation communication can be handled by the evaluation managers and commissioners with the technical support of communication partners.

      My take. 


  • I recently used a survey of evaluators to explore the concept of evaluation use, how evaluation practitioners view it and how this translates into their work – in other words, how evaluators are reporting and supporting evaluation use and influence.

    Evaluation use and utilization: an outline

    Michael Quinn Patton’s utilization-focused evaluation (UFE) approach is based on the principle that an evaluation should be judged on its usefulness to its intended users. This requires evaluations to be planned and conducted in ways that increase the use of the findings and of the process itself to inform and influence decisions.


    • Dear all,

      Thank you for the insightful and very helpful contributions to this discussion I initiated. Thanks also for completing the survey; we received 70 responses from evaluators. This discussion has been very rich, and as part of continued knowledge sharing, we will synthesize the contributions, analyze the survey responses, and publish a blog post in the coming few days, which I believe you will find helpful. Please be on the lookout for the blog post on the EvalForward website!

      Thank you! Asante! Merci! Gracias! Grazie! शुक्रिया, ඔබට ස්තුතියි, நன்றி, Salamat, Takk skal du ha, Bedankt, Dankeschön ...

    • Dear all,

      Knowledge, experiences, and thoughts being shared on this topic are very insightful and helpful. Thank you for your contributions! Here are some of the takeaways I have picked so far. More contributions/thoughts are most welcome. Let's also remember to complete the survey.

      Concise evaluation report

      A voluminous evaluation report makes the reader/user bored, rereading things they already know or struggling to get to the point. Very few people, including evaluation users, will spend time reading huge evaluation reports. In fact, even evaluators are less likely to read (once finalized) a report they have produced! Some of the recommendations are:

      • Keep the executive summary to fewer than 4 pages (printed on both sides), highlighting the findings, conclusions, and recommendations based on the findings.
      • Make a summary of fewer than 10 pages, with more tables and diagrams and findings in bullet points.
      • Keep the full report to about 50 pages.
      • Highlight changes (or the lack of them) and point out counterintuitive results and insights on indicators or variables of interest.

      Beyond the evaluation report: use of visuals

      As long as evaluations are perceived mainly as bureaucratic requirements and reports, we will miss out on fantastic possibilities to learn better. It is unfortunate that we assume "report writing" alone is the best way to capture and convey evidence and insights. Communicating evaluation findings in a concise, comprehensible, and meaningful way is a challenge. We need both literal and visual thinking to make use of evaluations, summing up findings in a more visual way through graphics, drawings, and multimedia. For example, the UN WFP in the Asia Pacific region is combining evaluation with visual facilitation through a methodology called EvaluVision. It is helpful to involve people who may have fantastic learning, analytical, and communication skills but who are not necessarily report writers.

      However, the challenge is that visuals are often seen as merely "nice" and "cool". Everyone likes them and feels they are useful, but a conventional report still has to be developed, because this is what evaluation commissioners, including funders, want.

      A paradigm shift  in making recommendations

      Often, there are gaps between findings, conclusions, and recommendations in the evaluation report, which can negatively affect the quality and use of evaluations. Traditionally, evaluators proceed to make recommendations from the conclusions; however, letting the project implementation team bring on board a policy-maker to jointly draft actionable recommendations can help improve evaluation use. The evaluator's role is to make sure all important findings or results are translated into actionable recommendations by supporting the project implementation team and policy-maker to remain as close to the evaluation evidence and insights as possible. This can be achieved by asking questions that help get to actionable recommendations, and by ensuring a logical flow and empirical linkage of each recommendation to the evaluation results. The aim should be for the users of the evaluation to own the recommendations while the evaluation team owns the empirical results. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers. Stakeholder analysis is, therefore, key to understanding the interest, influence, and category of stakeholders to better support them to use evaluations.

      Lessons from audit practice: Can management feedback/response help?

      Should feedback be expected from users of evaluations? Typically, draft evaluation reports are shared with the implementers for review and approval. In the auditing field, feedback is mandatory and prompt: the client must respond to the auditor's observations, both positively and negatively. Perhaps, as mentioned elsewhere, working with the users of the evidence generated through an evaluation, in the form of findings and conclusions, to make actionable recommendations may serve as a management feedback/response. However, the communication and relationship should be managed carefully so that the evaluation is not perceived as audit work, just as in some cases it is perceived as "policing".

      The Action Tracker

      An Action Tracker (in MS Excel or any other format) can be used to monitor over time how the recommendations are implemented. Simplifying the evaluation report into audience-friendly language and formats, such as a two-page policy brief, evaluation brief, or evaluation brochure based on specific themes that emerged from the evaluation, is a very helpful practice for a couple of reasons:

      • Evaluators are not the sole players; there are other stakeholders with better mastery of the programmatic realities.
      • The implementation team gets space to align their voices and knowledge with the evaluation results.
      • The submission of the evaluation report is not, and should not be, the end of the evaluation; hence the need for institutions to track how recommendations from the evaluation are implemented, for remedial actions, decision- or policy-making, using evaluation evidence in new interventions, etc.
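      As a minimal sketch of the Action Tracker idea above, here it is expressed in plain Python rather than MS Excel. The field names, sample recommendations, owners, and dates are hypothetical illustrations; a real tracker would mirror the recommendations of the specific evaluation.

```python
from datetime import date

# One row per recommendation from the evaluation report (hypothetical examples)
tracker = [
    {"id": 1,
     "recommendation": "Revise targeting criteria for youth participants",
     "owner": "Program Manager",
     "due": date(2024, 6, 30),
     "status": "in progress"},
    {"id": 2,
     "recommendation": "Develop a two-page policy brief for county officials",
     "owner": "MEL Officer",
     "due": date(2024, 3, 31),
     "status": "not started"},
]

def update_status(tracker, rec_id, new_status):
    """Record progress on one recommendation."""
    for row in tracker:
        if row["id"] == rec_id:
            row["status"] = new_status
            return row
    raise KeyError(f"No recommendation with id {rec_id}")

def overdue(tracker, today):
    """List recommendations past their due date and not yet done,
    for follow-up at periodic review meetings."""
    return [r for r in tracker if r["due"] < today and r["status"] != "done"]

update_status(tracker, 2, "in progress")
print([r["id"] for r in overdue(tracker, date(2024, 7, 1))])  # prints [1, 2]
```

      The essential point is not the tool but the fields: each recommendation carries an owner, a deadline, and a live status, so the institution can see at a glance what has stalled after the evaluation ends.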

      Alliance and relationship building for evidence use

      Typically, there are technical and political sub-groups or teams. In some situations, technical teams report to an administrative team that interfaces with the policy-makers. Evaluators often work with the technical team and may not get access to the other teams; in such cases, the report and recommendations may carry little weight irrespective of the process followed. Another issue of concern is the delay between report submission and policy action in developing countries. Institutionalizing the use of evidence is key to enhancing the use and influence of evaluations, but it may take time, particularly through structural, top-down changes. Having top management fully support evidence use is a great opportunity not to miss. However, small but sure steps to initiate change from the bottom, such as building small alliances and relationships for evidence use, gradually bringing on board more "influential" stakeholders, and highlighting the benefits of evidence for the implementing organization, decision-makers, and communities, are also very helpful.

      Real-Time Evaluations

      Evaluation needs to be rapid and timely in the age of pandemics and crisis situations. We need to 'communicate all the time'. One of the dimensions of data quality is timeliness, which reflects the length of time between data becoming available and the events or phenomena they describe. Timeliness is assessed against the period within which the information retains its value and can still be acted upon. Evaluations should be timely for them to be of value and acted on.

      Beyond evidence use

      The ultimate reason for evaluation is to contribute to social betterment or impact. This includes, but goes beyond, the mere use of evaluation results to change policies or programs. Seen this way, the use of evaluation per se stops being evaluation's final objective, since evaluation aims at changes that promote improvements in people's lives. Demonstrating how an evaluation contributes to socio-economic betterment or impact can enhance the value, influence, and use of evaluations.

      Gordon Wanzare