Can we settle for evaluation alone to ensure that the SDGs are achieved?
In recent years, evaluation has become both an essential discipline and an established practice for assessing the achievement of development goals around the world: an academic discipline firmly rooted in the best universities, and a fairly well-established practice among development practitioners, thanks in particular to strong support from major donors, including the World Bank.
The institutionalization of evaluation in the development sphere has been encouraged since the launch of the first specialized evaluation training courses, such as IPDET or PIFED. Paradoxically, national systems and practice of monitoring development actions have been the subject of much criticism among development practitioners, particularly towards the end of the Millennium Development Goals (MDGs) period: the gap had widened between the practice of evaluation and the practice of monitoring development actions.
The launch by the United Nations of the Sustainable Development Goals (SDGs) highlighted the need to improve and promote national monitoring-evaluation systems, particularly in less developed countries.
A few years later, where are we? It would appear that the practice of evaluation has developed well with regard to development actions financed by donors, in particular because of the conditionalities imposed, but that the same is not true for national systems for monitoring development actions. I am concerned then that a certain "ideology" is circulating among experts and practitioners of evaluation suggesting that evaluation could be self-sufficient or easily replace the practice of monitoring.
Some questions are worth asking in this regard:
- Are evaluation and monitoring mutually exclusive practices, or are they complementary and both necessary for assessing and correcting the performance of a development action?
- If monitoring is of any importance for the assessment and correction of development action, why does it not receive as much attention today as evaluation?
- Although efforts have been made in some less developed countries to improve national monitoring systems and monitoring practice, these remain insufficient and are sometimes too closely tied to donor-imposed conditionalities. So what needs to be done to upgrade national monitoring systems and promote the practice of monitoring development actions?
- If we are to continue to talk about monitoring and evaluation at the institutional level, to what extent should our discourse and attention be about monitoring and evaluation, not just evaluation?
Thank you for your reactions,
Mustapha Malki
Elias Akpé Kuassivi Segla
Monitoring and evaluation specialist, senior research officer, Presidency of the Republic of Benin
Very sincere thanks to you, dear Mustapha,
I hope we will have the chance to meet soon. In the meantime, I am sharing the attached evaluation report on the National Evaluation Policy (PNE), which provides more detail and factual data on the current state of evaluative practice in Benin.
Cordial greetings to the whole community.
Elias A. K. SEGLA
Evaluation specialist
Presidency, Republic of Benin
Bureau of public policy evaluation and government action analysis
08 BP 1165 Cotonou - Benin
Mustapha Malki
Independent consultant
Well done, Elias, for this excellent contribution, which I would not have been able to produce myself.
You summarize the situation perfectly and recommend exactly what to do. While some earlier contributions were somewhat prescriptive or even theoretical, yours draws on practical experience, and that is exactly what we are looking for as members of this kind of platform for interaction and exchange.
I had the opportunity to visit Benin in June 2014 as a World Bank consultant to support the national team implementing a five-year community development program that began as Phase II of a similar project. I was struck by the Government's efforts and its ambition to institutionalize monitoring and evaluation in all sectors; those were still the first years of implementation of the 2012-2021 government policy you mention in your contribution (at least, I imagine so). I visited several government ministries and found in each an M&E service that collated a range of data from its sector. At the time, not all the means were yet in place, but six years later, reading you now, I understand that we have in our hands a rather interesting experience, one that could inspire several countries, African ones especially but others as well, and help make our recommendations more practical and move our exchanges beyond the normative and the theoretical.
Congratulations once again for this contribution and good luck for Benin.
Elias Akpé Kuassivi Segla
Monitoring and evaluation specialist, senior research officer, Presidency of the Republic of Benin
Hello to all the community,
I fully believe that monitoring and evaluation are two distinct functions that must complement each other harmoniously. The former feeds the latter with reliable, quality data, and the latter, through qualitative analysis of the secondary data provided by the former, improves their interpretation. Together, monitoring and evaluation provide evidence for informed decision-making.
It is true that for a long time the two functions were conflated under the label "Monitoring & Evaluation", terminology under which evaluation was obscured to the sole benefit of monitoring activity. It would seem, therefore, that evaluation has been taking its revenge on monitoring in recent years through its institutionalization, under the impetus of a leadership that has not yet achieved the necessary alchemy between these two inseparable functions.
Take the case of Benin, for which I would like to share some results of the evaluation of the implementation of the National Evaluation Policy (PNE) 2012-2021, a policy that aimed to create synergy among stakeholders in order to build an effective national evaluation system through the Institutional Framework for Public Policy Evaluation (CIEPP). The National Evaluation Policy distinguishes the two functions, stating:
“Evaluation [...] is based on data from monitoring activities as well as information obtained from other sources. As such, evaluation is complementary to the monitoring function and is specifically different from the control functions assigned to other state structures and institutions. [...] The monitoring function is carried out in the Ministries by the enforcement structures under the coordination of the Directorates of Programming and Prospective and the Monitoring-Evaluation units. These structures are responsible for working with the Office for the Evaluation of Public Policy and other evaluation structures to provide all the statistical data, information and insights needed for evaluations.”
The organizational measures provided for on pages 32 and 33 of the attached National Evaluation Policy document were therefore taken, measures that clearly reveal the ambition to bring the two functions into symbiosis by creating the synergy among stakeholders needed for the harmonious conduct of participatory evaluations.
Put to the test of the facts, the results-based management movement and budgetary reforms in Benin have instilled a culture of monitoring and evaluation in the public administration. But has this culture been reinforced by the implementation of the National Evaluation Policy, and more generally by the institutionalization of evaluation?
The state of evaluation in departments today shows that the implementation of the National Evaluation Policy has not had a significant impact on evaluative practice. The programming and funding of evaluation activities, the definition and use of monitoring-evaluation tools (essentially monitoring tools), and the commissioning of evaluations of sectoral programs or projects are the factors the field data allowed us to analyze. The picture they paint is of departments focused far less on evaluation activities than on monitoring ones.
Resources allocated to evaluation activities in departments have remained relatively stable and generally do not exceed 1.5% of the total budget allocated to the ministry. This reflects the low capacity of departments to prioritize evaluation activities. Under these conditions, evaluative practice cannot be expected to develop to any great extent. This is corroborated by the execution rate of programmed monitoring and evaluation activities, often on the order of 65%. Added to this is the fact that the activities actually carried out are predominantly monitoring ones: evaluations of projects or programs are rare, and the few evaluations carried out in some departments are sometimes done only at the behest of the technical and financial partners who make them a requirement.
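To make these two measures concrete, here is a minimal sketch, with purely hypothetical figures, of how a ministry M&E unit might compute the budget share and execution rate cited above; the function names and numbers are mine, not taken from the evaluation report.

```python
# Illustrative sketch (hypothetical figures): the two metrics cited above,
# computed the way a ministry M&E unit might track them year to year.

def share_of_budget(evaluation_budget: float, ministry_budget: float) -> float:
    """Share of the ministry budget allocated to evaluation, in percent."""
    return 100 * evaluation_budget / ministry_budget

def execution_rate(activities_done: int, activities_planned: int) -> float:
    """Share of programmed M&E activities actually carried out, in percent."""
    return 100 * activities_done / activities_planned

# Hypothetical ministry: a 10 billion FCFA budget, 140 million of it for
# evaluation, and 13 of 20 programmed M&E activities executed.
print(f"Evaluation share: {share_of_budget(140e6, 10e9):.1f}%")  # ~1.4%, under the 1.5% ceiling
print(f"Execution rate: {execution_rate(13, 20):.0f}%")          # 65%, the order cited above
```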
However, since the adoption in the Council of Ministers of the National Methodological Assessment Guide, there has been an increase in evaluation activities in departmental annual work plans, particularly around theory of change and the programming of some evaluations. These results already point to a dynamic under way in departments.
In addition, few departments have a regularly updated, reliable monitoring and evaluation database. The state of development of the technological infrastructure supporting the information system, and of the communication and dissemination of evaluation results at departmental level, mirrors the state of evaluative practice described above.
In the end, the state of development of evaluative practice at the departmental level is explained by the lack of an operational evaluation program. Without this operationalization tool, the three-year evaluation program, the National Evaluation Policy has not been able to have a substantial effect on the evaluative culture of departments.
When we go down to the level of the municipalities, the situation is even more serious: the level of development of monitoring and evaluation activities (essentially monitoring) is very unsatisfactory. The evaluation produced very specific data on this, and I am happy to share the evaluation report if you are interested.
All this allows me to answer Mustapha's four questions clearly:
There is also a need to strengthen:
Thank you all.
[this is a translation of the original comment in French]
Lal Manavado
Consultant, independent analyst/synthesist
Yes, monitoring is important in evaluation, but it must be understood that unless one has decided in advance what exactly one is going to monitor, it serves no useful purpose. This 'what' is determined by which result's achievement one intends to ascertain. It is easy to overlook this vital logical fact, and it often happens. Thus, monitoring is logically subsumed by the evaluation for which it is intended.
Best wishes!
Lal Manavado.
Eddah Kanini (Board member: AfrEA, AGDEN & MEPAK)
Monitoring, Evaluation and Gender Consultant/Trainer
The practice of monitoring cannot be replaced by evaluations; rather, it should feed into them. Monitoring should go hand in hand with the implementation of development programs, projects and interventions: it is through monitoring that inputs, outputs and the processes applied are checked, and that the important requirement of timeliness is upheld while the program awaits the evaluation that will measure results and draw conclusions.
Therefore, as a monitoring and evaluation consultant, I would say that both the M and the E are critical and complement each other.
Bintou Nimaga
Consultant
Thank you for launching this topic, which provides an opportunity to clarify two aspects, monitoring and evaluation, that are very often the subject of discussion in development initiatives.
Monitoring and evaluation are two elements that complement each other to better guide the project/program towards its development results and objectives. Neither can replace the other, and good monitoring is an assurance of a good evaluation.
The formulation of indicators is an important step, in that it shapes the project's monitoring and evaluation system. Well formulated, indicators also reflect the criteria of effectiveness, sustainability, efficiency and effects/impacts. Analyzing coherence and relevance, in particular against the needs and aspirations of the beneficiary communities, also calls for a participatory analysis mechanism that integrates the interests of all stakeholders. When carried out well in the field, coherence/relevance analysis is also an opportunity to integrate equality/equity dimensions into the monitoring and evaluation indicators (the project dashboard).
In my view, indicators grounded in results-based management (RBM) favour the establishment of operational and realistic monitoring. I have found that project monitoring systems are generally focused on the execution of activities. This model is easier to implement, but it is less likely to steer planning towards the intended results, and it does not make evaluators' work easier either, because the data produced are not sufficient to establish the elements of analysis and compare them with field results. Now, with ever-shrinking budgets, the duration and quality of evaluations suffer. This lack of evidence generally leads to differences of opinion between project managers and evaluators, for you will agree that an evaluator, however competent, will not have access to all the elements needed for a good evaluation. Hence the need for an adequate, effective and integrated monitoring system.
I recommend that agricultural projects provide the means for good field analysis during the planning phase and pay more attention to indicators and the monitoring mechanism, integrating the evaluation criteria. Periodic synthesis and analysis of monitoring data is also important, as it offers technical opportunities to correct project shortcomings before evaluations take place.
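As an illustration of the kind of results-based project dashboard described here, below is a minimal sketch; the indicator names, criteria tags, targets and the 80% on-track threshold are all hypothetical assumptions, not taken from any real project.

```python
# A minimal sketch of a "project dashboard": indicators tagged with
# evaluation criteria, with a periodic synthesis flagging those off track.
# All names, targets and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    criterion: str   # e.g. effectiveness, efficiency, equity, sustainability
    target: float
    actual: float

    def achievement(self) -> float:
        """Achievement against target, in percent (higher is better here)."""
        return 100 * self.actual / self.target

dashboard = [
    Indicator("Hectares under improved seed", "effectiveness", 5000, 3100),
    Indicator("Training sessions delivered", "efficiency", 40, 26),
    Indicator("Women among trained farmers (%)", "equity", 50, 44),
]

# Periodic synthesis: surface under-performing indicators before the evaluation.
for ind in dashboard:
    status = "ON TRACK" if ind.achievement() >= 80 else "REVIEW"
    print(f"{ind.name:35s} [{ind.criterion:13s}] {ind.achievement():5.0f}%  {status}")
```

A synthesis of this kind, run at each reporting period, is one concrete way monitoring data can correct project inadequacies before the evaluators arrive.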
Freddy Guigonou Pierre PADONOU
Consultant, Cosinus Conseils
Hello, dear contributors, and thank you for the exciting debate on the importance of the monitoring and evaluation functions in the implementation of national projects/programs contributing to the achievement of the SDGs.
First, it should be remembered that the monitoring and evaluation functions both come into play during the active phase of projects/programs, that is, after the planning phase and the establishment of the intervention's logical framework. However, while complementary, the two functions are distinct.
Indeed:
It is noted through these two definitions that the monitoring function is internal to the organization implementing the project/program/policy, whereas the evaluation function can be internal or external (external for the sake of objectivity and the independent expert opinion it provides).
Monitoring is an ongoing process and tends to focus on ongoing activities. Evaluations are conducted at specific times to examine how activities have been conducted and what their effects have been. Monitoring data are generally used by managers for project/program implementation: tracking outputs, managing the budget, ensuring procedural compliance, and so on. Evaluations may guide implementation (e.g., a mid-term evaluation), but they are less frequent and instead examine significant changes (achievements) that require greater methodological rigour in the analysis, such as the impact and relevance of an intervention.
Ultimately, the distinction between monitoring and evaluation is that monitoring is the continuous analysis of project progress towards achieving expected results in order to improve decision-making and management; while evaluation assesses the efficiency, effectiveness, impact, relevance and sustainability of policies and implementation activities.
Returning to the emphasis placed on monitoring and evaluation by development partners, evaluation is clearly the more important to them, since it gives them a global understanding of the real changes induced by their intervention. Monitoring, by contrast, is tied to the management capacity of the project's beneficiary states: it is their responsibility to monitor the project well and adjust management methods as needed to achieve the desired results. And I believe that for this function, even if resources (human, material or financial) are sometimes insufficient, development partners plan a budget envelope in advance. Evaluation will then corroborate the major deficiencies identified during monitoring that found no adequate solution before the end of the intervention, so as to draw lessons (capitalization of achievements) for the implementation of other initiatives.
By way of suggestions/recommendations:
Gninnakan Oumar SAKO
Expert in Strategic Planning, Monitoring and Evaluation
Dear All,
I would like to make my modest contribution to the discussion revived by Dr. Mustapha Malki. My view is that the claim that evaluation is better developed as a practice than the monitoring function should be put into perspective across countries and stages of development. Indeed, most African countries have medium-term strategy or development documents aligned with the continental and global development agendas (Agenda 2063 and Agenda 2030) to better manage their development processes. These National Development Plans, which frame all interventions within these states, provide for monitoring and evaluation mechanisms to better assess progress towards the various objectives. The operationalization of these mechanisms allows the production of periodic reports monitoring the implementation of the national plans and development strategies. However, several reports and studies show that the practice of evaluation remains weak in many African countries.
This could be explained by several factors: the high cost of evaluations, policy makers' low interest in them, a poor understanding of their scope among many stakeholders, inadequate legal and regulatory frameworks for evaluation, and the absence of national evaluation policies. These factors tend to shift the focus to monitoring activities, which allow progress to be assessed in the implementation of government and Technical and Financial Partner interventions at the national, local and sectoral levels. Technical and Financial Partners thus support the implementation of robust monitoring systems across many countries, systems that are an important basis for conducting credible evaluations. Efforts will be needed, however, to strengthen national statistical systems so that they provide reliable data to feed monitoring and evaluation systems. Efforts must also continue to raise awareness within states of the scope and value of evaluations. This could build on the growing commitment of Technical and Financial Partners to evaluations of development policies, programmes and projects.
SAKO G. Oumar
Expert in Strategic Planning, Monitoring and Evaluation
Mustapha Malki
Independent consultant
Dear all,
It has been more than a month since we started this discussion on the mismatch between monitoring and evaluation, two functions that have always been considered complementary and therefore inseparable. As a first reaction, I must express my surprise that only four contributions have been recorded on this theme. Why such a weak response from our group members?
Beyond this surprise, I have reviewed the three reactions that specifically address the issue of monitoring and evaluation practice, and I propose to relaunch the debate so that we can draw some recommendations from it. For the record, and to be clear in my recommendations, I will focus on the monitoring function, distinguishing it from evaluation practice within any monitoring-evaluation system, because it seems to me that the term 'monitoring-evaluation' very poorly hides the existing mismatch between the two functions, which do not receive the same attention either nationally or internationally.
As the first to respond, Natalia observes that theories of change are most useful when developed during the planning or formulation phase of the intervention, serving as the foundation of the monitoring-evaluation system. This is the essence of monitoring and evaluation theory as presented in many specialized textbooks.
She also suggests that evaluations could be more useful for learning from the intervention if theories of change and evaluation questions were fed by the questions program teams formulate after analysing their monitoring data.
But isn't that what we are supposed to do? And if so, why is it generally not done that way?
In her contribution, Aurélie acknowledges that evaluation is better developed as a practice than its sister function, monitoring, perhaps because evaluations are done primarily when supported by dedicated external funding, and are thus linked to an external funder. This is indeed the general pattern easily observed in the least developed countries. She also asks why the monitoring function has not yet received the same interest from donors, and why monitoring systems are not required as a priority, given how essential this tool is for learning from past actions and improving future ones in time. She even hints at an answer by referring to a study: countries need to develop a general results-based management culture, which begins, even before monitoring, with results-based planning. But she does not explain why this is not yet in place, four years after the SDGs were launched. She concludes by acknowledging that in many institutions, both national and international, monitoring is still largely underestimated and under-invested, and suggests that it is up to evaluators to support the emergence of the monitoring function in their respective spheres of influence, even if it means setting aside the sacrosanct principle of independence for a time. But she does not show us how evaluators can succeed in bringing out this much-desired monitoring function where large donors and large capacity-building programs have failed.
The third contribution comes from Diagne, who begins by recognizing that when a monitoring-evaluation system is developed, the focus falls more on functions and tools than on the field, or scope, and purpose of the system, taking into account the information needs of the funder and other stakeholders. He argues that if the main purpose of a monitoring-evaluation system is to accompany the implementation of an intervention with constant critical reflection, in order to achieve the results assigned to it and to give warning when implementation conditions become critical, then a review, I would personally say a redesign, of the monitoring-evaluation system is necessary. And he concludes by stressing that development policies do not give enough importance to monitoring and evaluating the SDGs: they merely compile data from programmes implemented with foreign partners to express progress against a particular indicator, which is far from good monitoring and evaluation practice.
At least two contributions (Aurélie's and Diagne's) recognize that a major overhaul of monitoring and evaluation is needed in the era of results-based management and all its results-based corollaries.
What we can note from all these contributions is a unanimous view of the interest and importance of strengthening the complementarity between monitoring and evaluation as two mutually reinforcing functions; what we do not know is how to build a monitoring function worthy of the current practice of evaluation. As the saying goes, correctly identifying the causes of a problem is already half its solution. The major cause of the mismatch between monitoring and evaluation is that evaluation has been consolidated by funders and development partners because it addresses their concerns about the performance of the programs they fund or implement. Monitoring, on the other hand, is a function that chiefly benefits the countries receiving development assistance, and it does not yet seem important to their governments, for several reasons. Since there is very little external investment in the monitoring function at the national level, the mismatch between the two functions keeps growing. So if anything can be done to mitigate it, donors and development partners should be encouraged to invest in strengthening national monitoring systems, and to conduct programmes that convince the governments of recipient countries of the interest and importance of strengthening national monitoring-evaluation systems.
Let us hope that this contribution will relaunch the debate on this topic...
Mustapha
Richard Tinsley
Professor Emeritus, Colorado State University
I hope everyone is healthy and taking good care of yourselves and your families, particularly our colleagues working in Rome. Are you all able to work from home and keep up with the development efforts? The situation does give us emeritus people a good chance to draft responses to various forums of interest.
Regarding the interest in M&E, individually or jointly, my concern is that the value of the exercise is only as good as the questions being asked, the way the data are tabulated, and the finances available to implement the M&E program, particularly within host governments with a limited tax base to support any services, including development activities. Associated with this is the stated versus underlying objective of the M&E effort. As I understand it, M&E is designed to be an independent effort on behalf of the underwriting taxpayers to ensure the development money is well invested and not wasted, to guide future projects to address objectives more effectively, and to allow projects to evolve to better serve the intended beneficiaries. It should not be a propaganda tool to promote projects as successful when, by all normal business standards, they are complete failures. Having listened to many of the USAID MEL (Monitoring, Evaluation & Learning) webinars and reviewed numerous project reports, I am left with the distinct impression that the MEL effort serves primarily to convince the American public and the elected members of Congress that major contributions are being made to rural poverty alleviation for smallholder farmers, while most of the intended beneficiaries avoid active participation as if it were the plague, or perhaps the current coronavirus. This may effectively attract continued funding, but it does nothing for beneficiaries other than keep them deeply entrenched in poverty. Worse, M&E activities diverted to project propagandizing will have a substantially negative impact, as they carry failed programs forward into future programs and prevent them from evolving to better serve the beneficiaries.
Allow me to illustrate, using poverty alleviation for smallholder farmers as the reference beneficiaries:
Missed question: One question that has been overlooked for most of the last 50 years is the timing of agronomic activities, starting with crop establishment. This gets to the limitations of agronomic research, which does an excellent job of determining the physical potential of an area but says nothing about the operational requirements for extending small-plot results across the rest of the farm or community. The assumption is that labor or contract mechanization is readily available, and that farmers only need to be "taught" the value of early planting. A valid M&E effort 50 years ago, seeking information that included simple field observations on the timing of agronomic operations, would have noted that under manual operations crop establishment is spread over eight weeks or more, with yields declining rapidly as planting is delayed, to the point that it is impossible for manual agriculture to meet basic family food security. Had M&E programs picked that up, the whole poverty alleviation effort could have shifted from badgering smallholders on the importance of early planting to providing access to the operational requirements, such as contract tillage, that would make the timing of crop establishment discretionary. It would also have noted that the current emphasis on value chain development as a means to promote additional production, with its underlying assumption of surplus operating capacity, is premature until operational capacity is increased, so that farmers can plant their crops in a more timely manner, produce enough to meet family food security, and still have ample production to justify the improved value chain.
Webpage References:
https://webdoc.agsci.colostate.edu/smallholderagriculture/OperationalFeasibility.pdf
https://smallholderagriculture.agsci.colostate.edu/integration-an-under-appreciated-component-of-technology-transfer/
https://smallholderagriculture.agsci.colostate.edu/most-effective-project-enhancing-access-to-contract-mechanization-via-reconditioned-used-tractors/
https://webdoc.agsci.colostate.edu/smallholderagriculture/BrinksDrudgery.pdf
The failure to identify the operational limits of smallholder farmers has a major impact on the current emphasis on quality nutrition. Here the underlying question is: what calories are needed to optimize the economic opportunities that are largely associated with heavy manual agronomic field work? This requires at least 4000 kcal/day, but with manual agriculture most farmers are limited to 2000 to 2500 kcal/day, barely covering the 2000 kcal/day of basic metabolism and leaving only enough work calories for a couple of diligent hours of manual labor (a back-of-envelope sketch of this calorie budget follows the references below). That may go a long way towards explaining the eight-week crop establishment period. It also means it will be difficult to get a diversified diet accepted if it provides fewer calories, which in turn would reduce farmers' economic opportunities, including crop production. Yet nutrition M&E emphasizes the impressive number of "beneficiaries" informed, without looking at the compromises those beneficiaries face in using the information, or at the affordability of the improved nutrition relative to available income. The whole concept of dietary requirements matched to economic opportunities seems completely lost on the nutrition improvement effort, though certainly not on the "beneficiaries" being badgered with nutritional information they cannot use. How often are our proposed agronomic interventions more labor intensive, and thus an attempt to oblige hungry, exhausted smallholders to exert energy well in excess of their available calories, possibly as much as twice their available calories? In doing so, have we met the definition of genocide, or come very close to it?
Webpage References:
https://webdoc.agsci.colostate.edu/smallholderagriculture/ECHO-Diet.pdf
https://smallholderagriculture.agsci.colostate.edu/ethiopia-diet-analysis/
https://webdoc.agsci.colostate.edu/smallholderagriculture/DietPoster.pdf
https://smallholderagriculture.agsci.colostate.edu/1028-2/
https://smallholderagriculture.agsci.colostate.edu/affordability-of-improved-nutrition-while-optimizing-economic-opportunities/
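As promised above, here is a back-of-envelope sketch of the calorie budget; the 300 kcal/hour cost of heavy field work is my assumption, while the intake and basal-metabolism figures are those cited in the comment.

```python
# Back-of-envelope version of the calorie argument. The 300 kcal/hour cost
# of heavy manual field work is an assumption; the intake and basal figures
# are those cited in the text.
BASAL = 2000        # kcal/day, basic metabolism
HEAVY_WORK = 300    # kcal/hour of heavy manual field work (assumed)

def work_hours(intake_kcal: float) -> float:
    """Hours of heavy field work a day's calorie intake can support."""
    return max(0.0, (intake_kcal - BASAL) / HEAVY_WORK)

for intake in (2000, 2500, 4000):
    print(f"{intake} kcal/day -> {work_hours(intake):.1f} h of heavy field work")
# 2000 -> 0.0 h; 2500 -> ~1.7 h ("a couple diligent hours"); 4000 -> ~6.7 h
```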
Data Manipulation: Even when quality M&E data are collected, they can be tabulated either as a propaganda tool to promote programs, even basically failed ones, or as a guide to the evolution of future programs. There are basically two ways of tabulating M&E data. If you are interested in using M&E for propaganda, regardless of how successful the project is, you simply report aggregate totals. With a large multi-nation effort, or multiple programs within a nation, this can easily produce highly impressive numbers, often in the hundreds of thousands. This will appease the public and perhaps assure future funding, but it is meaningless as an evaluation tool. A more effective approach for guiding future projects is to express the same data as a percentage of the potential. Take my pet concern, the overemphasis on farmer organizations as a funnel for assistance to smallholder farmers. It is possible to claim, as the USAID MEL program routinely does, assistance to several hundred thousand smallholder farmers; yet a more detailed analysis would show that such projects rarely have more than 10% of the potential farmers in the smallholder communities they claim to assist actively participating, and even the active members divert most of their business to alternative service providers. Thus a project may be assisting a few hundred thousand farmers while a few million potential beneficiaries avoid it. Then, since most of the active members will side-sell all but what is needed for loan repayments, the total market share from the community will be a trivial figure of less than 5%, with virtually no impact on the overall community economy. Not what would be considered a success by most business standards! What is urgently needed here is an upfront statement of what would constitute a minimally successful project, in terms of the percentage of potential beneficiaries actively participating and the market share funneled through the project, all expressed as percentages of the community.
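The contrast between the two tabulations can be shown in a few lines; all the figures below are hypothetical, chosen only to match the orders of magnitude cited above (under 10% participation, under 5% market share).

```python
# The two tabulations contrasted above, with hypothetical numbers: the same
# monitoring data reads as a triumph in aggregate and as a failure when
# expressed as a percentage of the potential.
participants = 300_000          # farmers actively participating (the headline)
potential = 3_500_000           # potential farmers in the target communities
marketed_via_project = 40_000   # tonnes funneled through the project
community_market = 900_000      # tonnes marketed by those communities overall

print(f"Headline: {participants:,} farmers assisted")                        # aggregate view
print(f"Participation: {100 * participants / potential:.0f}% of potential")  # ~9%, under 10%
print(f"Market share: {100 * marketed_via_project / community_market:.1f}%") # ~4.4%, under 5%
```

Expressed this way, the same monitoring data that yields an impressive headline immediately reveals how far the project is from any business notion of success.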
This target for a successful project needs to be consistent with what the underwriting taxpayers expect. In the case of farmer organizations, the expectation is over 60% participation and over 50% market share. Can anyone come close to this level of success, particularly when projects are openly attempting to compete with private service providers? A good, sincere M&E program, individual or joint, should have recognized this some 30 years ago; had it done so, the development effort would have looked carefully at alternatives, including accepting that the much-vilified private service providers were effective, efficient business models that actually offered farmers the financially best and most convenient services. Nor would it have taken much effort to appreciate that the farmer organization model is administratively too cumbersome to compete with private service providers. The overhead costs of operating a farmer organization substantially exceed private service providers' profit margins, so relying on farmer organizations drives smallholder members deeper into poverty despite the massive rhetoric about poverty alleviation. Farmers therefore wisely avoid farmer organizations, which is why these require continuous external support and facilitation, and fully collapse once external support ends.
Webpage references:
https://smallholderagriculture.agsci.colostate.edu/perpetuating-cooperatives-deceptivedishonest-spin-reporting/
https://smallholderagriculture.agsci.colostate.edu/appeasement-reporting-in-development-projects-satisfying-donors-at-the-expense-of-beneficiaries/
https://smallholderagriculture.agsci.colostate.edu/relying-on-cooperatives-taxpayers-expectations/
https://smallholderagriculture.agsci.colostate.edu/loss-of-competitive-advantage-areas-of-concern/
https://smallholderagriculture.agsci.colostate.edu/farmers-organizations-and-cooperatives-is-there-a-competitive-adantage/
Financial Limits: The concern here is that host governments work with a limited tax base for providing support services, and the administrative machinery of a national M&E program simply cannot get the priority needed for quality data collection. The overall economic environment in most host countries is what I refer to as financially suppressed: an economy with such a high level of poverty that most families spend 80% of their wages or production just to obtain a meager diet, even though consumer prices are only a third to a fifth of US or EU prices. With that share of income devoted to survival, there is simply not enough "discretionary" income to form an effective tax base from which the government can raise funds for civil services. Sorry, but no taxes, no services. The result is that, for all practical purposes, the government is financially stalled. I think this is why, as noted in previous comments, most M&E has been done through NGOs with access to external funds. The problem is that when you compel a government to undertake an administrative task that cannot be fully funded, including the operational costs of the field trips needed to collect reliable information, the quality and reliability of the task become questionable. With no funds but pressure to deliver the data, civil officers will often simply complete the information as best they can, according to what they perceive to be taking place. It is the best they can do, but it may be far from reality. It should be noted that virtually 90% of smallholder farmers in most host countries have never interacted with a civil officer, including village agricultural extension officers. It is really better not to ask for an M&E program at all than to have one completed by perception instead of facts.
Webpage References:
https://smallholderagriculture.agsci.colostate.edu/financially-suppressed-economy-2/
https://smallholderagriculture.agsci.colostate.edu/financially-stalled-governments/
https://smallholderagriculture.agsci.colostate.edu/impact-of-financially-stalled-government-limited-variety-improvement-seed-certification/
The bottom line of this comment: be very careful and make certain that any M&E program, alone or combined, is a mechanism for representing the beneficiaries and advancing the development effort, and does not become a means of embellishing programs that may be socially desirable but that the beneficiaries are avoiding like the current virus. It will be very difficult to meet the SDGs when M&E activities promote and entrench failed programs instead of guiding the evolution of programs to better serve the beneficiaries.
Webpage Reference:
https://smallholderagriculture.agsci.colostate.edu/monitoring-evaluation-the-voice-of-the-beneficiaries/
Diagne Bassirou
Monitoring and Evaluation Officer, WACA - West Africa Coastal Areas Management Program
Dear Mustapha and others,
Thank you for these contributions, which tell us a lot about the issues of Monitoring and Evaluation.
In most cases, when Monitoring and Evaluation Plans are defined, the actors do not focus on the field and the object of the system; we concentrate instead on functions and tools driven by the information needs of the donor or other stakeholders. If, however, the main purpose of monitoring and evaluation is to accompany implementation with permanent critical reflection, so as to achieve the results and to give warning about critical implementation conditions, then we face a real challenge in revising monitoring and evaluation plans.

For some, monitoring and evaluation are two complementary functions; for others, there is a single monitoring-evaluation function, with permanent monitoring activities and periodic and thematic evaluation activities following the intervention's guidelines.

Today we are in the era of results-based management and results-based budgeting, hence the essential place of monitoring and evaluation in the success of programs, and indeed in the achievement of the SDGs. Yet programs invest more in deliverables and communication, and less in monitoring and evaluation, which requires substantial resources controlled by the intervention's sponsor. With the digitization of systems we need fewer human resources, but the cost of material resources remains high for a good implementation.

In most countries, especially developing countries, development policies do not give much importance to monitoring and evaluation of the SDGs. They compile data received from programs run with partners to express progress, which does not serve the objective nature of monitoring and evaluation. In 2016 I proposed an ADMA (African Durable Measurement Agency) project to overcome this problem, to address the relevance and inconsistency of interventions in the same locality, and to rate interventions by their contribution to the achievement of SDG ... (see attachment).
Aurelie Larmoyer
Senior Evaluation Officer, WFP
Dear Mustapha,
Thank you for your post, which brings up many important topics indeed!
To take up only a few, I would start by loudly asserting the view that monitoring and evaluation are by no means mutually exclusive and are unquestionably complementary.
It may be that evaluation has developed well as a practice, more so than its sister function, monitoring. Still, a study we have done (we recently shared preliminary results here: https://www.evalforward.org/blog/evaluation-agriculture) did show that in many developing countries evaluations take place mostly when supported by dedicated external funding: an indication that the bigger sister is not yet that sustainably established either…
Your post raises a big question that concerns me too: why has the monitoring function not yet been the subject of the same donor interest? Why are monitoring systems not a number-one requirement of all donors, considering how essential they are as a tool for learning from past actions and improving future ones in time? As our study also revealed, before promoting evaluation, countries need to establish Results-Based Management, which starts, even before monitoring, with planning for results.
It is a fact that in many institutions, from national to international levels, monitoring is still heavily underrated and under-invested. One way forward might be to start by identifying who has a stake in ensuring that the 'M' fulfils its function of identifying what works and what does not, why, and under what circumstances. In this respect, we evaluators could take a role in supporting the emergence of this function within our respective spheres of influence, putting aside our sacred independence cap for a while… Would other evaluators agree?
All the best to all,
Aurelie
Natalia Kosheleva
Evaluation Consultant, Process Consulting Company
Dear Mustapha, thanks for raising this important topic.
In my opinion, monitoring and evaluation are complementary, and both are necessary for assessing and correcting the performance of development interventions. The reason they may seem mutually exclusive is that in most cases monitoring is fully embedded in intervention management, with the specialists doing the monitoring being part of the intervention team, while evaluation is often positioned as external and independent, and the evaluation policies adopted by many major players in the development field include serious safeguards to ensure the independence of the evaluation team.
To my knowledge, a growing number of M&E departments are being set up in national executive agencies in many less developed countries, which may be read as a sign that monitoring and evaluation are seen as complementary. Still, at present these M&E departments reportedly focus more on monitoring than on evaluation, and the evaluation they do is often limited to comparing the extent of achievement of targets for a set of pre-selected indicators.
I would agree that monitoring is not receiving much attention within the evaluation community, but it is positioned as an integral part of Results-Based Management (RBM) and is part of the discussions within the RBM community.
I also think that both monitoring and evaluation could benefit if we talked more about the complementarity of the two practices. For example, in my experience, theories of change, an instrument that emerged from evaluation practice, are most useful when they are developed during the planning phase of the intervention and serve as the basis for the development of its monitoring system. And evaluations could generate more lessons from intervention practice if evaluation ToRs and evaluation questions were informed by the questions intervention teams have when looking at their monitoring data.
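As a minimal sketch of this idea, under purely illustrative structure and indicator names, a theory of change drafted at planning time can double as the skeleton of the monitoring system, with each causal step carrying its own assumptions and indicators:

```python
# A theory of change as the skeleton of a monitoring system: each causal
# step carries the assumption behind the link and the indicators to monitor.
# Structure, steps and indicator names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    assumption: str                      # what must hold for the causal link
    indicators: list[str] = field(default_factory=list)

theory_of_change = [
    Step("Farmers trained in improved practices",
         "trainees have the time and inputs to apply the training",
         ["# farmers trained", "% applying at least one practice"]),
    Step("Yields increase on participating farms",
         "rainfall stays within the normal range",
         ["average yield vs baseline"]),
    Step("Household income rises",
         "surplus can be sold at fair prices",
         ["median farm income vs baseline"]),
]

# The monitoring plan falls out of the ToC; evaluation questions can then
# target the steps whose monitoring data look off track.
for step in theory_of_change:
    print(step.description, "->", ", ".join(step.indicators))
```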
When it comes to SDG implementation, given the complexity of the issues that countries and development partners must tackle to achieve the SDGs, and hence the need for innovative approaches and constant adaptation of interventions, I think we should be talking about further integration of monitoring and evaluation, so that an intervention team can commission an evaluation when its monitoring data indicate the intervention may be getting off track, and use the results to decide whether any adaptation is necessary.
Natalia Kosheleva
Evaluation consultant