Elias Akpé Kuassivi Segla

Monitoring and evaluation specialist, senior research officer
Presidency of the Republic of Benin

With 19 years of professional experience, including 13 years in management, I specialise in the planning, management and evaluation of public policies, a field in which I have worked since 2015. My skills: strategic and operational planning - data analysis - qualitative research - governance - programme evaluation - project management - human resources management - personal development.

As a senior research officer for the monitoring of projects, programmes and reforms at the Presidency of the Republic, I monitor and prepare the evaluation of government interventions. In this capacity, I assist the coordination committee and support the units responsible for monitoring projects, programmes and reforms. I also help to check and consolidate the sectoral monitoring reports prepared for the Council of Ministers.

My contributions

    • Summary of the discussion

      Overall, depending on the context, the relevance of this combination can be questioned with a view to better meeting the needs of decision-making.

      Three key questions [addressed in the discussion]:

      1.       Do decision-makers use monitoring and statistical data or do they rely on evaluation?

      Evaluation takes time, and so do its results. Few decision-makers can afford to wait for them: before long, their term of office ends or there is a government reshuffle, and some may no longer be in office by the time the evaluation results are published.

      Monitoring data is therefore the primary tool for decision support. Decision-makers are willing to use monitoring data because it is readily available and simple to use, regardless of how it was generated. They make less use of evaluative evidence because it is not generated in a timely enough manner.

      Because monitoring is a process that provides regular information, it allows decisions to be made quickly. Good monitoring contributes directly to the success of a project or policy, as it allows an unforeseen situation or constraint to be corrected rapidly. Moreover, the more reliable and relevant the information, the more effective the monitoring. In this respect, various government departments (central and decentralised, including projects) are involved in producing statistics or making estimates, sometimes with great difficulty and, in some countries, with errors.

      However, the statistics produced need to be properly analysed and interpreted in order to draw conclusions that are useful for decision-making. This is where problems arise, as many managers believe that statistics and data are an end in themselves. Yet statistics and monitoring data are relevant and useful only when they are of good quality, collected and analysed at the right time, and used to draw conclusions and lessons related to context and performance. This is important and necessary for the evaluation function.

      2.       What added value do decision-makers really see in evaluation?

      Evaluation requires more time, as it is underpinned by research and analysis. It allows decision-makers to review strategies, correct actions and sometimes revise objectives if, once the policy is implemented, they prove too ambitious or unachievable.

      A robust evaluation must start from existing monitoring information (statistics, interpretations and conclusions). A challenge that evaluation (especially external or independent evaluation) always faces is the limited time available to generate conclusions and lessons, unlike monitoring, which is permanently on the ground. In this situation, the availability of monitoring data is of paramount importance, and it is precisely here that evaluations struggle to find evidence from which to make relevant inferences about the different aspects of the object being evaluated. The evaluation should not be blamed if the monitoring data and information are non-existent or of poor quality; what should be blamed is an evaluation that draws conclusions on aspects for which evidence, including monitoring data and information, is lacking. Evaluation should therefore evolve in parallel with monitoring.

      Furthermore, it is very important to value the data producers and to give them feedback on how the data are used in evaluation and decision-making, in order to give meaning to data collection and to make decision-makers aware of its importance.

      3.       How should evaluation evolve to be more responsive to the needs of decision-makers - for example, on reporting times?

      A call for innovative action: real-time, rapid but rigorous evaluations are needed if we really want evaluative evidence to be used by decision-makers.

      When evaluation is based on evidence and triangulated sources, its findings and lessons are taken into account very well by policy-makers, provided they are properly briefed and informed, because they see it as a more comprehensive approach.

      4.       Some challenges of monitoring and evaluation

      In developing countries, the main constraint on monitoring and evaluation is the flow of information from the local to the central level and its consolidation. The information is often unreliable, which inevitably affects the decisions that are taken.

      Unfortunately, the monitoring and evaluation system does not receive adequate funding for its operation, and this affects the expected results of programmes and projects.

      However, the problem is much more serious in public programmes and projects than in donor-funded projects. Most of the time, donor-funded projects have a successful track record, with at least 90% implementation and achievement of performance objectives thanks to an appropriate monitoring and evaluation system (recruitment of professionals, funding of the system, etc.). But after the donors leave, there is no continuity, owing to the lack of a strategy for the sustainability of interventions and of the monitoring and evaluation system (actors and tools). The reasons are often (i) the lack of a handover between project M&E specialists and state actors; (ii) the lack of human resources or qualified specialists in public structures; and (iii) the lack of policy or commitment from the state to finance this mechanism after the project cycle.

      5.       Questions in perspective

      • How can M&E activities be resourced to function (conducting surveys, collecting and processing data, etc.)?
      • What are the most effective ways to carry out monitoring in an inaccessible area, such as a conflict zone? 
      • How can we get decision-makers to finance the sustainability of interventions (including the monitoring and evaluation budget) for the benefit of communities, and how can we raise the level of monitoring and evaluation expertise of government agents and then retain them in the public sector?

      6.       Approaches to solutions

      The availability of resources for the functioning of monitoring and evaluation mechanisms must be examined at two levels: human resources and financial resources.

      At the level of human resources, projects and programmes should start integrating the transfer of monitoring and evaluation skills to beneficiaries, to ensure the continuity of the exercise at the end of the project.

      With regard to financial resources, budget planning should always include a dedicated line for cross-cutting components, so that funds are available for their operation. Today, such a line is included in several donor frameworks.

      One option for reducing costs is to rely as much as possible on users (farmers, fishermen, etc.) to collect data (instead of using only "professional" surveyors).

      7.       Conclusion

      Monitoring and evaluation are both important (in the sense that they are two distinct approaches, neither of which replaces the other), and monitoring is important for evaluation (better evaluations are made with a good monitoring system). They are therefore two different but complementary approaches.

      Good monitoring data is the basis for good evaluation. The two complement each other: monitoring provides clarity on the progress of implementation and on any adjustments made along the way, while evaluation, in addition to validating the monitoring data, gives them meaning and explanation, providing timely information for decision-makers.

      Monitoring data enables the design of subsequent actions and policies. With evaluation, the design is adjusted and the monitoring can also be improved, because as an evaluator I try to provide recommendations to improve the monitoring.

  • If we wish to attract and maintain the interest of more stakeholders in evaluation, we need to nurture it with open participatory processes and frequent evaluation exercises, including self- and internal evaluation. This will help to strengthen the understanding of evaluation and advocate for the use of evaluative evidence. It is not contrary to the notion of independent and external evaluation with rigorous methodology; the two can go hand in hand to help advance the cause and practice of evaluation in a country.[1]

    I would like to subscribe to this conclusion and welcome its credibility.

    • Dear community,

      Thanks to Malika for raising this issue of rapid evaluation, which shows once again the difficulties we often encounter in putting certain theoretical notions into practice.

      My point of view on the issue is that of an institutional actor, not a consultant. In Benin we have started to conduct rapid evaluations, a new concept to which we were exposed in South Africa through the "Twende Mbele" programme (a cooperation programme in evaluation that we initiated with South Africa and Uganda to strengthen our national monitoring and evaluation systems through the sharing of experience and the development of collaborative tools).

      It is within the framework of this programme that we developed a specific methodological guide on this type of evaluation and simultaneously undertook four rapid evaluations, three of which concern public interventions, while the fourth relates to the effects of COVID-19 on the informal sector.

      First of all, it must be said that the major difference between rapid and traditional evaluation lies in the constraints of time and limited resources that characterise rapid evaluation. In Benin, for example, a normal evaluation (excluding impact assessments, which can take up to 5 years) takes on average 9 months to 1 year, or even longer, owing to many factors related to procedures (notably administration, procurement, and institutional management, especially when there are many stakeholders), and sometimes to the data collection and analysis phase, which is often lengthy. Rapid evaluation therefore calls for new processes to reduce the duration of the pre- and post-collection phases. With the adaptation we made in Benin, the overall duration of a rapid evaluation was estimated at 12 weeks maximum. This implies a data collection period of two weeks to a month, to leave time for initial activities, organising data collection, analysing results, writing a draft report, obtaining observations and finalising the report. Whether this is realistic remains to be tested in practice.
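      As a rough illustration of this time budget (the phase durations below are hypothetical, apart from the 12-week ceiling and the two-weeks-to-one-month collection window mentioned above), a simple sketch can check that a draft work plan stays within the limit:

      ```python
      # Hypothetical phase durations for a rapid-evaluation work plan.
      # Only the 12-week ceiling and the collection window (2 weeks to 1 month)
      # come from our Benin adaptation; the other figures are illustrative.
      phases_weeks = {
          "initial activities (scoping, design)": 2,
          "organising data collection": 1,
          "data collection": 4,  # upper bound of the 2-weeks-to-1-month window
          "analysis of results": 2,
          "drafting the report": 1,
          "collecting observations": 1,
          "finalising the report": 1,
      }

      CEILING_WEEKS = 12
      total_weeks = sum(phases_weeks.values())

      # Flag any plan that exceeds the overall ceiling before it is adopted.
      assert total_weeks <= CEILING_WEEKS, "work plan exceeds the 12-week ceiling"

      for phase, weeks in phases_weeks.items():
          print(f"{phase}: {weeks} week(s)")
      print(f"total: {total_weeks} of {CEILING_WEEKS} weeks")
      ```

      The point of the sketch is simply that once the collection window takes its upper bound of a month, only eight weeks remain for everything else, which shows how tight the pre- and post-collection phases must be.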

      In terms of tools, the gap in time and scope can be filled by using rapid methods:

      - Collection with groups (rather than individuals), workshops,

      - Use of routine data or other evaluations,

      - Team work to carry out different steps at the same time (collection and processing).

      Furthermore, we believe that, to save time effectively, it is preferable for a rapid evaluation to be carried out by an internal team, as this is the only option that does not require a procurement procedure. It does, however, require that appropriate organisational mechanisms be put in place, for example:

      - a project organisation chart for the team,

      - the organisation of the weekly working time to be devoted to the evaluation mission and strict delivery deadlines,

      - support measures for the team, etc.

      In addition, as the evaluation team did not reside in the communities, we identified focal points at the data collection sites to liaise with community members and save time.

      Malika was not specific enough for us to propose solutions adapted to her context, but from the little I have gathered, these are a few ideas I can share for now. I could provide more factual elements once we have drawn lessons from the experience currently underway at the Benin Public Policy Evaluation Office.

      Thank you and good luck to all of you.

      Elias SEGLA 

      [the original contribution is available on the French page]

    • Hello to all the community,

      I fully believe that monitoring and evaluation are two distinct functions that must complement each other harmoniously. The former feeds the latter with reliable, good-quality data, and the latter, through qualitative analysis of the secondary data provided by the former, improves their interpretation. Together, monitoring and evaluation provide evidence for informed decision-making.

      It is true that for a long time the two functions were conflated under the term "Monitoring & Evaluation", through which evaluation was obscured to the sole benefit of the monitoring activity. It would seem, then, that evaluation has been taking its revenge on monitoring in recent years, with its institutionalisation under the impetus of a leadership that has not yet achieved the necessary alchemy between the two inseparable functions.

      Take the case of Benin, from which I would like to share some results of the evaluation of the implementation of the National Evaluation Policy (PNE) 2012-2021, a policy that aimed to create synergy between stakeholders in order to build an effective national evaluation system through the Institutional Framework for Public Policy Evaluation (CIEPP). The National Evaluation Policy distinguishes between the two functions as follows:

      “Evaluation [...] is based on data from monitoring activities as well as information obtained from other sources. As such, evaluation is complementary to the monitoring function and is specifically different from the control functions assigned to other state structures and institutions. [...] The monitoring function is carried out in the Ministries by the enforcement structures under the coordination of the Directorates of Programming and Prospective and the Monitoring-Evaluation units. These structures are responsible for working with the Office for the Evaluation of Public Policy and other evaluation structures to provide all the statistical data, information and insights needed for evaluations.”

      The organisational measures set out on pages 32 and 33 of the attached National Evaluation Policy document were therefore taken, measures that clearly reveal the ambition to bring the two functions into symbiosis by creating the synergy between stakeholders needed for the harmonious conduct of participatory evaluations.

      In practice, the results-based management movement and budgetary reforms in Benin have instilled a culture of monitoring and evaluation in the public administration. But has this culture been reinforced by the implementation of the National Evaluation Policy and, more generally, by the institutionalisation of evaluation?

      The state of evaluation in departments today shows that the implementation of the National Evaluation Policy has not had a significant impact on evaluative practices. The programming and funding of evaluation activities, the definition and use of monitoring and evaluation tools (inherently monitoring-oriented), and the commissioning of evaluations of sectoral programmes or projects are the factors the field data were able to analyse. The result is that departments focus less on evaluation activities than on monitoring.

      Resources allocated to evaluation activities in departments have remained relatively stable and generally do not exceed 1.5% of the total budget allocated to the ministry. This reflects the departments' low capacity to prioritise evaluation activities. Under these conditions, evaluative practices cannot be expected to develop to any great extent. This is corroborated by the execution rate of programmed monitoring and evaluation activities, which is often around 65%. Added to this is the fact that the activities carried out are predominantly monitoring-related. Evaluations of projects or programmes are rare; sometimes even the few evaluations carried out in some departments are done at the behest of the technical and financial partners who make them a requirement.

      However, since the adoption in the Council of Ministers of the National Methodological Assessment Guide, there has been an increase in evaluation activities in departmental annual work plans, particularly around the theory of change and the programming of some evaluations. These results already point to an emerging dynamic in the departments.

      In addition, few departments have a regularly updated, reliable monitoring and evaluation database. The state of development of the technological infrastructure supporting the information system, and of the communication and dissemination of evaluation results at the departmental level, reflects the state of evaluative practice described above.

      Ultimately, the state of development of evaluative practice at the departmental level is explained by the lack of an operational evaluation programme. In the absence of this operationalisation tool, the three-year evaluation programme, the National Evaluation Policy has not been able to have a substantial effect on the evaluative culture in departments.

      When we go down to the level of the municipalities, the situation is even more serious, because the level of development of monitoring and evaluation activities (inherently monitoring) is very unsatisfactory. The evaluation provided very specific data on this. I am happy to share the evaluation report if you are interested.

      All this allows me to answer Mustapha's four questions clearly:

      1. Evaluation and monitoring are complementary practices, both necessary for assessing and correcting the performance of a development action, as I said in my first paragraph.
      2. Evaluation values and capitalises on monitoring data if and only if the monitoring practice is well structured and its data production system is well managed. The case of Benin, which I have just described briefly, shows that in the field of monitoring there is still a great deal of work to be done before these two functions can be properly aligned and together serve as a real decision-making tool.
      3. I believe that leadership needs to be strengthened at all levels of the national monitoring system, namely:
      • at the individual level: ministers and other top decision-makers, senior and middle managers (project managers, monitoring and evaluation actors, etc.);
      • at the organisational level: structures (directorates and monitoring and evaluation units);
      • at the process level: training workshops, seminars, review activities, data collection, etc.

      There is also a need to strengthen:

      • the technical capabilities/professionalism/skills of the actors,
      • partnership networks that allow people to learn from and capitalise on each other's experiences (the EvalForward forum is a good example),
      • development planning and the choice of development indicators, including the internal coherence between the various national planning documents and their consistency with international development agendas (the domestication of the SDGs, or their alignment with national development plans, is a clear example).
      4. At the institutional level, the attention paid to monitoring and to evaluation must be split 50/50. The two functions are inseparable and contribute equally to the same goal: the production of evidence for informed decision-making.

      Thank you all.

      [this is a translation of the original comment in French]

    • My sincere thanks to you, dear Mustafa,

      I hope we will have the opportunity to meet soon. In the meantime, please find attached the PNE evaluation report, which provides further detail and factual data on Benin's current evaluative practice.

      Cordial greetings to the whole community.

      Elias A. K. SEGLA

      Evaluation specialist

      Presidency, Republic of Benin
      Bureau of public policy evaluation and government action analysis
      08 BP 1165 Cotonou - Benin

    • In Benin, we have not yet addressed the evaluation of capacity development.

      Capacity development is something we are working on. We are currently partnering with the Center for Sociology Studies and Political Science of the University of Abomey-Calavi (CESPo-UAC) to develop a Certificate in Public Policy Analysis and Evaluation, a three-week certifying course for evaluation actors who wish to strengthen their capacities in this area. In addition, the Journées Béninoises de l'Évaluation give us the opportunity to train government actors, NGOs and local authorities (over one day) on different themes. Beyond that, we organise three- to five-day training seminars for these same actors. This year, for example, we will train (over five days) the managers of the planning and monitoring-evaluation services of Benin's 77 communes on developing or reconstructing the theory of change of their Communal Development Plans (their strategic planning documents). We did the same the year before for government actors and NGOs.

      But we have never undertaken to evaluate these capacity developments. We will get there gradually.

  • Gender and evaluation of food security

    • Good evening dear members,

      Georgette's concerns in Burkina Faso are extremely relevant. We have not yet found the answer in Benin, but we have initiated an activity that we hope will help us begin to provide relevant answers: an assessment of the gender-sensitivity of the national monitoring and evaluation system. This study was carried out in Benin, South Africa and Uganda, and the diagnosis focused on the national evaluation policies and national monitoring and evaluation systems of the three countries. The results allowed us to adopt a plan of improvement actions including, among other things, the definition of national indicators by sector to evaluate the gender dimension, as well as the revision of our national evaluation policy to incorporate norms and standards for taking gender into account in all our evaluations.

      This certainly does not answer Georgette's questions, but it is to show at least that the concern is shared.

      Best regards

      Elias A. K. SEGLA

      Specialist in Governance and Public Management
      Presidency of the Republic of Benin
      Bureau for the Evaluation of Public Policies and Analysis of Government Action
      Palais de la Marina

      01 BP 2028 Cotonou - Benin