Alena Lappo Voronetskaya

Evaluation Officer (IAEA); Board Member (European Evaluation Society)

Alena Lappo Voronetskaya is an evaluation specialist with over eight years of experience in evaluation and research, most recently for the OECD, the World Bank IEG, FAO's Independent Office of Evaluation and IFAD. Alena is a Board Member of the European Evaluation Society (EES) and a member of the Board of Trustees of the International Organization for Cooperation in Evaluation (IOCE). In her current role, Alena is responsible for assessing the quality, usage and impact of OECD products by Member States and for the reconceptualization of the OECD's Value for Money Initiative.

My contributions

  • Evaluators are interpreters. What about ChatGPT?

    • Hi Silvia! 

      Thank you for the blog. Your review of ChatGPT, highlighting the strengths and weaknesses of this software, is greatly insightful. We shared the blog through the European Evaluation Society Newsletter today so that more members of the evaluation community can participate in the discussion on technology and evaluation you raised through this blog.

      Referring to the question you posed on whether AI in general is smart enough to make our evaluation work easier, my answer is yes, if we as evaluation practitioners understand its applications and limitations.

      Firstly, good evaluation always starts with the right evaluation questions and a methodology designed to answer those questions, taking into account contextual factors, limitations, sensitivities, etc. I understand that your example about coffee suggests that the value should have been placed on "time saving" and not on fewer "chores". Methodologies such as Social Return on Investment (SROI) could go further in this example and place value, even financial, on the social activity of drinking coffee with friends beyond "time saving", if this is relevant to answering the evaluation questions. This is to say, "extra free time" and an enriched "social life" should be of interest to the evaluation before deciding whether to use AI technology for data collection and analysis, and before searching for the most appropriate software package.

      Secondly, it is important to understand the limitations of applying AI and innovative technologies in our evaluation practice. To provide more insight into the implications of new and emerging technologies for evaluation, we discuss these subjects in the EvalEdge Podcast I co-host with my EES colleagues. The first episodes of the podcast focus on the limitations of big data and ways to overcome them.

      Thirdly, the iterative process of interaction between evaluator and technology is important. As good practice, data collected through innovative data collection tools should be triangulated with data collected from other sources. Machine learning algorithms applied, for example, to text analytics, as discussed in one of the EES webinars on "Emerging Data Landscapes in M&E", need to be "trained" by humans to code documents and to recognise the desired patterns.

      Best regards,


    • Dear Members,

      Following up on this timely thread on how to adapt our evaluations in the time of Covid-19, I am sharing a recent blog from World Bank colleagues and experts on evaluation methods. The blog provides a decision tree, "Making Choices about Evaluation Design in times of COVID-19", and some practical examples.


    • Dear colleagues,

      Thank you very much to Nick for starting this very relevant topic and discussion.

      I would particularly like to stress the ethical responsibility we carry as evaluators, mentioned by Carlos. Some countries might not have major restrictions in place yet; in fact, it might be legal for a local team to conduct focus groups and face-to-face interviews. However, it is up to the evaluator to decide whether it is ethical. This might imply that even local consultants would need to conduct data collection through online engagement tools. This recently happened to a colleague of mine managing an evaluation in Indonesia and Brazil, where the team decided to avoid face-to-face data collection by consultants in both countries as they deemed it unethical.

      As so much remains unknown about Covid-19, any decision we make with regard to our current and future evaluations will be based on imperfect data. Science presents different scenarios, some of which suggest that it might take up to 1.5 years for the health situation to stabilise. This health emergency might be a good opportunity to learn how to design a methodology for a credible evaluation at a distance.

      On 1 April, our colleagues from USAID are offering a free webinar, "Discussion on Challenges and Strategies for M&E in the Time of COVID-19". Interested members can register here:

      Best regards,