RE: How to define and identify lessons learned?
Dear all,
Thank you for all the responses and the interesting and insightful inputs.
What motivated me to write this post was the impression that most lessons learned identified in evaluation reports are either not lessons or are poorly formulated and rarely used; in other words, evaluators are generating (but are we?) knowledge that does not really serve any purpose. I also had the impression that behind this issue lies the lack of a shared understanding of the concept, and of the processes to identify and capture this specific type of evidence.
So, how do we identify real and useful lessons learned?
I will try to summarize the key points raised:
1. The diversity of responses makes it clear that, as evaluators, we still do not have a shared understanding of what lessons learned are. Often, lessons seem to be there just to tick a box among the report requirements.
2. What are the key elements of lessons? Lessons should:
- be based on experience and evidence, and on change that affects people’s lives;
- be observable (or have been observed);
- reflect the perspective of different stakeholders (therefore, observed from different angles);
- reflect challenges faced by stakeholders in different positions (i.e. donors, too, may have something to learn!);
- be something new that represents valuable knowledge and/or a new way of doing things;
- reflect what went well and also what did not go well;
- be able to improve the intervention;
- be specific and actionable (while adaptable to the context) so that they can be put into practice.
I really like ALNAP’s approach of synthesizing lessons that should be learned. From my perspective, a lesson is only really learned if you do things differently the next time; otherwise, it is not yet learned! This is why I tend to refer to the ones we identify during evaluations simply as “lessons”.
3. How do we collect lessons learned? The collection process should:
- be systematic and result from consultations with different stakeholders;
- include what (content), for whom (stakeholders) and how (context!!);
- be clear on who the target audience is (e.g. operational staff, staff involved in strategic operations, policymakers, etc.);
- take into account power dynamics (donors, implementers, beneficiaries, etc.);
- consider the practical consequences (e.g. is it possible to adapt?);
- include operational systems/feedback mechanisms to ensure that findings are discussed and, where agreed, implemented;
- balance rigour and practicality.
4. My final question was: how do we (try to) guarantee that lessons will actually be “learned”, i.e. used and put into practice? (Here I am drawing on ALNAP’s interesting notion that lessons are formulated in order to be learned.) Some tips shared were:
- associate lessons with strategies for putting them into practice, including incentives;
- lessons should inform or be aligned with the recommendations;
- recommendations should be reflected in the management response; and
- management should be accountable for the implementation.
It’s good to see that other organizations and colleagues are interested in this topic and that some resources are available. I hope this can help us improve our practices. Below, I have compiled the approaches and tools that were recommended, as well as the examples and resources that were shared.
Thank you!
Kind regards,
Emilia
***
Approaches or tools recommended as effective to capture and process lessons learned:
Study paper by the UNEP Evaluation Office: “Lessons Learned from Evaluation: A Platform for Sharing Knowledge”, co-authored by Catrina (2007) (link here)
Resources and examples shared:
The evaluation report of the “Projet de prestation de services participatifs de la Tunisie pour la reintegration”, prepared for the Union Tunisienne de Solidarité Social, which provides a good example of capturing lessons learned (in French): booklet utss.pdf (evalforward.org). The lessons learned can be found from page 38 onwards.
ABC de la COVID-19. Prevención, Vigilancia y Atención de la Salud en las Comunidades Indígenas y Afromexicanas (Bertha Dimas Huacuz, INPI, 2020), accessible on the website of the National Institute of Indigenous Peoples (INPI-MX): https://lnkd.in/gpv3wgu (book cover) / https://lnkd.in/gG5wpVE (book text). It consolidates lessons from the pandemic for the development of regions and municipalities, compiled from various community development initiatives in Mexico (resource in Spanish).
A qualitative study on three themes following the 2015 earthquake in Nepal, led by World Vision International in Nepal in collaboration with seven international agencies working on the earthquake response: DEC Collective Learning Initiative Report, Nepal.pdf