Valeria Pesce

Information management specialist / Partnership facilitator
Food and Agriculture Organization of the United Nations / Global Forum on Agricultural Research and Innovation (GFAR)

I am currently partnership facilitator and digital innovation adviser at the Global Forum on Agricultural Research and Innovation (GFAR), coordinating a collective action on inclusive digital agriculture. Previously, I worked in the FAO Data Lab within the Statistics Department of FAO and briefly collaborated with CTA and the World Bank on agricultural data policy issues. Before that, I had already worked extensively with FAO and GFAR as a project manager and convener: I represented them in EC-funded projects on data infrastructures (agINFRA, Big Data Europe), managed open data platforms in coordination with other global and regional actors, and later convened workshops and webinars on farmers' data rights and farmer-centric digital agriculture.

My only experience as an evaluator is with proposals for the EC Horizon Europe programme. I have worked on standard M&E reporting and, on a couple of occasions, on the design of logframes for individual projects. At the moment, I'm participating in the exercise of revising the Theory of Change for GFAR, setting up a logframe and selecting the most suitable IT tools for monitoring, with a focus on qualitative indicators and narrative analysis.

My contributions

    • Thank you for keeping the forum open longer than planned. I was reading all the comments with much interest, not daring to contribute, both because I'm new to the Eval Forward community and because I'm not an experienced evaluator, especially of science; I'm more of a general project-level M&E / MEL practitioner.

      I'm posting something only now, at the last minute, in reply to question 3 on MEL practices, and specifically regarding the measurement of impact (which has come up a lot in other posts; thanks, Claudio Proietti, for introducing ImpresS). There, qualitative indicators are often based on either interviews or reports, and making sense of the narrative is not easy.

      I'm not sure whether, in terms of practices, IT tools are of interest, but I think that in this type of measurement some IT tools can help a lot. Of course the quality of the evaluation depends on how the narrative questions are designed and on the type of analysis that is foreseen (classifications, keywords, structure of the story, metadata), but once the design is done, it is very handy to use tools that allow you to (sometimes automatically) classify text against selected concepts, identify patterns, compute word and concept frequencies, cluster concepts, etc., using text mining and machine learning techniques, in some cases even starting directly from video and audio files.
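      To give an idea of the simplest of these techniques, a basic word-frequency count over interview transcripts can be done with plain Python before turning to the dedicated tools. This is just a sketch: the transcript excerpts and the stop-word list below are invented for the example, and real analyses would use proper tokenization and a fuller stop-word list.

```python
from collections import Counter
import re

# Toy interview excerpts; in practice these would be loaded from transcript files.
transcripts = [
    "Farmers reported better access to market price data after the training.",
    "Access to weather data helped farmers plan planting, farmers said.",
]

# A small, made-up stop-word list; real analyses use a much fuller one.
stop_words = {"the", "to", "a", "after", "and", "said", "of"}

# Tokenize each transcript into lowercase words and drop the stop words.
words = []
for text in transcripts:
    words += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop_words]

# The most frequent terms hint at recurring concepts across the stories.
for term, count in Counter(words).most_common(3):
    print(term, count)
# → farmers 3 / access 2 / data 2
```

      The same counting logic extends naturally to concept labels once the text has been coded, which is essentially what the tools below automate.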

      A few tools for narrative analysis I'm looking into are ATLAS.ti, MAXQDA and NVivo. Other tools I'm checking, which do less powerful narrative analysis but also have design and collection functionalities, are Cynefin Sensemaker and Sprockler. An interesting tool, with more basic functionalities but a strong conceptual backbone, helping with the design of the narrative inquiry and supporting a participatory analysis process, is NarraFirma.

      (O.T.: I would actually be interested in exchanging views on these tools with other members of the community who've used them.)