Silva Ferretti is a freelance consultant with extensive international experience in both development and humanitarian work. She has been working with diverse organizations, committees, networks and consortia (e.g. Agire, ActionAid, CDAC, DEC, ECB project, Handicap International, HAP, Plan International, Save the Children, SPHERE, Unicef, WorldVision amongst many others).
Her work is mainly focused on looking at the quality of programs and on improving their accountability and responsiveness to the needs, capacities and aspirations of the affected populations.
Her work has included impact evaluations and documentation of programs; the development of toolkits, methodologies, standards, frameworks and guidelines; coaching, training and facilitation; and field research and assessments.
Within all her work, Silva emphasizes participatory approaches and learning. She has a solid academic background and has also collaborated with academic and research institutions on short workshops covering a broad range of topics (including innovations in impact evaluation, Disaster Risk Management, participatory methodologies, protection, and communication with affected populations).
She emphasizes innovation in her work, such as the use of visuals and videos in gathering and presenting information.
Silva Ferretti
Freelance consultant
Standing ovation, Musa. I completely agree with you.
Most projects today are already highly structured, incorporating all the elements you mentioned—to the point of straitjacketing them (but that’s a debate for another day!).
If substantial issues are still emerging despite these significant M&E investments, perhaps we need to re-evaluate our management and M&E approaches rather than introducing yet another external process.
Let me reiterate my point: resources spent on evaluability processes that merely uncover gaps which should already be apparent are not the most effective use of our limited evaluation resources. A better investment would be strengthening organizational capacity and localizing evaluative efforts, rather than continuously paying external consultants.
Or, instead of perpetually investing in processes that cage us further within the same frameworks, we could use resources to ensure evaluations and evaluative processes are tailored, meaningful, and localized, not just 'cut and paste' exercises recycling the same ToRs over and over.
---
BONUS POINT: an Artificial Intelligence (AI)-driven twist:
I asked an AI to take the stance of a proponent of evaluability assessment and critique my response. You know what? I do see the points in the AI-generated response, and I am aware that proponents are motivated by strong and genuine aspirations. However, I also see how aspirational those points are. The reality, as has also surfaced in some posts in this conversation, is that evaluability assessment risks becoming yet another layer of bureaucracy or narrow checks rather than the transformative tool it aspires to be. And it is likely to involve short-term stunts by external consultants. The investment in engagement, capacity, and support that this aspirational approach would entail is likely not to be there: if a project is not evaluable, a quick fix will rarely make it so. Call me pessimistic, but I remain skeptical. And I have plenty of ideas about where the money could be spent instead.
[Below is the AI's response to the request mentioned above]
"While I appreciate the concerns raised about the potential redundancy of Evaluability Assessments (EA) in well-structured projects, I believe it's important to consider the unique value that EA brings to the table—beyond merely filling gaps in existing M&E frameworks.
First, EA serves as more than just a checklist or audit of what should already be in place; it provides an opportunity to rigorously test the logic and feasibility of a project before significant resources are expended. Even with a sound logframe and M&E framework, projects can sometimes miss critical assumptions or overlook potential risks that only become apparent through a dedicated evaluability process. EA offers a safeguard, ensuring that projects are genuinely ready for evaluation and that the data collected will be meaningful and actionable.
Moreover, while it's true that most projects today are highly structured, the complexity and scale of many interventions mean that gaps can still emerge despite the best-laid plans. EA allows for a focused examination of these potential gaps, offering a proactive approach to problem-solving rather than waiting for issues to arise during the evaluation phase. This proactive stance can ultimately save time and resources by identifying and addressing issues early on, rather than scrambling to fix them after the fact.
Regarding the investment in external consultants, it’s worth noting that EA can also be localized and integrated into the existing capacities of organizations. Rather than viewing EA as an additional, external process, it can be an opportunity to build internal capacity and foster a culture of continuous improvement. By embedding evaluability within the organization, we not only reduce reliance on external consultants but also ensure that evaluations are tailored, nuanced, and truly reflective of the project's context.
In conclusion, while strengthening M&E systems is undoubtedly important, EA should not be dismissed as redundant or unnecessary. Instead, it should be seen as a strategic investment in ensuring that evaluations are effective, relevant, and capable of delivering real insights that drive better outcomes."
Silva Ferretti
Freelance consultant
Hello again...
Is evaluability really about checking "can managerial requirements be satisfied", or rather about "understanding what evaluation approach suits the project and its context"?
I see this tendency of using evaluability as a top-down assessment surfacing over and over, further narrowing the scope of the evaluation rather than expanding it.
Silva Ferretti
Freelance consultant
I agree with you [Rick Davies' contribution below] on many things, and yes, it's key to have a reality check on what an evaluation should and can do.
Silva Ferretti
Freelance consultant
The issue I have with "evaluability" is that all programs can (and should) be evaluated.
Some programs lend themselves to more "rigorous" and mainstream approaches, while others, due to their inherent complexity, require different methodologies.
These alternative approaches can help build an understanding of dynamics that did not previously exist... which is a GOOD thing. Not all programs need to be preformatted. This can be especially beneficial for participatory and innovative/adaptive programs that need to determine the best way forward as they progress.
It is crucial that we do not use evaluability as another brick in the wall, reducing evaluations to mere compliance with results or to a cheaper form of standard research applied to arbitrary theories of change.
Instead, we should reclaim evaluation as an opportunity to assess changes in ways that are coherent with the program's intent and context. So the question is not "Can the program be evaluated with a standard toolbox?" (which is what evaluability risks becoming) but rather "What evaluative approach is best suited to this context?"
When I read, "*Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance. In these circumstances, an Evaluability Assessment would not be needed to decide whether to go ahead with an evaluation. Efforts need to focus on the more immediate tasks of improving project design and/or the M&E framework*" (https://www.betterevaluation.org/methods-approaches/themes/evaluability-assessment),
I really want to cry....
We are confusing the project documentation with the reality of the work. In my experience, many projects had sketchy logframes or theories of change, yet were terrific. The evaluation helped systematize them and unpack learning and value. If we keep looking only at what is preformatted, we miss the chance for real transformation.