Jean Providence Nzabonimpa (PhD) is a social, behavioral, educational, and public health researcher and evaluator, and a development and humanitarian practitioner with 16 years of experience in project design, implementation, performance monitoring, outcome and impact evaluation, social marketing, and applied research. Using behavior change theories and communication as an approach to achieving project outcomes and impact in public health, education, and other social development sectors, and currently keen on the human face of technology, he brings rigorous methodological approaches to development interventions, generating and using evidence for decision-making and impact.
With a specialization in mixed methods research, he innovates methodologically in impact, behavior change, and user experience research and evaluation. Drawing on more than 30 research and evaluation studies, coupled with a strong background in education, language use, public health, and capacity development, he uses advanced social science evaluative, analytical, communicative, and programmatic knowledge and skills to generate evidence and insights that impact the lives of poor and vulnerable people. Since 2009, he has been an apt user and advocate of ICT in program monitoring and evaluation for real-time access to data and evidence. He is an expert user of and trainer in SPSS and, to a lesser extent, STATA for quantitative data analysis, and ATLAS.ti and MAXQDA for qualitative data analysis. He is Scrum Master certified, Core Humanitarian certified, an ATLAS.ti certified professional trainer, and a certified peer reviewer.
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme

Colleagues, massive thanks for going the extra mile to provide additional and new perspectives to this discussion. These include sequential, concurrent, and parallel mixed methods (MM) designs. Some analyses are performed separately, while others bring the data analysis from one method strand to corroborate trends or results emerging from the other method strand.
One of the latest contributions includes these key points:
“The evaluators will […] perform data triangulation by cross-referencing the survey data with the findings from the qualitative research and the document review or any other method used. […] Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey.” Jackie.
“Mixed methods is great, but the extent of using mixed methods and sequencing should be based on program and evaluation circumstances, otherwise instead of answering evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once i.e., open ended surveys, KIIs, community reflection meetings, observations, document reviews etc. in addition to quantitative methods may not be that smart.” Gordon.
Lal: Thanks for sharing the two project examples: "a billion-dollar bridge to link up an island with the mainland in an affluent Northern European country while the second is a multi-million-dollar highway in an African country". This is an excellent example of what can go wrong when projects are poorly designed and inappropriately evaluated. Are there any written reports or references to share? They seem to be a good source of insights to enrich our discussions and, importantly, our professional evaluation practice using mixed methods. I very much like the point you made: "the reductive approach made quality and quantity work against project goals". Linking to the projects used for illustration, you summarized it very well: "the emergency food supplies to a disaster area cannot reasonably meet the same standards of quality or quantity, and they would have to be adjusted to make the supply adequate under those circumstances".
Olivier: You rightly argue that sequential exploratory designs are appropriate: "you cannot measure what you don't conceive well, so a qualitative exploration is always necessary before any measurement attempt". But you also acknowledge that "there is also room for qualitative approaches after a quantification effort". You are right about that: in some cases, a survey may yield results that appear odd, and one way to make sense of them is to "zoom in" on that particular issue through a few additional qualitative interviews.
Gordon: Mea culpa, I should have specified that the discussion is about the evaluation of a programme, project, or any other humanitarian or development intervention. You rightly emphasize the complexity that underlies programmes: “programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know)”. One argument you made seems contradictory: “when something is too complicated or complex, simplicity is the best strategy!” Some more details would add context and help readers make sense of the point you raised. Equally, who, between the evaluator and the programme team, should decide which methods to use?
While I would like to request all colleagues to read all the contributions, Jackie’s submission stands out as being full of practical tips and tricks used in mixed methods.
Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently, and you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. On sampling, would you clarify whether you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why, and how? Would there be any problem if purposive sampling were applied in a quantitative evaluation?
Except for a few examples, most of the contributions so far are more theoretical and hypothetical than practical, lived experiences. I think what can help all of us as evaluators is practical hints and tricks, including evaluation reports or publications that used mixed methods (MM). Please go ahead and share practical examples and references.
Looking forward to more contributions.
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme

This discussion is interesting and intriguing, especially given the multidisciplinary background of the contributors. I will be abbreviating mixed methods as MM in this discussion. Without pre-empting further ideas and fresh perspectives colleagues are willing to share, allow me to request further clarification for our shared learning. This is not limited to colleagues whose names are mentioned; it is an open discussion. Feel free to share the link to other platforms or networks as well.
Consider these viewpoints before delving into further interrogations. Keep reading; the icing on the cake comes after:
“Successful cases [of MM in evaluation] occur when the integration process is well-defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions or selecting cases based on a survey for in-depth interviews).” Cristian Maneiro.
“five purposes for mixed-method evaluations: triangulation, complementarity, development, initiation, and expansion (also summarized in this paper)” shared by Anne Kepple. I encourage all MM practitioners and fans to read this article.
“A good plumber uses several tools, when and as necessary, and doesn't ask himself what type of plumbing requires only one tool... Likewise, a good evaluator needs to know how to use a toolbox, with several tools in it, not just a wrench” Olivier Cossée.
“The evaluation also analyzed and explained the quantitative results with information from qualitative methods, which not only allowed characterizing the intervention, educational policy and funding, but also led to more relevant policy recommendations” Maria Pia Cebrian.
Further queries:
Happy learning together!
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme

Dear evaluators and colleagues,
Thanks so much to those of you who took an active part in this discussion, replying to my follow-up questions and comments, and to all the others who read the contributions for their own learning!
The discussion was rich and insightful, drawing attention to the rationale for applying MM as well as to some persisting challenges and gaps in the practical application of mixed methods.
Bottom line: mixed methods are surely here to stay. On the one hand, innovative and revolutionary tools, including big data, artificial intelligence, and machine learning, have started dictating how data are gathered, processed, and displayed. On the other hand, there are methodological gaps to fill. As evaluators, we have a role to play in ensuring that MM is not merely mentioned in TORs and followed superficially, but is appropriately used in both theory and practice.
I am going to share a summary of the discussion with some personal methodological reflections soon, so please stay tuned!
JP