How are mixed methods used in programme evaluation?
Evaluation in different development and humanitarian settings requires varying methods to capture multiple voices and multifaceted trends. Indeed, even purists in quantitative methods have started incorporating qualitative methods into randomized controlled trials (RCTs), having previously excluded them or suspected them of being less rigorous.
Complexities in development programmes are a great opportunity to rethink evaluation methods. This, among other things, has led to mixed methods in evaluation [1]. Mixed-method evaluators combine at least one quantitative method with at least one qualitative method [2], helping to broaden understanding of how and in what context outcomes and impacts are achieved [3]. (Note that using, say, in-depth interviews and focus group discussions is not mixed-method evaluation; this is merely using methods of the same family and worldview.)
I have seen numerous evaluation terms of reference and protocols that mention mixed methods. Sounds great, right? Sadly, it is too often a cliché in many terms of reference and evaluation reports. Mixed methods are mentioned here and there, and overused as a yardstick of all that matters in evaluation.
Bamberger [4] recommends that evaluators not limit mixed methods to data collection. Rather, he argues for the use of mixed methods even in forming evaluation teams, and he also cites mixed methods at the stage of formulating evaluation questions. Have you ever thought about using mixed methods in testing or generating hypotheses, and in sampling for both qualitative and quantitative methods? What about collecting and analysing both types of data, or presenting and discussing results? When qualitative and quantitative methods, data, and results are not methodically integrated, the result is basically two studies or two evaluations, not a single evaluation.
I would be grateful if you could provide some links to evaluation reports and publications where mixed methods are used. Importantly, I would appreciate it if you could share specific, practical experiences and lessons on how qualitative methods (have) interact(ed) with quantitative methods:
1. In the evaluation design stage – What types of evaluation questions necessitate(d) mixed methods? What are the (dis)advantages of not having separate qualitative or quantitative evaluation questions?
2. When developing data collection instruments for mixed methods evaluation – Are these instruments developed at the same time or one after another? How do they interact?
3. During sampling – Is sampling done differently or does it use the same sampling frame for each methodical strand? How and why?
4. During data collection – How and why are data collected (concurrently or sequentially)?
5. During data analysis – Are data analysed together/separately? Either way, how and what dictates which analytical approach?
6. During the interpretation and reporting of results – How are results presented, discussed and/or reported?
I’m looking forward to learning from and with you all!
Jean Providence
---
[1] Using Mixed Methods in Monitoring and Evaluation: Experiences from International Development, The World Bank, 2010
[2] Designing and Conducting Mixed Methods Research, J.W. Creswell and V.L. Plano Clark, SAGE, 2017
[3] Introduction to Mixed Methods in Impact Evaluation, M. Bamberger, 2012
[4] Using Mixed Methods to Strengthen Process and Impact Evaluation, M. Bamberger, 2022
Joseph Toindepi
International Development Consultant, JT Development Consulting
Dear Evaluators and Colleagues,
I followed the conversations and contributions on this topic with much interest. I have been a mixed methods practitioner for several years both as a consultant and as part of my day job in the international development sector.
My experience is that applying mixed methods in evaluations is easier said than done. The main challenge is misaligned expectations and/or understanding of mixed methods between commissioners of evaluations and those assigned to conduct them. As a consultant, I regularly develop technical proposals or expressions of interest for evaluation tenders. This process requires me to review several evaluation ToRs per day, and I find that at least four in five ToRs specifically suggest or require a mixed methods approach. However, in the majority of cases, there is often not sufficient time and/or budget to meet the minimum requirements for carrying out a decent, logical mixed methods approach.
Good evaluations require a good balance of complementary and/or supplementary quantitative and qualitative evidence. This means that, regardless of whether the evaluation commissioner or the project has specifically asked for mixed methods, it is difficult in most cases to sufficiently address the standard evaluation questions without considering both quantitative and qualitative data. As an evaluator, you therefore have only two choices: (1) operate within the time and budget limitations, which might compromise the quality of the evaluation results and affect your professional integrity, or (2) pay the price difference and go over and above the allocated time and budget in order to deliver quality.
Finally, I would say that the demand for, or expectation of, mixed methods in programme evaluations is here to stay, but the development sector still has a long way to go in aligning those expectations with the necessary enablers, particularly pricing and timeframes.
JT
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme
Dear evaluators and colleagues,
Thanks so much to those of you who took active part in this discussion, replying to my follow up questions and comments, and to all the others who read the contributions for learning!
The discussion was rich and insightful, and it drew attention to the rationale for applying MM as well as to some persisting challenges and gaps in the practical application of Mixed Methods.
Bottom line, Mixed Methods are surely here to stay. However, on the one hand there are innovative and revolutionary tools, including Big Data, artificial intelligence, and machine learning, which have started dictating how to gather, process, and display data. On the other hand, there are methodological gaps to fill. As evaluators we have a role to play to ensure MM are not merely mentioned in ToRs and followed superficially, but appropriately used in both theory and practice.
I am going to share a summary of the discussion with some personal methodological reflections soon, so please stay tuned!
JP
Jackie (Jacqueline) Yiptong Avila
Program Evaluator/Survey Methodologist, Independent Consultant
Dear Jean,
In response to your follow-up to my contribution to this discussion [1], I would first like to thank Malika Bounfour and Marlene Roefs for sharing two very valuable documents.
Please refer to page 6 of the mixed methods paper (WUR_WECR_Oxfam_0.pdf) for the definition of the convergent parallel design. This is the approach I use, and what I meant when I mentioned that the data collection tools are developed in parallel, or concurrently. As for the ONE Evaluation Design Matrix, this follows what is described as the embedded design, also on page 6.
With regard to sampling methods, I invite my colleagues to check this Statistics Canada link; I worked for this national statistical agency for over 30 years as a survey methodologist.
https://www150.statcan.gc.ca/n1/edu/power-pouvoir/toc-tdm/5214718-eng.htm
In particular, for the distinction between probability or probabilistic sampling (used in quantitative surveys) and non-probabilistic sampling, such as purposive sampling (used in qualitative data collection), please refer to section 3.2 on sampling: https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch13/prob/5214899-eng.htm
You will note in Section 3.2.3 why purposive sampling is being considered by some for use in quantitative surveys; this is not the practice in official statistical agencies. It is also explained there why non-probability sampling should be used with extra caution.
I hope that this is helpful. I note the following on page 13 of the mixed methods paper (WUR_WECR_Oxfam_0.pdf) shared by Marlene: “Combining information from quantitative and qualitative research gives a more comprehensive picture of a programme’s contribution to various types of (social) change”. I fully agree with this statement.
Thank you for bringing up this topic on EvalForward, which, as you mentioned, has raised a lot of interest in the group. I personally favour the mixed methods approach. In fact, in my experience, findings from a mixed methods evaluation receive less “push-back” from my clients: it is hard to dispute the findings when triangulation shows the same results from different sources.
Kindest regards,
Jackie
P.S. I have provided this link, EVALUATIONS IN THE DEC: https://dec.usaid.gov/dec/content/evaluations.aspx, where you will find hundreds of examples of evaluations that have used mixed methods.
[1] Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently. And you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. For sampling, would you clarify how you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why and how? Would there be any problem if purposive sampling were applied for a quantitative evaluation?
Margrieth Nazarit Cortés
Lecturer, Universidad Mariano Gálvez
Hi Jean Providence,
Reflecting on your interesting comments, I think we still face a great challenge in pursuing "disciplinary" triangulation specifically, rather than limiting it only to the triangulation of data according to their origin. In other words, triangulation processes, as stated by Okuda and Gómez, and Samaja, may well refer to the use of various methods (both quantitative and qualitative), data sources, theories, researchers or settings in the study of a phenomenon. I consider that most cases refer more to the triangulation of data, as in this 2018 evaluation of the "Cooperating Basque Youth" programme (in Spanish), but I shall continue to investigate the implementation of a more interdisciplinary triangulation.
[Translated from Spanish]
Gordon Wanzare
Monitoring, Evaluation, & Learning ExpertDear Jean and colleagues.
Thanks for clarifying that the discussion is not only limited to programs but also includes projects or any humanitarian or development intervention. Very informative and rich discussion. I am learning a lot in the process!
When I say "when something is too complicated or complex, simplicity is the best strategy" in the context of Evaluations, I mean we do not need to use an array of, or several methodologies and data sources for an evaluation to be complexity-aware. We can keep the data, both quantitative and qualitative lean and focused on the evaluation objectives and questions. For example, use of complexity-aware evaluation approaches such as Outcome Harvesting, Process Tracing, Contribution Analysis, Social Network Analysis e.t.c does not necessarily mean several quant and qual data collection methods have to be applied. For example, in OH, you can use document review and KII to develop outcome descriptors then do a survey and KII during substantiation. I have used SNA and KII to evaluate change in relationships among actors in a market system. I have used SNA followed by indepth interviews in a social impact study of a rural youth entrepreneurship development program. In essence, you can keep the data collection methods to three ( The three legged stool or the triangle concept) and still achieve your evaluation objectives with lean and sharp data. A lot has been written on overcoming complexity with simplicity in different spheres of life, management, leadership e.t.c.
On the issue of who decides on the methodology, the evaluator or the program team: from my experience, a MEL plan is very clear on the measurements and evaluation methods, and MEL plans are developed by the program team. Evaluators are asked to propose an evaluation methodology in their technical proposals, which serves two purposes: to assess their technical competence and to identify the best fit with the evaluation plan. Typically, the evaluator and program team will consultatively agree on the best-fit methodology during the inception phase of the evaluation, and this forms part of the inception report, which is normally signed off by the program team.
My thoughts.
Gordon
Malika Bounfour
President, Association Ayur pour le Développement de la femme Rurale
Greetings to all,
Thank you for this topic as methodology usually dictates the quality of the report.
Some of my takes on mixed methods through examples:
Q2. for mixed methods evaluation – Are these instruments developed at the same time or one after another? How do they interact?
The ideal situation is to have all the instruments ready beforehand. However, some qualitative instruments may leave room for “improvement”. For example, in key informant interviews or focus groups, open-ended questions help improve the qualitative data gathering in case unexpected results were found/observed.
Here is a reference from the World Bank that describes situations where quantitative and qualitative methods are used. It is for impact evaluation, but I find it valuable for most study/research situations.
Impact evaluation in practice
Best regards
Malika
Candice Morkel
CLEAR Anglophone Africa
Loving this discussion, JP - what I’ve also observed is the use of multiple methods (incl. quant & qual) without the “mixed” part in the analysis and findings. They’re presented as two separate sets of findings, without a meaningful synthesis of the data. I think mixed methods is one of the most misunderstood topics in evaluation (and research).
[comment re-posted from Linkedin]
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme
Colleagues, massive thanks for going the extra mile to provide additional and new perspectives to this discussion. These include sequential, concurrent, and parallel mixed methods (MM) designs. Some analyses are performed separately, while others bring data analysis from one method strand to corroborate trends or results emanating from the other method strand.
The latest contributions include these key points:
“The evaluators will […] perform data triangulation by cross-referencing the survey data with the findings from the qualitative research and the document review or any other method used. […] Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey” Jackie.
“Mixed methods is great, but the extent of using mixed methods and sequencing should be based on program and evaluation circumstances, otherwise instead of answering evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once i.e., open ended surveys, KIIs, community reflection meetings, observations, document reviews etc. in addition to quantitative methods may not be that smart.” Gordon.
Lal: Thanks for sharing the example of two projects, one being "a billion-dollar bridge to link up an island with the mainland in an affluent Northern European country" while the second is "a multi-million-dollar highway in an African country". This is an excellent example of what can go wrong in the poor design of projects and the inappropriate evaluation of such projects. Are there any written reports/references to share? This seems to be a good source of insights to enrich our discussions and, importantly, our professional evaluation practice using mixed methods. I very much like the point you made: "the reductive approach made quality and quantity work against project goals". Linking to the projects used for illustration, you summarized it very well: "the emergency food supplies to a disaster area cannot reasonably meet the same standards of quality or quantity, and they would have to be adjusted to make the supply adequate under those circumstances".
Olivier: you rightly argue and agree that sequential exploratory designs are appropriate: "you cannot measure what you don't conceive well, so a qualitative exploration is always necessary before any measurement attempt". But also, you acknowledge that: "there is also room for qualitative approaches after a quantification effort”. You are right about that: in some cases, a survey may yield results that appear odd, and one way to make sense of them is to "zoom" on that particular issue through a few additional qualitative interviews.
Gordon: Mea culpa, I should have specified that the discussion is about the evaluation of programme, project or any humanitarian or development intervention. You rightly emphasize the complexity that underlies programmes: “programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know)”. One argument you made seems to be contradictory: “when something is too complicated or complex, simplicity is the best strategy!” Some more details would add context and help readers make sense of the point you raised. Equally, who between the evaluator and programme team should decide the methods to be used?
While I would like to request all colleagues to read all contributions, Jackie’s submission is different, full of practical tips and tricks used in mixed methods.
Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently. And you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. For sampling, would you clarify how you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why and how? Would there be any problem if purposive sampling were applied for a quantitative evaluation?
Except for a few examples, most of the contributions so far are more theoretical and hypothetical than practical, lived experiences. I think what can help all of us as evaluators is practical hints and tips, including evaluation reports or publications that utilized mixed methods (MM). Please go ahead and share practical examples and references.
Looking forward to more contributions.
Marlene Roefs
Senior Advisor Monitoring and Evaluation, Wageningen Centre for Development Innovation
Thank you colleagues, interesting discussion! Together with Oxfam, we reviewed our use of mixed methods and extracted some very practical insights some time ago. I have attached the paper for those interested.
Jackie (Jacqueline) Yiptong Avila
Program Evaluator/Survey Methodologist, Independent Consultant
Dear Jean,
Thank you for bringing up this topic. I also wish to thank Renata Mirulla for her good work in administering this forum. I am throwing my two cents' worth into this discussion, as I have used the mixed methods approach in most if not all of my evaluation work. In fact, it was mandatory that I use mixed methods in the evaluation of USAID- and USDA-funded projects. You can find the reports by theme on this page (see EVALUATIONS IN THE DEC: https://dec.usaid.gov/dec/content/evaluations.aspx).
Please find below my reply to your questions. At the bottom of this document, I show how I have used the mixed methods approach in the evaluation of two Food for Progress Projects in The Gambia and Senegal. Please do not hesitate to contact me if you have any questions.
Kind regards,
Jackie
1. In the evaluation design stage – What types of evaluation questions necessitate(d) mixed methods? What are the (dis)advantages of not having separate qualitative or quantitative evaluation questions?
All evaluation questions can be answered using a mixed method approach. When designing the Evaluation Design Matrix or Framework, for each of the evaluation questions the evaluator will identify the informant(s) and the data collection method that will be used, with the corresponding method of analysis. For example, for a quantitative survey, the method will be a statistical data analysis method such as descriptive or inferential analysis; for the qualitative research methods, content analysis and thematic analysis can be performed.
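A purely illustrative sketch of such a single design matrix, with hypothetical questions and entries, might look as follows:

```python
# Hypothetical fragment of one evaluation design matrix: each question is mapped
# to informants, data collection methods and the corresponding analysis methods.
design_matrix = [
    {
        "question": "To what extent did beneficiary households improve food security?",
        "informants": ["beneficiary households"],
        "collection": ["household survey", "focus group discussions"],
        "analysis": ["descriptive and inferential statistics", "thematic analysis"],
    },
    {
        "question": "How relevant was the intervention to national priorities?",
        "informants": ["government officials", "programme officers"],
        "collection": ["key informant interviews", "document review"],
        "analysis": ["content analysis"],
    },
]

for row in design_matrix:
    print(row["question"], "->", "; ".join(row["analysis"]))
```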
2. When developing data collection instruments for mixed methods evaluation – Are these instruments developed at the same time or one after another? How do they interact?
Yes, in parallel. There is ONE Evaluation Design Matrix using a mixed method approach; there is still ONE evaluation, not two. Both methods will attempt to answer the same question. The evaluation may decide to obtain information for a particular question using only one method, for example the qualitative method for evaluation questions pertaining to RELEVANCE.
3. During sampling – Is sampling done differently or does it use the same sampling frame for each methodical strand? How and why?
We identify the informants, and we make the list for each type.
The quantitative survey will not survey all the informants who are targeted by the evaluation. Surveys are usually conducted for large populations, for example the program's beneficiaries, and a probabilistic sample of the population units is selected. A sampling frame is needed, i.e. the list of the survey population (list frame) or an area frame, as in the case of a household survey. Note that the population units are not always people; they can be schools or farms, for example. More than one survey can be conducted in an evaluation, for example a household survey, a survey of farmers, a survey of input suppliers and a client satisfaction survey of services received from, say, a lending or microfinancing institution, depending on the program activities. It all depends on what we are trying to find out. Note that the quantitative surveys will provide the data that can be used to calculate the performance indicators, as well as the characteristics of the target population and the prevalence of a situation or behavior, e.g. the number or percentage of farmers who do not have a certain type of equipment, or the number of households that eat fewer than three meals a day. The data are weighted to the population of interest and the estimates are produced for the entire population or subsets of the population, for example gender and age groups (demographic variables).
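A minimal sketch of that weighting step, assuming pandas and purely invented numbers: each respondent carries a design weight, and weighted estimates can be produced for the whole population or for subgroups.

```python
# Illustrative only: design-weighted prevalence estimates from survey data.
import pandas as pd

survey = pd.DataFrame({
    "sex": ["F", "M", "F", "M"],
    "owns_equipment": [1, 0, 0, 1],        # 1 = owns the equipment of interest
    "weight": [120.0, 95.0, 110.0, 80.0],  # design weights (population units represented)
})

# Weighted prevalence for the whole population
overall = (survey["owns_equipment"] * survey["weight"]).sum() / survey["weight"].sum()

# Weighted prevalence by subgroup (here, sex)
weighted = survey.assign(weighted_owns=survey["owns_equipment"] * survey["weight"])
by_sex = weighted.groupby("sex")["weighted_owns"].sum() / weighted.groupby("sex")["weight"].sum()

print(f"Overall: {overall:.1%}")
print(by_sex.round(3))
```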
The qualitative data collection will target stakeholders, for example government officials, program officers and suppliers, for key informant or semi-structured interviews. Focus group discussions are conducted with a sample of the larger population of interest or stakeholders, for example farmers or community health officers. In the case of qualitative research, purposive sampling is the technique used to select a specific group of individuals or units for analysis. Participants are chosen “on purpose,” not randomly. It is also known as judgmental sampling or selective sampling. The information gathered cannot be generalised to the entire population. The main goal of purposive sampling is to identify the cases, individuals, or communities best suited to help answer the research or evaluation questions.
Note that there is no correct or universally recognized method for calculating a sample size for purposive sampling, whereas in quantitative surveys there are formulas to determine the sample size required for the desired reliability of the estimates.
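One common example of such a formula is Cochran's sample size formula for estimating a proportion, sketched below with a finite population correction; the parameter values are assumptions for illustration only.

```python
# Cochran's sample size formula for a proportion, with an optional
# finite population correction. Parameter values are illustrative.
import math

def sample_size(p=0.5, margin=0.05, z=1.96, population=None):
    """Sample size to estimate a proportion p within +/- margin at ~95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    if population is not None:  # finite population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size())                 # ~385 respondents for a large population
print(sample_size(population=2000))  # ~323 once corrected for a population of 2,000
```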
4. During data collection – How and why are data collected (concurrently or sequentially)?
Data is collected concurrently since there is one deadline and a single evaluation report to submit.
Qualitative data collection is usually performed by one person (I like to add a note-taker or to record the interviews, with the permission of the informant, for quality assurance). Surveys are carried out by a team of trained enumerators, which makes the process quite expensive. These days, data collection is usually performed using tablets instead of paper questionnaires. The survey data must be edited (cleaned) before the data are analysed. Surveys can also be conducted by phone or online, depending on the type of informants, but the response rate is lower than for face-to-face interviews.
5. During data analysis – Are data analysed together/separately? Either way, how and what dictates which analytical approach?
The data is analysed separately. The evaluators will then perform data triangulation by cross-referencing the survey data with the findings from the qualitative research and the document review or any other method used.
6. During the interpretation and reporting of results – How are results presented, discussed and/or reported?
There is only one Evaluation Report, with the quantitative data accompanied by a narrative and explanation/confirmation/justification of findings from the qualitative research and secondary sources. Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey. For example, in a focus group discussion, farmers may have reported that they cannot buy fertilisers because they are too expensive. The survey can ask the same question and provide the percentage of farmers not able to purchase fertilisers; in addition, the survey can tell whether this issue exists in all geographical areas. In-depth qualitative interviews can provide other reasons why they cannot buy fertilisers.
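To make that fertiliser example concrete, here is a toy sketch, assuming pandas and invented numbers, of how the survey strand quantifies the issue raised in the focus groups and shows whether it holds across geographical areas; the qualitative interviews would then supply the reasons behind the percentages.

```python
# Toy example only: quantifying a focus-group finding ("farmers cannot afford
# fertiliser") and checking whether it appears in all geographical areas.
import pandas as pd

survey = pd.DataFrame({
    "region": ["North", "North", "North", "South", "South", "South"],
    "cannot_afford_fertiliser": [1, 1, 0, 1, 0, 0],
})

overall = survey["cannot_afford_fertiliser"].mean()
by_region = survey.groupby("region")["cannot_afford_fertiliser"].mean()

print(f"Overall: {overall:.0%} of sampled farmers report they cannot afford fertiliser")
print((100 * by_region).round(0))
```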
The mixed methods approach allows the evaluator to take advantage of the respective strengths of the qualitative and quantitative methods in collecting and analyzing information to answer the research/evaluation questions.
Examples of Evaluation using the Mixed Method approach:
Mid-Term Evaluation Millet Business Services Project
Four surveys were conducted:
Gordon Wanzare
Monitoring, Evaluation, & Learning ExpertGreetings to all!
Great discussion question from Jean and very insightful contributions!
First, I think Jean's question is very specific - that is, how mixed methods are used not just in evaluations generally but in PROGRAM evaluations, right? Then, we know that a program consists of two or more projects, i.e. a collection of projects. Therefore, programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know). The Oxford English Dictionary tells us that a method is a particular procedure for accomplishing or approaching something. Tools are used in procedures. I am from the school of thought that believes that when something is too complicated or complex, simplicity is the best strategy!
Depending on the context, program design, program evaluation plan, evaluation objectives and questions, the evaluator and program team can agree on the best method(s) to achieve the evaluation objectives and comprehensively answer the evaluation questions. I like what happens in the medical field, in hospitals where, except in some emergency situations, a patient will go through triage, clinical assessment and history review by the doctor, laboratory examination, radiology, etc., and then the doctor triangulates these information sources to arrive at a diagnosis, prognosis and treatment/management plan. Based on circumstances and resources, judgements are made as to whether all these information sources are essential or not.
Mixed methods are great, but the extent of using mixed methods and their sequencing should be based on program and evaluation circumstances; otherwise, instead of answering the evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once, i.e. open-ended surveys, KIIs (Key Informant Interviews), community reflection meetings, observations, document reviews, etc., in addition to quantitative methods may not be that smart.
In any case, perhaps the individual projects within the program have already been comprehensively evaluated and their contribution to program goals documented, and something simple, like a review, is what is necessary at the program level.
When complicated or complex, keep it simple. Lean data.
My thoughts.
Thanks.
Gordon
Olivier Cossée
Senior Evaluation Manager, FAO
This is in response to the question of Jean Providence: Are there cases you might have seen in the professional conduct of evaluation where quantitative methods PRECEDED qualitative methods?
I would say that 1) you cannot measure what you don't conceive well, so a qualitative exploration is always necessary before any measurement attempt. If my wife visits some furniture shop and texts me: "Honey, I found this marvelous thing for the kitchen, it costs 500 and it's 2 m long and 1.5 m wide. Do you agree?", I wouldn't know what to answer, because in spite of all the numbers she gave me, I have no idea what she is talking about, qualitatively. Does she mean a table, a cupboard or a carpet? It makes no sense to quantify anything without first qualifying it.
2) This being said, there is also room for qualitative approaches after a quantification effort. You are right about that: in some cases, a survey may yield results that appear odd, and one way to make sense of them is to "zoom" on that particular issue through a few additional qualitative interviews.
Hope this makes sense.
Lal - Manavado
Consultant, Independent analyst/synthesist
Greetings!
I ought to have said ‘acting in silos’, since thinking is an action. Well, it’s a phrase someone invented during the discussions that led to the determination of the current set of SDGs. After all, it is just another phrase to describe reductivist thought and action, just like calling a spade a field entrenching tool (US Army).
Before I go further, let me recap my point of departure:
A sound evaluation of a proposed or achieved outcome of a project/plan is concerned with ascertaining its adequacy to serve its intended purpose under the circumstances in which it is carried out.
Obviously, the key words here are ‘adequacy’ and ‘the circumstances under which it is carried out’. Thus, we have three items to take into account, viz. a fixed one, i.e. an intended purpose or goal, which however may or may not be achieved depending on the very circumstances involved. Let me illustrate this with the help of two examples that appeared on this forum a while ago. One involves a billion-dollar bridge to link up an island with the mainland in an affluent Northern European country, while the second is a multi-million-dollar highway in an African country.
Both were very adequate qualitatively and quantitatively; their technical quality was excellent while their capacity was large. In both instances, some critical circumstances were totally ignored, leading to their failure with respect to their intended aims. Here, the reductive approach made quality and quantity work against project goals.
What happened was this: the bridge was intended to enable the residents of the island to drive to work in a town on the mainland in all weathers, which would be easier than using the ferry as they had been doing. The toll from this commute was hoped to cover the building and running expenses of the bridge.
But as soon as it was completed, the islanders used the bridge to move out of the island and settle down near their workplaces, keeping their old homes as summer houses! So, nothing more needs to be said about the relevance of quality and quantity here, for the planners did not consider the circumstance that the islanders might simply move out. Until then, they had been compelled to remain, because the ferry was not a convenient means of moving house.
In the case of the highway, the purpose was to initiate economic growth in the villages through which it passed. It was believed that it would help the villagers move their produce to better markets and bring investors in.
But the planners failed to notice the circumstance that the villagers did not have even a bare minimum of motor transport; the poverty of the area remains unchanged, while an occasional goat enjoys an undisturbed stroll on a modern highway.
So, the adequacy of an outcome has a qualitative and a quantitative component, which are governed by the relevant circumstances under which a project or a plan is carried out. In my previous note, I pointed out that the emergency food supplies to a disaster area cannot reasonably meet the same standards of quality or quantity, and they would have to be adjusted to make the supply adequate under those circumstances.
Hope this makes my points a bit clearer.
Cheers!
Lal.
Hadera Gebru Hagos
Senior Consultant, Natural Resource Management and Livestock Specialist, Freelance Consultant
I would like to thank and appreciate all the contributors to the ongoing discussion, which I find very interesting, awareness-raising and conducive to rethinking evaluation methodologies. The discussion has surfaced the experiences of different intellectuals with basic and applied research backgrounds and of development professionals. The various experiences, I believe, have deepened and widened the understanding of “how mixed methods are used in programme evaluations”.
I am a development professional in the area of agriculture (livestock, fisheries and natural resource management). In my experience, how mixed methods are used in development/programme evaluations often depends on the type of data/information to be evaluated. Thus, depending on the nature of the development programme/project to be evaluated, the required data could be, for example, quantitative and qualitative. As we are all aware, quantitative data are information that can be quantified, counted or measured and given a numerical value, while qualitative data are descriptive in nature, expressed in language rather than numerical values.
I would also like to relate this to the “Logical Framework Approach” to project planning (the project being later evaluated during implementation). To my understanding, most development programmes have a “logframe” which clearly shows programme/project goals, outcomes, outputs and activities, along with narrative summaries, objectively verifiable indicators, means of verification and assumptions. Thus, during evaluation, the programme/project will be evaluated against what is in the logframe, which would require mixed evaluation methods depending on the nature of the programme/project. Using both quantitative and qualitative methods will strengthen the evaluation. Apart from quantitative methods, qualitative methods such as focus group discussions, in-depth interviews and case studies, to mention a few, can be used.
Mambwe Emmanuel Mwewa
Project Assistant - Monitoring and Evaluation, Medicines for Humanity
Dear Colleagues:
So many contributions have been made which encompass multiple viewpoints in the discussion of the mixed methods approach in evaluation.
My basic question is: how can we address this concern raised by JP while taking into consideration probable budget constraints in evaluation processes and the limitations that skew evaluators towards quantitative approaches, which are more cost-effective? [Mostly without acquiring and applying high-end software for qualitative collation, transcription and analysis.]
Thanks in advance!
John Hoven
Independent
Qualitative methods are often descriptive. Has anyone used qualitative causal inference?
The goal of qualitative causal inference is to prove cause-and-effect, either looking back into the past, or forward into the future. My sense is that this approach relies heavily on unscripted interviews, where undiscovered issues are revealed by follow-up questions. (What do you mean? Can you give an example?)
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme
This discussion is interesting and intriguing, especially given the multidisciplinary backgrounds of the contributors. I will be abbreviating Mixed Methods as MM in this discussion. Without pre-empting further ideas and fresh perspectives colleagues are willing to share, allow me to request further clarification for our shared learning. This is not limited to colleagues whose names are mentioned; it’s an open discussion. Feel free to share the link on other platforms or networks as well.
Consider these viewpoints before delving into further questions. Keep reading: the icing on the cake comes after.
“Successful cases [of MM in evaluation] occur when the integration process is well-defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions or selecting cases based on a survey for in-depth interviews).” Cristian Maneiro.
“five purposes for mixed-method evaluations: triangulation, complementarity, development, initiation, and expansion (also summarized in this paper)” shared by Anne Kepple. I encourage all MM practitioners and fans to read this article.
“A good plumber uses several tools, when and as necessary, and doesn't ask himself what type of plumbing requires only one tool... Likewise, a good evaluator needs to know how to use a toolbox, with several tools in it, not just a wrench” Olivier Cossée.
“The evaluation also analyzed and explained the quantitative results with information from qualitative methods, which not only allowed characterizing the intervention, educational policy and funding, but also led to more relevant policy recommendations” Maria Pia Cebrian.
Further queries:
Happy learning together!
Cristian Maneiro
Senior Consultant, Maestral International, PLAN Eval
Dear Colleagues:
Greetings from Uruguay!
I believe that the discussion brought up by Jean is very relevant. Mixed methods are undoubtedly a powerful strategy for addressing an evaluation object from different angles, and they are almost a standard feature of most evaluation Terms of Reference (ToRs) currently seen, whether for UN agencies or others.
However, I agree that sometimes the term becomes a cliché and is used without considering whether a mixed methods strategy is genuinely the most appropriate. It is assumed that different techniques (typically Key Informant Interviews and surveys) will provide complementary information, but there is often not a clear idea from the commissioners on how this information will be integrated and triangulated. In my view, successful cases occur when the integration process is well-defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions or selecting cases based on a survey for in-depth interviews).
Furthermore, I understand that with current technological developments, mixed methods have new potentialities. It's no longer just the typical combination of Key Informant Interviews and Focus Group Discussions with surveys; instead, it can include big data analysis using machine learning, sentiment analysis, and more.
Lal - Manavado
Consultant, Independent analyst/synthesist
Greetings to Emilia and other members!
As a person who ascertains the value of an evaluation with reference to its pragmatic import for a project in planning or completed to any given extent, I am happy to see your identification of the current debate as reductive.
Of course, this mode of thought seems to be deep-rooted in almost every field, and what has been done so far to rid ourselves of this incubus appears to be to invent a new phrase to describe it, viz. ‘thinking in silos’. Its extension into evaluation results in the inevitable qualitative vs. quantitative discussion.
I think it would be fruitful to think of evaluation as an effort to determine the adequacy of an objective to be attained or achieved by a project. This adequacy naturally depends on a number of variables one has to take into consideration which in turn vary with the circumstances. Let me give a few examples:
To sum up then, evaluation may one day be concerned with the adequacy of a result with respect to the quality and quantity optimally achievable under an existing set of circumstances.
Best wishes!
Lal.
Norbert TCHOUAFFE TCHIADJE
Associate Professor / Senior Researcher, University of Ebolowa / Pan-African Institute for Development Cameroon
I agree with JP's viewpoint.
We have to consider the toolbox that fits our evaluation.
The mixed approach is useful depending on the context: quality at the starting point and then quantity for evidence and measurement.
Thanks.
Anne Kepple
Senior Consultant, Food Security and Nutrition Statistics team, FAO
Dear colleagues,
For any of you who may not be aware of it, I highly recommend this book from my guru on applying mixed methods in evaluation: Mixed Methods in Social Inquiry, by Jennifer Greene.
Even though it was written 16 years ago, it is still very relevant. I especially like the five purposes for mixed-method evaluations: triangulation, complementarity, development, initiation, and expansion (also summarized in this paper: https://www.jstor.org/stable/1163620).
Anne Kepple
Olivier Cossée
Senior Evaluation Manager, FAO
Hello Jean, and thank you for your question!
I think all evaluation questions require a mixed approach. Data collection tools are just tools; they should be used opportunistically, when they work, but definitely not idolized. That would be like a plumber who loves wrenches but doesn't like screwdrivers, and who tries to do everything with a wrench, including screwing in screws... That would be absurd: a good plumber uses several tools, when and as necessary, and doesn't ask himself what type of plumbing requires only one tool...
Likewise, a good evaluator needs to know how to use a toolbox, with several tools in it, not just a wrench.
I agree with Vicente that qualitative work must always PRECEDE a quantitative effort. Before measuring something, you need to know why and how to measure it, and for that you need a QUALITATIVE understanding of the object of measurement. One of the most common mistakes made by "randomistas" is precisely that they spend a lot of time and money on surveys that are too long and complex, because they don't know what's important to measure. So they try to measure everything with endless questionnaires, and regularly fail.
[Translated from French]
Maria Pia Cebrian
Evaluation Officer, UN World Food Programme
Dear all,
(And thanks, JP, for starting this discussion; I don't reply much, but reading you all is always a great learning opportunity!)
From my perspective, methodology should always be at the service of the evaluation; that is, triangulating an evaluation based on the problem, and the use of one or another evaluative methodology, is relevant insofar as it provides a more comprehensive analysis of the subject of evaluation and the evaluation questions, as well as the recommendations that can be provided.
Regarding quantitative/qualitative methods, my two cents are this evaluation (in Spanish), which sought to demonstrate the effectiveness of the Peruvian government's investment in higher education through subsidies to people living in poverty. As part of the methodology, it used Mincerian models and univariate, bivariate and multivariate analyses, as well as surveys with open and closed questions, review and analysis of secondary sources, and in-depth interviews. The evaluation also analyzed and explained the quantitative results with information from qualitative methods, which not only allowed characterizing the intervention, the educational policy and the funding, but also led to more relevant policy recommendations.
Unfortunately it is in Spanish, but it also has some of its results in this article in English: "Returns to university higher education in Peru: The Effect of Graduation”, in HUMAN Review, 11(2), 2022, pp. 59-72 (Scopus, EBSCO, ISOC, REDIB, Dialnet) (Salazar Cóndor, 2022a)
Best!
Pia
Vicente Plata
Consultant
Dear Emilia and colleagues,
I think that your contribution on Prof. Shaffer's approach is absolutely relevant. From my experience in the field with evaluation missions, it is absolutely vital to develop a set of methodologies able to analyse what we need to understand. And in my experience that set of methodologies is always a mix of qualitative (first, to "explore the reality") and quantitative (to "deduce trends and magnitudes") methodologies. And, as you said, it is hard to treat them as "absolutely qualitative" or "absolutely quantitative". A canny interrelationship between qualitative and quantitative methodologies is always one of the most challenging steps in preparing an evaluation mission.
Looking forward to seeing more interesting comments on this very interesting evaluation topic, best regards
Vicente
Emilia Bretan
Evaluation Manager, FAO
Dear colleagues, and thank you Jean for provoking this.
Please bear with me while I bring a drop of academic content to the discussion, hoping we can expand it a bit.
What are mixed methods after all? I think the quanti x quali debate is quite reductionist; and honestly, after all these decades, I cannot believe we are still discussing whether RCTs are the gold standard.
I would like to bring in an approach that caught my attention, presented by Professor Paul Shaffer from Trent University (Canada). His approach focuses on mixed methods for impact assessment, but I understand it can be extrapolated to other types of studies, such as outcome assessment. What I like about his proposal is that it goes beyond and deeper than the quanti + quali debate.
In his view, the categories that supposedly differentiate quanti vs. quali approaches are collapsing. For example, (i) qualitative data are often quantified; (ii) large qualitative studies can allow for generalization (while scale/generalization would supposedly be a characteristic of quantitative studies); and (iii) induction and deduction inferences are almost always both present.
In light of that, what are “mixed methods”?
What “mixed methods” means is combining approaches that can bring robustness to your design, different perspectives/angles to look at the same object. Based on the questions you want to answer/what you want to test, ‘mixed methods’ for impact assessment could mean combining two or more quantitative methods. Therefore, different qualitative methods could be used to improve the robustness of an evaluation/research – and this would also be called ‘mixed methods’.
And then, going a bit beyond that: couldn’t we also consider the mix of “colonizers’” and “indigenous” approaches to be “mixed methods”?
I hope this can contribute to the reflection.
Cheers
Emilia
Emilia Bretan
Evaluation Specialist
FAO Office of Evaluation (OED)
Uzodinma Adirieje
National President, Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association
Thanks for this topic/focus.
I would add a 7th consideration on how qualitative methods interact with or differ from quantitative methods:
During the dissemination of the results and recommendations – How are results and recommendations from qualitative and quantitative methods disseminated for the most effective outcome/impact, and to whom?
Is there evidence that certain dissemination approaches for qualitative and quantitative results are more effective with, or better received by, certain or most stakeholders than other approaches?
Dr. Uzodinma Adirieje
Past National President, Nigerian Association of Evaluators (NAE)
CEO/Programmes Director, Afrihealth Optonet Association (AHOA)
Margrieth Nazarit Cortés
Lecturer, Universidad Mariano Gálvez
Interesting discussion. To start with, I think that the choice of quantitative or qualitative methods, or both, is in principle determined by one's professional background. The tendency of evaluators coming from professions such as economics, engineering or similar is to use quantitative methods, while evaluators from humanitarian fields use qualitative methods. The evaluation of programmes must overcome these tendencies and effectively use mixed methods in order to demonstrate the changes and progress made by programmes from both approaches.
One challenge, for example, is the design of indicators that are more qualitative, which are more difficult to construct, but are increasingly necessary as they allow information to be obtained related to the feelings and non-quantifiable effects of and on the population participating in the programmes.
[Translated from Spanish]