
Jillian Lenne

Independent consultant
United Kingdom

I have been involved in evaluation of research quality for the past 12 years (research papers as an editor, as well as projects and programs).

My contributions

    • I would like to contribute the following paper to the discussion. It was recently published by Outlook on Agriculture in a Special Issue on Agroecology:

      “Measuring agroecology and its performance: An overview and critical discussion of existing tools and approaches”

      Matthias S Geck, Mary Crossland and Christine Lamanna 

      Outlook on Agriculture 52:349-359



      Agricultural and food systems (AFSs) are inherently multifunctional, representing a major driver for global crises but at the same time representing a huge potential for addressing multiple challenges simultaneously and contributing systemically to the achievement of sustainable development goals. Current performance metrics for AFS often fail to take this multifunctionality into account, focusing disproportionately on productivity and profitability, thereby excluding “externalities,” that is, key environmental and social values created by AFS. Agroecology is increasingly being recognized as a promising approach for AFS sustainability, due to its holistic and transformative nature. This growing interest in and commitment to agroecology by diverse actors implies a need for harmonized approaches to determine when a practice, project, investment, or policy can be considered agroecological, as well as approaches that ensure the multiple economic, environmental, and social values created by AFS are appropriately captured, hence creating a level playing field for comparing agroecology to alternatives. In this contribution to the special issue on agroecology, we present an overview of existing tools and frameworks for defining and measuring agroecology and its performance and critically discuss their limitations. We identify several deficiencies, including a shortage of approaches that allow for measuring agroecology and its performance at landscape and food system scales, and the use of standardized indicators for measuring agroecology integration, despite its context-specificity. These insights highlight the need for assessments focused on these overlooked scales and research on how best to reconcile the need for globally comparable approaches with assessing agroecology in a locally relevant manner.

      Lastly, we outline ongoing initiatives on behalf of the Agroecology Transformative Partnership that aim to overcome these shortcomings and offer a promising avenue for working toward harmonization of approaches. All readers are invited to contribute to these collaborative efforts in line with the agroecology principle of participation and co-creation of knowledge.

    • Thanks, Seda, for your important question. As the Guidelines state several times, they were informed by the International Development Research Centre (IDRC) RQ+ Assessment Instrument (www.idrc.ca/RQplus). Hence some useful ideas and suggestions from a development organization are an integral part of the Guidelines.

      Perhaps the easiest way to answer your question is to use Table 7 on Pg. 19, Qualitative data themes and indicators per Quality of Science dimension with assessment criteria. This Table was developed for evaluating CGIAR research for development projects. As far as I can see, most of the themes and indicators of quality in a science-based research for development project are just as relevant to evaluating quality in a development project. Under design, as an evaluator I would want to know whether the design was coherent and clear and whether the methodologies fit the planned interventions. Under inputs, I would look at the skill base and diversity of the project team, whether the funding available was sufficient to complete the project satisfactorily, and whether the capacity building was appropriate for planned activities and sufficient to sustain impact after the project finished. Under processes, my main questions would concern the recognition and inclusiveness of partnerships, whether roles and responsibilities were well defined, and whether there were any risks or negative consequences that I should be aware of. Finally, under outputs, I would be interested in whether the communication methods and tools were adequate, whether planned networking engaged the appropriate and needed stakeholders, whether the project was sufficiently aware of whether the enabling environment was conducive to its success, whether, where relevant, links were being made with policy makers, and whether scaling readiness was part of stakeholder engagement.

      Section 4 of the Guidelines on the Key Steps in Evaluating Quality of Science in research for development proposes methods which are also relevant to development projects. These include review of documents, interviews, focus group discussions, social network analysis, the Theory of Change and the use of rubrics to reduce subjectivity when using qualitative indicators. The use of rubrics is a cornerstone of the IDRC RQ+ Assessment Instrument.

      1. Do you think the Guidelines respond to the challenges of evaluating quality of science and research in process and performance evaluations?

      Having been involved in evaluating CGIAR program and project proposals as well as program performance over the past decade, I have used an evolving range of frameworks and guidelines. For the 2015 Phase I CRP evaluations, we used a modified version of the OECD-DAC framework, including the criteria relevance/coherence, effectiveness, impact and sustainability. The lack of a quality of science criterion in the OECD-DAC framework was addressed, but quality of science was evaluated without designated elements or dimensions. Partnerships were evaluated as cross-cutting, and the evaluation of governance and management was not directly linked to the evaluation of quality of science. For the 2020 Phase II CRP evaluative reviews, we used the QoR4D Frame of Reference with the elements relevance, credibility, legitimacy and effectiveness, together with three dimensions: inputs, processes and outputs. Quality of science was firmly anchored in the elements credibility and legitimacy, and all three dimensions had well-defined indicators. During the 2020 review process, the lack of a design dimension was highlighted, given its importance in evaluating coherence, methodological integrity and fitness, as well as the comparative advantage of CGIAR in addressing both global and regional problems.

      The beta version of the Evaluation Guidelines encapsulates all of these valuable lessons learnt from a decade of evaluations and, in this respect, it responds to the challenges of evaluating quality of science and research in process and performance evaluations. During its development, other evaluation frameworks and guidelines were also consulted to gain a greater understanding of the evaluation of both research and development activities. As a result, it is flexible and adaptable, and thus useful and usable by research for development organizations, research institutes and development agencies.

      Recently, the Evaluation Guidelines were used retrospectively to revisit the evaluative reviews of 2020 Phase II CRPs with a greater understanding of qualitative indicators in four dimensions. Application of the Guidelines provided greater clarity of the findings and enhanced the ability to synthesize important issues across the entire CRP portfolio.

      2. Are the four dimensions clear and useful to break down during evaluative inquiry (Research Design, Inputs, Processes, and Outputs)? (see section 3.1)

      The four dimensions are clear and useful, especially if accompanied by designated criteria with well-defined indicators. They are amenable to a mixed methods evaluation approach using both quantitative and qualitative indicators. In addition, they provide the flexibility to use the Guidelines at different stages of the research cycle, from the proposal stage, where design, inputs and planned processes would be evaluated, to mid-term and project completion stages, where outputs become more important.

      3. Would a designated quality of science (QoS) evaluation criterion capture the essence of research and development (section 3.1)?

      In my own use of the quality of science criterion, with its intrinsic elements of credibility (robust research findings and sound sources of knowledge) and legitimacy (fair and ethical research processes and recognition of partners), it has captured the essence of research and research for development. Whether it will capture the essence of development alone will depend on the importance of science to the development context.

    • The Outcome to Impact Case Reviews (OICRs), which were part of the 2020 CRP Reviews, should be expanded as an integral part of future evaluations of initiatives of One CGIAR. They provide an efficient way of combining quantitative, bibliometric and qualitative assessments of quality of agricultural research for development.

    • Evaluation of quality of research for development

      Having been involved in reviewing and evaluating agricultural research for development projects and programs for several decades, I would like to share some observations.

      Value of bibliometrics and altmetrics

      In spite of some of the negative coverage of bibliometrics in current literature, they have an important function in evaluating the quality of published agricultural research for development papers. Published papers have already passed a high quality threshold, as they have been peer-reviewed by experienced scientists. Most international journals have rejection rates of over 90%, so only the highest quality papers are published. Bibliometrics provide a means to further assess quality through number of citations, journal impact factor (IF), quartile ranking and h-indices of authors, among other measures. Citations and h-indices reflect the quality of the published research within the scientific community. Altmetrics demonstrate interest in the paper among the authors’ peer group. The recent publication by Runzel et al. (2021) clearly illustrates how combinations of bibliometrics and altmetrics can be successfully used to evaluate the quality of almost 5000 papers published by the CGIAR Research Programs during 2017-2020. The Technical Note – Bibliometric analysis to evaluate quality of science in the context of the One CGIAR – greatly expands the number of potential bibliometrics that could be used to evaluate quality.
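      To make one of these indicators concrete, here is a minimal sketch of how an author's h-index is computed from a list of per-paper citation counts (the counts below are invented for illustration; the h-index is the largest h such that the author has h papers each cited at least h times):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still clears the threshold
            h = rank
        else:
            break
    return h

# Invented example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have at least 4 citations)
```

      The same sorting-and-thresholding logic underlies related author-level indices; in practice the citation counts would come from a bibliographic database rather than being entered by hand.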

      Are there alternatives to citations and IF? The giant scientific publishing companies such as Elsevier use citations and IFs to monitor the quality of their journals. Higher IF translates into higher sales of journal subscriptions. As such companies own most of the scientific journals, any alternatives would need to be endorsed by them – this is unlikely as they seem to be happy with the status quo. Currently there do not appear to be any recognized alternatives. A recent paper by Slafer and Savin (2020) notes that the quality of a journal (IF) as a proxy for the likely impact of a paper is acceptable when the focus of the evaluation is on recently published papers.

      Importance of qualitative indicators

      Qualitative indicators of research quality are just as important as bibliometrics and other quantitative indicators and should always be used alongside bibliometrics. The 2020 evaluations of the CGIAR Research Programs (https://cas.cgiar.org/publications) effectively used a range of qualitative indicators to evaluate inputs, processes and outputs under the umbrella of the Quality of Research for Development Framework using the assessment elements: relevance, credibility, legitimacy and effectiveness.

      IDRC recently revised its quality of research assessment – firmly anchored in qualitative assessment – to more effectively assess quality in a development context (IDRC, 2017). Of interest is the move to use indicators that look at positioning for use. IDRC has successfully used the RQ+ Instrument to evaluate 170 research studies (McLean and Sen, 2019).

      Subjectivity in qualitative evaluation cannot be eliminated but it can be reduced by employing a team of evaluators and by better defining the criteria, indicators and descriptions.  
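      A toy sketch of that idea, with invented criteria and scores: if each criterion is rated on a defined rubric scale by several evaluators, the mean dampens individual subjectivity, while the spread flags criteria where evaluators disagree and the rubric descriptions may need sharpening (all names and numbers here are purely illustrative):

```python
from statistics import mean, stdev

# Hypothetical rubric ratings: one score per evaluator, on a defined 1-4 scale
scores = {
    "credibility":   [3, 4, 3],
    "legitimacy":    [2, 3, 3],
    "effectiveness": [4, 4, 3],
}

for criterion, ratings in scores.items():
    # Mean = pooled judgement; spread = disagreement worth revisiting
    print(f"{criterion}: mean={mean(ratings):.2f}, spread={stdev(ratings):.2f}")
```

      A high spread on a criterion would prompt the team to discuss and tighten that criterion's descriptors rather than simply averaging the disagreement away.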

      Scientists often raise the issue that they are most interested in the impact of their research rather than its qualitative assessment. Evaluation of effectiveness in the context of positioning for use allows assessment of potential impact through indicators such as stakeholder engagement, gender integration, networking and links with policy makers.

      Integrating quantitative (including bibliometrics) and qualitative indicators

      The on-going development and refining of quantitative and qualitative indicators provides the potential to integrate them to provide more comprehensive evaluation of quality of research for development. This is an exciting area for future evaluations.


      IDRC (2017) Towards Research Excellence for Development: The Research Quality Plus Assessment Instrument. Ottawa, Canada. https://www.idrc.ca/sites/default/files/sp/Documents%20EN/idrc_rq_asses…

      McLean R. K. D. and Sen K. (2019) Making a difference in the real world? A meta-analysis of the quality of use-oriented research using the Research Quality Plus approach. Research Evaluation 28: 123-135.

      Runzel M., Sarfatti P. and Negroustoueva S. (2021) Evaluating quality of science in CGIAR research programs: Use of bibliometrics. Outlook on Agriculture 50: 130-140.

      Slafer G. and Savin R. (2020) Should the impact factor of the year of publication or the last available one be used when evaluating scientists? Spanish Journal of Agricultural Research 18: 10 pp.

      Jill Lenné

      Editor in Chief, Outlook on Agriculture and Independent Consultant