Overview

CLA is built on a foundation of rigorously collected, unbiased, and robust evidence that USAID and its stakeholders can use to make adaptive management decisions. CLAimDev helped build this foundation through its work on evaluation and other research. This section of the report discusses lessons learned, best practices, and recommendations drawn from CLAimDev’s experience building the evidence base, and summarizes the evaluations and research that CLAimDev conducted.

Evaluations and other research

  • STRIDE Evaluation

    NOVEMBER – DECEMBER 2021
    Metro Manila, Laguna, Bulacan, Pampanga, Tarlac, Cebu City, Davao City, Cagayan de Oro City, Iligan City

  • E-Peso Evaluation (Remote KII)

    NOVEMBER – DECEMBER 2021
    Metro Manila, Philippines and USA

  • ABC+ Evaluation

    SEPTEMBER – OCTOBER 2022
    Metro Manila (DepEd Central), Region 5 (Masbate City, Legazpi City, Albay, Iriga City, Camarines Sur, Sorsogon City, Tabaco City, Camarines Norte), Region 6 (Negros Occidental, Antique, Capiz, Iloilo City, Escalante City, Roxas City), BARMM (Cotabato City, SGA, Maguindanao)

  • SURGE Evaluation

    NOVEMBER – DECEMBER 2021
    Puerto Princesa City, Palawan; Tagbilaran, Bohol; Iloilo City; Cagayan de Oro City; Marawi City

  • MRP Evaluation

    Iligan City, Baloi, Marawi City, Saguiaran, Masiu, Ditsaan Ramain, Balindong, Marantao, Bubong, Butig, Cagayan de Oro City

* 48 interviews in total: 17 key informant interviews (KIIs) and 31 focus group interviews (which used KII questionnaires)

Insights from the meta-evaluation

CLAimDev produced a meta-evaluation summary report to capture lessons from the project’s evaluation experience and, more broadly, to generate findings and lessons that could apply to other USAID evaluations, strengthening USAID’s development effectiveness.

CLAimDev conducted a total of seven evaluation studies between April 2021 and June 2023 under the guidance and leadership of USAID/Philippines PRM Office, in close collaboration with the relevant technical offices. The meta-evaluation report summarized lessons from these, with reference to the following two questions: 1) What has been learned from the evaluation process in terms of design and management? 2) What were the common themes in the findings and conclusions across the various evaluations?

The meta-evaluation presented key observations on the following:

  • Findings on relevance

  • Findings on effectiveness

  • Findings on sustainability

  • Findings on the integration of gender equality and social inclusion (GESI)

  • Findings on capacity building

  • Feedback on evaluation utilization

  • Lessons on dissemination and learning

  • Lessons on conducting evaluations

The review of relevance, effectiveness, and sustainability identified the following facilitating factors as critical: 1) policy alignment with government and other development partners; 2) building relationships and trust with key counterparts; 3) flexibility and the use of adaptive management (especially during the COVID-19 pandemic); 4) responsiveness to the needs of counterpart personnel and beneficiaries; 5) beneficiary engagement; 6) implementation collaboration and partnership; and 7) GESI integration.

Some of the factors that hindered effectiveness included: 1) policy differences between industry and academe; 2) limited information flow among key players; 3) bureaucratic procurement processes, including USAID’s; 4) funding constraints; 5) performance indicator management systems that did not adequately capture key aspects of activity implementation and progress; and 6) changing political dynamics resulting from the local and national elections.

Overall, the Agreement Officer’s Representatives (AORs) gave very positive feedback on both the process of conducting the evaluations and their quality, especially the learning and dissemination events. Most AORs also indicated that the evaluations would be useful for informing the design of future strategies and activities.

Prioritizing evaluation use and dissemination was an integral part of each evaluation. Learning events provided a platform for presenting the key findings of the evaluations to relevant stakeholders, engaged participants in discussions, gathered insights, and encouraged the uptake and application of evaluation results. Knowledge and communication products complemented and added value to the learning events, enabling a larger audience to access the results.

Lessons on conducting evaluations included: 1) Incorporating remote data collection into the evaluation process, initially necessitated by travel and meeting restrictions during the COVID-19 pandemic, offered advantages: the resulting cost savings made it possible for CLAimDev to experiment with new types of learning events and knowledge and communications products; 2) Clarifying and developing indicators is useful to activity management and provides information on progress toward accomplishments (beyond a focus on standard indicators, which are not always relevant to project outcomes and accomplishments); 3) The timing of an evaluation must be considered relative to the stage of implementation (mid-term and/or end-of-project); and 4) Implementing learning and dissemination events is important for broader engagement with key stakeholders and beneficiaries.

Reflections: Dissemination and Learning