• SOW Development

    Assess an activity’s evaluability before making decisions on whether to conduct an evaluation.

    Evaluability assessments can be conducted at any point during an activity’s period of performance, including at start-up when the implementing partner is developing its work plan, indicators, and activity monitoring and evaluation plan. An evaluability assessment can help the Agreement/Contract Officer’s Representative (A/COR) and implementing partner structure implementation in ways that increase its eventual evaluability and design pilots that can be rapidly evaluated to provide timely evidence for adapting or scaling up promising interventions; it can also suggest the types of evaluation questions that may be answered later in the activity. Evaluability assessments are also particularly important for midterm performance evaluations because they provide the information that the A/COR and evaluation team need to define answerable evaluation questions.

    Technical offices and A/CORs should be prepared to actively engage with the evaluation team on the purpose and scope of an evaluation.

    This engagement will help the evaluation team develop the questions and methodology that will provide the A/COR with the evidence they need for their adaptive decision making and future program design.

    When developing a SOW, start with a learning question, rather than specific evaluation criteria or questions.

    The evaluation team can then work with the A/COR to develop evaluation questions that will provide evidence to answer the broader learning question. It is often useful to begin the process by examining the activity’s theory of change (TOC) and determining what evidence is needed to inform key causal hypotheses.

    Focus on the development outcome of interventions (how and why did intervention X matter in achieving the development outcome), not on the activity’s performance (did the activity do X or Y).

    The activity’s performance should be included in the evaluation, but it can easily be summarized from the implementing partner’s reports. However, in some cases the evaluation questions can focus on validating performance, particularly for midterm evaluations.

    More tightly focused evaluation questions will yield more useful and rigorous evaluation results.

    Many evaluations fail to provide actionable evidence because they seek to do too many things in too little time. Focusing on one or two aspects of an activity’s implementation is more likely to yield useful results, particularly if the evaluation results are time-sensitive, such as for adaptive management decisions and new activity designs that are needed in the near term. The more ambitious and wide-ranging the evaluation questions, the more time and money will be needed to provide useful results, and the more challenging it will be for the evaluation team to provide in-depth analysis.

    A/CORs should carefully review the draft SOW and discuss any concerns with the Program Office and evaluation team before the evaluation begins.

    This will prevent any misunderstandings later in the evaluation process and ensure that the A/COR is clear on what the evaluation will do and what types of evidence it will provide.

  • Evaluation Team Composition

    • Local evaluation teams are ideal, but at least one team member, and preferably the team leader, needs USAID evaluation experience or experience conducting evaluations similar to those done by USAID, preferably with projects similar to the one being evaluated. If this is not possible, consider including a third-country or U.S. national with USAID evaluation experience on the team. If that is also not possible, the external evaluation team should have access to an experienced USAID evaluator for consultations and guidance.
    • When selecting the evaluation team, be sure to include contingency plans for each team member. Evaluation team members may fall ill, not perform adequately, or have other problems that prevent them from completing the evaluation. Having back-up options on hand will prevent delays in completing the evaluation.
    • Experienced evaluators can work across sectors, even in sectors where they do not have specialized expertise.
    • The evaluation team should be balanced between evaluation experts and sector experts. The evaluation experts help ensure that the evaluation provides rigorous findings and conclusions, while the sector experts ensure that the evaluation results fully reflect the activity’s context and technical aspects.
    • Include a data analyst with expertise in both quantitative and qualitative analysis on the team. One of the most common problems in evaluation arises after fieldwork, when evaluation team members are found to lack the capacity to synthesize, organize, and analyze the data they collected. A data analyst can begin this work during fieldwork by providing the team with early read-outs on the evidence. The analyst can also support storyboarding sessions for the evaluation report by sorting through the data to provide the team with the specific evidence needed to support emerging findings.
    • Include a junior-level evaluation assistant on the evaluation team. This not only builds local capacity for future evaluations, but the assistant can also be a second notetaker during key informant interviews and organize the team’s notes for data analysis.
  • Data Collection and Fieldwork

    • COVID-19 taught the world that much can be accomplished remotely, evaluation included. Consider whether and how to use remote data collection and fieldwork when budgets are tight. Remote data collection can at times be more effective, such as when interviewing national government officials who may have little time for an in-person meeting but are willing to schedule a phone call or video conference.
    • When conducting fieldwork in areas unfamiliar to the evaluation team, invest in local field assistants who know key stakeholders and who can perform follow-up work with key informants when needed.
  • Analysis Process and Storyboarding

    Use an iterative, building block process for analysis and storyboarding. Do not wait until all fieldwork and data collection is complete to begin the analysis.

    • Do a quick review of the data collected each week to illuminate emerging findings and identify gaps in the evidence.
    • The evaluation team member responsible for data analysis should organize, collate, and synthesize all the data collected and present them to the team immediately after fieldwork is complete, or as soon as possible thereafter, while the evidence is still fresh in the team’s minds.
    • Hold a storyboarding workshop as soon as possible after the findings and conclusions workshop to create the report narrative in a detailed outline format.
    • Hold a validation workshop with the implementing partner, A/COR, stakeholders, and beneficiaries to reality check and contextualize the findings and conclusions and co-develop recommendations.
  • Report Writing

    • Report writing will take more time than envisioned in the SOW. Plan for that.
    • One team member should be the lead writer. Generally, this will be the team leader. However, if the team leader does not have strong English-language writing skills, another team member can lead the writing. In this case, the team leader should maintain primary responsibility for ensuring that the written report accurately reflects the evidence, findings, conclusions, and recommendations, and that it is well written, well structured, and presented in a narrative format.
    • Does the report explicitly answer the evaluation questions? If not, restructure the report so the answers to the evaluation questions are clear to the reader. Evaluation reports should be structured to provide findings and conclusions that directly answer the evaluation questions.