Evaluation Methods and Limitations

LOCATION AND GEOGRAPHIC SCOPE

The evaluation covered Regions V (Bicol), VI (Western Visayas), and a small portion of BARMM (Cotabato and Maguindanao Division I), where ABC+ is being implemented. The sampling frame spanned the division, district, and school levels, from which informants were purposively selected using ABC+ participant lists.

EVALUATION DESIGN

Anchored in the theory of change, the evaluation used a theory-based, mixed-methods design to assess ABC+'s performance. The methodology applied concurrent mixed methods to better understand the implementation of ABC+ through complementary qualitative and quantitative lines of inquiry. Key informant interview (KII) and focus group discussion (FGD) participants were purposively selected, while survey respondents were randomly sampled. The minimum acceptable response rate for the adapted survey was 50 percent (Babbie, 1990).
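To make the sampling and response-rate logic concrete, the sketch below draws a simple random sample from a participant roster and checks the completion rate against the 50 percent minimum. All names and counts are hypothetical, not figures from the evaluation.

```python
import random

# Hypothetical roster of ABC+ survey-eligible participants; names and
# counts are illustrative, not actual evaluation data.
roster = [f"teacher_{i:03d}" for i in range(1, 401)]

random.seed(42)                      # reproducible draw
sample = random.sample(roster, 200)  # simple random sample of 200

completed = 123                      # hypothetical number of completed surveys
response_rate = completed / len(sample)

# Babbie (1990) treats 50 percent as the minimum acceptable response rate.
status = "meets" if response_rate >= 0.50 else "falls below"
print(f"Response rate of {response_rate:.0%} {status} the 50% minimum")
```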

Although the ABC+ activity's objectives and design did not change with the onset of COVID-19, the pandemic significantly altered the context within which it was implemented. To capture unintended outcomes, the evaluation used Outcome Harvesting (OH) to uncover unexpected results and contributions not envisioned in the theory of change.

DATA COLLECTION

In this evaluation, mixed methods were used to analyze the implementation of the ABC+ interventions. The quantitative method was an online survey; the qualitative methods included literature reviews, KIIs, FGDs, OH workshops, and the Most Significant Change Technique (MSCT). Using multiple data sources through mixed methods enabled the evaluation to cover a broad spectrum, conduct a more in-depth analysis, and achieve a more holistic understanding of the data through triangulation (Yin, 2003). Methods were conducted both virtually and in person, in the following sequence: desk review, survey, KIIs, FGDs, OH workshops, MSCT, and review of supplementary materials. The evaluation team also conducted a validation workshop to verify the accuracy of the evaluation findings.

The evaluation used descriptive statistics and analytic data visualizations for the quantitative data, disaggregated by sex and location to the extent possible. This approach is consistent with USAID's embedded monitoring and evaluation in the Program Cycle. Survey data were analyzed through descriptive statistics, while ABC+'s progress toward its indicators was analyzed using ABC+ quarterly and annual reports. However, the absence of needed data at the time of analysis precluded: 1) comparison of division- and school-level data at baseline and midline; and 2) determination of the effect size of interventions by cohort, division, sex, and location. Classroom observations were also not conducted due to the COVID-19 pandemic restrictions imposed by DepEd. The survey items are found in Annex 13.
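As an illustration of the disaggregation described above, the following sketch computes descriptive statistics by location and sex using pandas. The column names and values are hypothetical; the actual analysis used the evaluation's own survey dataset.

```python
import pandas as pd

# Hypothetical survey extract; column names and values are illustrative only.
df = pd.DataFrame({
    "region": ["V", "V", "V", "V", "VI", "VI", "VI", "VI",
               "BARMM", "BARMM", "BARMM", "BARMM"],
    "sex":    ["F", "M", "F", "M", "F", "M", "F", "M",
               "F", "M", "F", "M"],
    "score":  [3.8, 3.5, 4.0, 3.6, 4.1, 3.9, 4.3, 3.7,
               3.2, 3.0, 3.4, 2.9],  # e.g., a 5-point Likert item
})

# Descriptive statistics disaggregated by location and sex.
summary = (
    df.groupby(["region", "sex"])["score"]
      .agg(["count", "mean", "std"])
      .round(2)
)
print(summary)
```

A grouped summary of this kind makes differences across regions and sexes straightforward to inspect.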

The qualitative data were recorded and transcribed in worksheets and subjected to content analysis using computer-assisted qualitative data analysis software (CAQDAS). The evaluation team coded the responses and computed an estimated inter-coder reliability index. A constant comparison method complemented the index: the team checked the coherence of responses, codes, and themes to ensure the credibility of the results for the audience of this performance evaluation. The team also applied content analysis to the outcomes harvested. Qualitative data analysis of the FGD, KII, MSCT, and OH data was used to identify emerging themes.

Computer software, such as SPSS and NVivo, was used for data processing and analysis in the quantitative and qualitative analyses, respectively. The codes used and the reliability results are found in Annex 14.
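Although the team's reliability results were produced with the tools above, the kind of inter-coder reliability check described can be illustrated with a short Python sketch. Cohen's kappa is assumed here as the reliability index, and the codes are hypothetical, not drawn from the evaluation's actual codebook.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two coders to the same eight interview
# excerpts; the labels are illustrative, not the evaluation's actual codes.
coder_a = ["relevance", "effectiveness", "relevance", "sustainability",
           "effectiveness", "relevance", "sustainability", "effectiveness"]
coder_b = ["relevance", "effectiveness", "sustainability", "sustainability",
           "effectiveness", "relevance", "relevance", "effectiveness"]

# Raw percent agreement between the two coders.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa adjusts percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```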

The evaluation team combined and compared the quantitative and qualitative analysis results for a holistic understanding of ABC+ concerning relevance, effectiveness, and sustainability.

The design matrix in Table 1 summarizes the evaluation design and methods. Details of the evaluation methodology, such as the data collection methods and their corresponding participants, data management, and ethical considerations, can be found in Annex 18.

VALIDATION

The validation workshop was conducted via Zoom. Representatives from Regions V, VI, and selected school districts in BARMM participated in the workshop held on February 24, 2023. The ABC+ evaluation findings, conclusions, and recommendations were presented, and participant feedback was collected.

The validation workshop used a tool consisting of 34 statements capturing the results and recommendations under relevance (11 statements), effectiveness (11 statements), and sustainability (12 statements). It was sent to the participants two weeks before the validation workshop, together with the presentation of findings, to give participants time to review and evaluate the results. Interviews and narratives from the participants supplemented this tool.

There were unexpected technical glitches (i.e., difficulty connecting to the internet) during the virtual validation, and some attendees had overlapping activities (i.e., an ABC+ workshop and a last-minute DepEd seminar running simultaneously with the validation workshop). As a result, some attendees were either unable to attend or were distracted during the process.

A validation survey was also conducted, yielding 65 respondents with the following regional breakdown: Region V, 17; Region VI, 40; and BARMM, 8.

Overall, albeit drawing on a small number of informants, the validation survey results generally confirm the midline evaluation findings and recommendations for ABC+. Across regions, respondents' answers affirm the presented findings and support the given recommendations. A very small number of responses disconfirmed particular statements (i.e., marked as not true in the respondent's locale), which may indicate that program implementation has not covered the entirety of a target location. Combined with the data gathered during the online validation workshop, the validation survey results confirm the midline evaluation findings for ABC+.

The documentation of the workshop and the survey can be found in Annex 20.

KNOWN LIMITATIONS TO THE EVALUATION DESIGN

This performance evaluation has four potential limitations: reduced sample size, inaccessible sites, limited recall in self-reported data, and limited access to documents.

First, some respondents selected for the KIIs and FGDs could not participate due to prior commitments, health reasons (including COVID-19), weak internet connectivity, and power outages, which reduced the number of actual versus expected participants. Second, some study sites were inaccessible because of their distance from the regional or district offices (i.e., fewer areas were visited in BARMM than in the city proper of Cotabato). Third, the self-reported data from the survey, KIIs, and FGDs depended on what respondents could remember. Lastly, accessing documents was difficult where ABC+ contact persons were no longer connected with their schools or had transferred to other schools or offices.

Evaluation Design Matrix