Beyond the Airtime: Using Rapid Mixed-Methods Assessment to Tune into Complex Change

By Morris Wenareeba | Care International in Uganda | CASCADE Project

Illustrations: Ivana Čobejová

The Context & MEL Challenge

Care International in Uganda’s CASCADE project used a radio drama, “Amagezi Itungo” (Knowledge is Wealth), to promote healthy diets and resilience among women and children. To ensure maximum accessibility, the drama was broadcast three times a week, twice a day, in six local languages (Karamojong, Madi, Acholi, Lango, Rutooro, and Lusoga) across five diverse regions of Uganda: Karamoja, Acholi, Lango, Busoga, and Tooro. This coverage spanned 15 districts: Adjumani, Gulu, Nwoya, Kitgum, Lamwo, Lira, Kyenjojo, Kyegegwa, Kamwenge, Kabarole, Kamuli, Abim, Moroto, Kotido, and Napak.

The knotty challenge was: how do we know if anyone is listening? More importantly, how do we measure if a passive media intervention is actually changing knowledge and practices in such a complex setting? Traditional surveys felt too blunt to capture the real story.


Key Learnings / Insights in Action
  • Disaggregation is non-negotiable. Our overall 29% awareness rate masked a critical reality: listenership was 91% in Busoga but only 7% in Acholi. A single ‘average’ number would have been dangerously misleading for decision-making.
  • The ‘mechanism’ can work even if the ‘reach’ fails. Our method enabled us to separate the effectiveness of the content from its delivery. Among listeners, over 90% could accurately recall and explain specific practices promoted in the drama, not just general concepts but detailed applications, such as ‘preparing seasonal calendars before planting’ and ‘mixing chicken and goat waste to make manure.’ The listeners’ ability to articulate these specific, actionable practices, combined with 74% reporting that community members had implemented changes, suggests strong message retention and influence. The drama worked, but it wasn’t reaching our goal of 1.2 million women of reproductive age across the project areas.
  • Qualitative data unlocks the “why” behind the numbers. We didn’t just find that people changed; we found out how. Listeners related to the characters' struggles and saw them succeed by working together. This motivated them to copy those exact solutions, leading them to start vegetable gardens and improve household cooperation. We also learned why they couldn’t, identifying key barriers such as the need for seeds and financial support.
  • Look for unplanned ‘ripple effects’. We found that 58% of listeners discussed the new practices with others. We call this a ripple because the message traveled further than the radio signal itself. We often view radio as a one-way channel where people listen silently. This data proved otherwise. It showed us that the community system relies on peer-to-peer sharing to validate and spread new ideas. The radio sparked the conversation, but the community kept it going.
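The disaggregation point can be made concrete with a few lines of arithmetic. The per-region tallies below are hypothetical, chosen only to mirror the reported extremes (91% in Busoga, 7% in Acholi); they are not the project’s actual sample sizes.

```python
# Illustrative only: hypothetical (aware, surveyed) counts per region,
# showing how a pooled rate describes neither region it averages over.
samples = {"Busoga": (109, 120), "Acholi": (8, 115)}

# Per-region listenership rates.
rates = {region: aware / n for region, (aware, n) in samples.items()}

# Pooled rate across both regions combined.
pooled = sum(a for a, _ in samples.values()) / sum(n for _, n in samples.values())
```

The pooled figure lands between the two extremes, which is exactly why a single ‘average’ number would have misled decision-making here.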

Rapid Integrated Assessment Methodology

Care International in Uganda designed a methodology that integrates quantitative and qualitative data into a single, efficient survey instrument rather than treating them as separate, time-consuming steps. Here’s how it works: we used Lot Quality Assurance Sampling (LQAS), a rapid and statistically rigorous sampling technique, to obtain fast and reliable data on program coverage and knowledge across 18 different districts (572 households in total, with approximately 32 households per district). But here’s the twist: we embedded open-ended, qualitative questions directly into this quantitative survey. This differs from traditional MEL. A standard survey tells you what (e.g., “74% changed practices”). Our hybrid approach immediately told us:

  • How they changed: ‘they started gardens,’ ‘applied manure,’ ‘worked together as husband and wife’ 
  • Why they changed (inferred from lessons learned): ‘the drama showed us cooperation brings results,’ ‘his friend advised him to attend CARE trainings’
  • So what: This suggested household nutrition was improving, and farming practices were shifting.
  • Now what: ‘but they need seeds, fertilizer, and ongoing training to sustain these changes’

By capturing what, how, why, so what, and now what in a single instrument, data collection becomes inherently adaptive.
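The analysis step behind this instrument can be sketched in code. This is a minimal, hypothetical illustration of an LQAS-style decision rule applied per district (“lot”), with the open-ended answers travelling alongside the quantitative result in the same record; the district names, field names, and decision threshold are assumptions for the sketch, not the project’s actual values.

```python
from dataclasses import dataclass

@dataclass
class Household:
    district: str
    heard_drama: bool          # quantitative: awareness (yes/no)
    practice_change: str = ""  # qualitative: "how they changed", free text

def lqas_classify(households, decision_rule=22):
    """Classify each lot as meeting the coverage target if the number of
    'yes' responses reaches the decision rule (threshold is illustrative)."""
    lots = {}
    for h in households:
        lots.setdefault(h.district, []).append(h)
    results = {}
    for district, sample in lots.items():
        positives = sum(h.heard_drama for h in sample)
        results[district] = {
            "n": len(sample),
            "aware": positives,
            "meets_target": positives >= decision_rule,
            # qualitative answers stay attached to the quantitative result
            "how": [h.practice_change for h in sample if h.practice_change],
        }
    return results
```

A usage sketch: a lot of 32 households with 29 aware would be classified as meeting the target, while a lot with only 2 aware would not, and each district’s record already carries the free-text “how” responses for follow-up analysis.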


📝 What we learned

We deployed this mixed-methods tool across 572 households, with a primary focus on our target audience: women and caregivers. The process was revealing. In one district, our quantitative LQAS data would show high awareness. In another, it would be in the single digits. This immediately flagged major differences in program reach and helped us identify where to investigate further.

However, the real magic occurred when we examined the qualitative data. Our quantitative finding that 96% of listeners understood “record-keeping” was brought to life by their own words: they could now “identify the expenses made and the profits earned”. The 74% who reported practice changes weren’t just ticking a box; they were “working together as husband and wife” and “growing leafy green vegetables”.

We also uncovered the obstacles that stood in their way. Listeners told us they understood the lessons, but often lacked the resources to put them into action. They specifically requested “modified seeds,” “fertilizer,” and “capital” to boost their farming. This nuance was vital. It allowed us to distinguish between a lack of knowledge and a simple lack of inputs.

The direct result was that we could give the program team precise, actionable feedback. We didn’t just say, “Improve reach.” We could say, “The content is excellent and driving change in Busoga, where reach was 91%. But in Acholi, where only 7% were reached, the delivery system is failing. Our qualitative data suggested this wasn’t a content problem (those who listened still showed high comprehension); the gap was likely due to a mix of factors such as broadcast partner, signal strength, or promotion, which required further investigation.”

An unexpected outcome was the power of drama to spark conversation. Finding that 58% of listeners shared the information indicated that the intervention had a secondary diffusion effect, promoting community-level dialogue. This insight is now central to our understanding of measuring influence.


🎯 The “Aha!” Moment: What Did You Learn About MEL in Complexity?

Our biggest ‘Aha!’ moment was learning to decouple the efficacy of the mechanism from the reach of the intervention. In a complicated system, these might track together. But in a complex system, with multiple interacting factors and unpredictable feedback loops, they can diverge dramatically.

We often assume these metrics move together. We think that if reach is low, the program effectiveness must be low. This assumption fails in a complex environment. An intervention can be highly effective for those who receive it but still fail to reach a wide audience due to logistical friction. We needed to measure these two elements separately to understand the true performance of the project.

Our method proved that the radio drama itself was a highly effective behavior change tool (high comprehension, high practice change among listeners). The delivery system (radio station choice, timing, signal strength) was the part that failed in some areas. A traditional impact evaluation might have just averaged everything out (29% overall awareness) and incorrectly concluded ‘the drama had a modest effect,’ missing the critical insight that we had an excellent intervention with a broken delivery system in specific locations. This approach enabled us to examine the system’s various components and make adjustments.

This distinction prevented us from making a costly mistake. If we had relied on the average numbers, we might have scrapped the entire program as ineffective. Instead, we knew exactly what to do. We retained the content that was working and focused our efforts on resolving the broadcast agreements and promoting it in the specific districts where it was failing.

My advice for practitioners is this: in complex environments, stop asking only “Did it work?” Use hybrid methods to ask, “To what extent is it working, how is it working, why is it working (or not working), for whom, in what context, and what else is happening?” This gives you the insights to adapt, not just to judge.