Expanding the Evaluator’s Toolbox: From Determining to Developing Value in Systems Change

By Drew Koleros (Mathematica | dkoleros@mathematica-mpr.com) and Michael Moses (Itad | michael.moses@itadinc.com)


Illustrations: Ivana Čobejová

  • As systems change evaluators, we have seen our role evolve from simply measuring against predetermined criteria to helping partners navigate complex systems.
  • The dynamic nature of formal and informal institutions worldwide creates an urgent need for evaluators to help stakeholders actively develop value by navigating uncertainty, fostering learning, and adapting strategies in real time.
  • Meeting this expanded role means 1) broadening technical approaches, 2) strengthening facilitation skills, and 3) incorporating varied voices—all important aspects of pivoting towards developing, rather than presuming to determine, value.

As systems change evaluators, we've had firsthand experience supporting the strategies and portfolio programs of philanthropies and foundations aimed at addressing wicked problems — from safe, affordable housing to better health outcomes for vulnerable groups. We've learned that more traditional evaluation approaches often fall short when tackling the dynamic and emergent nature of complex systems change. In their 2021 publication Evaluating and Valuing in Social Research, Thomas Schwandt and Emily Gates explore this issue through the concepts of values, valuing, and evaluating in social science research. They show how the notions of “what to value”, “how to value”, and “who values” begin to shift as evaluators move from evaluating simple, predictable situations—like assessing a direct service program with clear outcomes—to complex issues such as strengthening civic engagement across multiple countries or reducing corruption in fragmented political systems.

While much has been written about the more technical aspects of systems change evaluation, from developing theories of change that help to unpack complexity to selecting and mixing complexity-aware methods, Schwandt and Gates contribute to this evolving debate by highlighting how evaluators' positionality must shift when moving from evaluations of simpler, more predictable situations to more complex ones. In the former, the evaluator is positioned as the external expert and uses their expertise to determine the value of an intervention. This determination is based on a set of predetermined and agreed-upon criteria for what an intervention's success looks like, usually tied to the degree to which the intervention produces a predictable set of intended outcomes.

When evaluators engage in more complex situations, cause-and-effect relationships are less certain, and outcomes cannot be predicted in advance. In these situations, there is often disagreement not only about how to achieve desired outcomes but also about what the desired outcomes should be, as multiple stakeholders hold differing and sometimes conflicting perspectives in fluid and dynamic contexts. In these cases, agreeing on predetermined criteria to guide an evaluative judgment may not be possible given the wide range of perspectives—and might not be the most useful support an evaluator can provide. Given the dynamic nature of these complex situations, looking backward to understand “what happened” may be less important than developing value: helping to amplify and make sense of multiple perspectives about an intervention's role in an observed change (whether changes in the practices of individuals or organizations, institutional changes, or wider policy changes), and helping concerned groups think through “what next” by exploring potential future scenarios for how the system might respond to change.

This framing resonates with our experiences as systems change evaluators: how our roles in evaluations have shifted over time as we work more in complex systems, and how we believe the broader field of evaluation should continue to shift in the future. And it feels particularly urgent now, as formal and informal institutions worldwide are rapidly changing, driving systems change evaluators to move beyond determining the value of interventions based on predetermined criteria to actively developing value by helping stakeholders navigate uncertainty, learn from multiple perspectives, and adapt strategies in real time.

Drawing from our work with foundations like the MacArthur Foundation and the Robert Wood Johnson Foundation, we've identified three key areas where evaluators can expand their practice to better serve complex systems change efforts. This post shares practical lessons about 1) expanding technical approaches, 2) strengthening facilitation skills, and 3) incorporating varied voices—all of which are important aspects of pivoting towards developing, rather than presuming to determine, value.

Expanding Our Evaluation Toolbox to Meet the Complexity of the Situation
What this means:

Florencia Guerzovich and Tom Aston remind us that much of what we "know" is incomplete—fragments of larger, evolving stories that we do not fully understand. This is especially true in complex systems, where causality is murky, countless variables interact, and dynamics constantly shift. For systems change evaluators, the goal in this context can't be to achieve certainty—instead, we must equip ourselves and our partners with tools and approaches that enable us to navigate and learn from complexity. Rather than relying solely on traditional methods that work well for predictable interventions, we need approaches that embrace uncertainty. Such a bricolage approach, one that mixes and adapts methods to fit the situation, requires integrating storytelling alongside efforts to quantify change in order to capture the richness of systems change efforts.


What it looks like in practice:

Our work supporting the MacArthur Foundation's Big Bet On Nigeria Program illustrates this kind of approach. The program has worked to strengthen social accountability and reduce corruption at multiple levels—from grassroots community organizations to state-level institutions—in a highly volatile political environment. A traditional retrospective evaluation asking "Did this work?" wouldn't provide the real-time insights needed to support ongoing learning, strategy refinement, and improvement.

Instead, over four and a half years we've adapted a range of complexity-aware evaluation approaches—social network analysis, outcome harvesting, and most significant change—to generate evidence on high-priority learning questions in as close to real time as possible, creating more than 20 discrete learning products. In almost every instance, we've engaged both the donor and grantees in participatory sensemaking, working together to refine findings, co-create conclusions, and decide on next steps.


How it develops value:

By expanding our evaluation toolkit and crafting approaches alongside partners to help them meet the moment, we've been able to use evaluation to produce the action-ready, evidence-based insights they need to better understand the systems they're trying to shape and keep navigating toward change, even when facing emerging obstacles like closing civic space and chronic instability.


Building Facilitation Skills to Enable Collective Learning and Action
What this means:

In traditional evaluations, technical expertise—understanding theory, selecting methods, analyzing data—is essential. In systems change evaluations, technical skills remain necessary but are no longer sufficient. We need to combine “hard” technical capabilities with the ability to facilitate multi-stakeholder processes where different groups share expertise, learn together, and build relationships that underpin successful social change. 

This means becoming skilled at stewarding small and large group conversations, designing action-oriented strategic learning workshops (virtual and in-person), and creating spaces for the hard work of collectively pursuing change in complex systems.


What it looks like in practice:

The Learning to Make All Voices Count Initiative (L-MAVC) taught us this lesson. The program supported six civil society organizations across five countries, all focused on strengthening civic engagement in very different contexts while learning collectively about adaptive management.

As the program's learning partner, we initially showed up to the program’s first strategy workshop with meticulously crafted theories of change and a suite of exciting evaluation methods we thought would be useful. But after an awkwardly quiet first morning where participants struggled to engage, we realized our fancy tools were a distraction. Before we could do the technical work of framing out a learning approach, we needed to create space for project leads—all experts in citizen engagement and advocacy—to build trust and relationships with each other and with us.

We set aside our prepared materials and focused on facilitating connection and relationship-building. Though challenging, this approach eventually enabled us to build a real learning community that worked together over 18 months to develop a collectively owned program strategy, establish and leverage effective peer learning partnerships to support project implementation, and make sense of and respond to emerging evidence.


How it develops value:

Putting facilitation and relationships first transforms evaluators from external experts delivering reductive answers into critical friends accompanying partners in their change efforts. Our job is to provide structure and support for building collective understanding, pivoting strategies based on emerging challenges and insights, and navigating conflicting perspectives, which means building the relationships and trust through which lasting change emerges.


Incorporating Multiple Perspectives and Ways of Knowing
What this means:

In simple situations, evaluators engage in single-loop learning—determining if interventions work as intended. In complicated situations, we shift to double-loop learning—working with stakeholders to determine which interventions produce desired outcomes under different conditions. In complex systems, we must recognize that diverse actors may disagree not only on how to achieve outcomes but on what those outcomes should be.

Systems change evaluators must engage in triple-loop learning, surfacing underlying paradigms that shape how different actors define both problems and solutions. This means actively seeking voices beyond traditional evaluation stakeholders and recognizing multiple forms of evidence—from quantitative data to rich narratives based on lived experience.


What it looks like in practice:

Our evaluation of the Robert Wood Johnson Foundation's Transforming Public Health Data Systems Initiative demonstrates this approach. Rather than simply assessing whether funded activities achieved predetermined goals, we engaged multiple system actors to understand how the public health data system was changing from different perspectives.

We started with a literature review to understand change from the perspective of formal research. Through participatory workshops with funded partners, we explored what and who was missing from this initial understanding. Finally, we engaged newly identified system actors, including those working directly with communities most impacted by public health data system problems, to generate further insights. This revealed multiple perspectives on how change happens and identified new ways of catalyzing it, challenging existing paradigms about how best to intervene to shift the system.


How it develops value:

This approach helps shed light on issues often beyond traditional evaluation scope and opens pathways that might otherwise remain hidden. By examining multiple perspectives and forms of evidence, we equip all system actors—including those most affected by systemic problems—with insights they need to make meaningful and lasting change.


Moving Forward: Practical Steps for Evaluators

These three expansions of evaluation practice—embracing technical complexity, strengthening facilitation skills, and incorporating multiple perspectives—represent a fundamental shift from determining to developing value. For evaluators ready to embrace this approach:

  • Start with relationships: Before deploying technical tools, invest time in building trust with partners and stakeholders
  • Embrace uncertainty: Design approaches that prioritize learning and adaptation over definitive (and reductive) answers
  • Seek a broader range of voices: Actively include perspectives beyond traditional evaluation stakeholders
  • Focus on action: Frame evaluation as a tool for navigation and improvement rather than static assessment

As the world continues changing in unexpected ways, systems change evaluators have a crucial role in helping partners navigate complexity, learn from multiple perspectives, and adapt strategies based on emerging evidence. By expanding our toolkits in these three areas, we can better support the collaborative learning and relationship-building that lasting systems change requires.