Monitoring, Evaluation and Learning for Complex Programs in Complex Contexts: Three Facility Case Studies

By Tara Davda and Lavinia Tyrrel

Aid is complex, and it is delivered in complex contexts. Any seasoned development practitioner would agree with this.

For most of us, aid is about supporting positive change, and change necessarily means a renegotiation of power and resources. It means understanding the interests, motivations and incentives of those with a stake in keeping – or changing – the status quo. These interests are often hidden from outsiders and hard to predict until people act or power and agency are exercised.

While there is a growing body of evidence about how to design and implement programs that respond to this complexity (see here, here and here), little has been written on monitoring, evaluation and learning (MEL) for aid programs working in complex contexts. Further, where evidence does exist, it focuses on single-sector projects – not on MEL for aid portfolios, which are themselves complex: using a range of modalities and targeting a variety of development problems.

This knowledge gap has specific implications for the high-value, multi-sector ‘Facilities’[1] that Abt Associates manages on behalf of the Australian Department of Foreign Affairs and Trade (DFAT). Specifically: how do we judge the overall performance of each Facility? Is it possible to meaningfully aggregate results arising from such varied portfolios? Just how much ‘contribution’ to a high-level development goal is required, and how can a convincing case for it be constructed? Do we possess the skills to monitor progress in real time and adapt our programs accordingly? And most importantly, to what extent does the use of the project framework hinder or help MEL for Facility-wide performance?

Given Abt is managing three such DFAT Facilities – KOMPAK Indonesia, the Governance Partnership in PNG and the Partnership for Human Development in Timor-Leste – we undertook research into these very questions, taking a case-study approach across the three Facilities.

Our key take-away? Traditional forms of monitoring and evaluation – where the primary focus is on accountability, ex-post learning and evaluation, and linear change – do not lend themselves well to the Facility model.

This stems from one simple fact: conventional forms of MEL are based on a largely linear project model, i.e. one which is effective in simple change contexts[2], where there is a clear line of sight between activities, inputs, outputs and outcomes. In complex portfolios working in complex political contexts, where institutional or behavioural change is the underlying goal, this model is less effective. In addition, because the justification for the Facility model includes cost effectiveness and delivery efficiency (e.g. see the PNG Investment Design), Monitoring, Evaluation and Learning Frameworks (MELFs) must also be able to explain achievements at the portfolio level as well as the individual project level, showing that the whole is greater than the sum of the parts.

Thus, in each case examined, our research found that teams needed to develop their own approaches to try to overcome these challenges. Our paper identifies seven areas where lessons have emerged or where deviation from more ‘traditional’ MELFs was required. They include:

  • Clarifying the Facility’s strategic intent or overall Theory of Change (i.e. not just a Theory of Action[3]) and the strategic plan that guides development of a MELF should be done as early in the investment as possible – even if it is only a best guess. For all three Facilities this strategic clarity took time to emerge but, once in place, has been critical in allowing MELFs to be developed;
  • The demands that multi-sector investments place on MELFs are greater than current donor guidance acknowledges. In the cases we explored, MELFs were required to perform multiple, often competing functions including public and performance accountability, public diplomacy and communication, and evaluation and internal learning. KOMPAK is overcoming this challenge by using different datasets and processes for different audiences (e.g. reporting on a simple set of indicators for the donor while also running regular monitoring, learning and reflection processes for program teams);
  • Telling a contribution story is hard. Explaining the Facility’s achievements, without simply aggregating results up from one level of the project frame to the next, is challenging. However, there are good examples – such as using independent approaches to evaluation (e.g. the Quality and Technical Assurance Group in PNG) or setting qualitative or process level indicators (e.g. for collaboration and responsiveness) coupled with review and reflection sessions;
  • Defining indicators capable of describing how outputs led to Facility outcomes, and how these contributed to the Facility goal, has tested the limits of conventional MELFs. We found that the higher up the hierarchy of the program logic, the harder it was to define, measure and understand change. The PNG program is looking to address this by supporting an annual Governance Update. The Update reflects on the state of governance in PNG, against which the Facility considers its contribution to change;
  • In most donor monitoring systems, baselines are considered best practice, but the challenge of creating them – and hence of describing Facility contribution – has varied according to the availability, accuracy and sophistication of secondary data sources in each country. The Facilities have overcome this by using mixed-methods approaches: setting quantitative baselines for particular locations or projects where hard data is available, and relying on qualitative assessments for parts of the Facility that work in more adaptive and flexible ways;
  • ‘Learning by doing’ lies at the heart of the thinking and working politically (TWP) and adaptive management agendas, yet it was one of the most challenging aspects of each Facility’s MELF. In our paper, we identify a need to a) develop systems which embed learning into programming (such as PNG’s use of Strategy Testing), b) overcome incentives to focus on monitoring for output reporting only, and c) collapse design, implementation and monitoring by undertaking these tasks simultaneously; and
  • Finally, there is a need to rethink the way MEL is resourced. In the case studies we explored, identifying staff who can apply MEL to projects which work in adaptive and politically informed ways was challenging. We argue that quarantining budgets for MEL activities and recruiting staff for non-technical traits (e.g. their capacity to promote learning, curiosity and reflection amongst teams) could address this.

Each Facility has had varying levels of success in designing and implementing Facility-wide MELFs. The critical lesson is that it is difficult to describe, summarise and plan an (often experimental) portfolio of investments using a linear-change / project framework approach. In the case-study research summarised here, the rigid use of linear-change models served as an impediment to applying MEL effectively to Facilities. Although the project framework serves important accountability purposes, it has sometimes been mistaken for a hard performance benchmark – working actively against more flexible and adaptive forms of program management.

If the international community is serious about transforming how complex programs and complex change are measured – then we think the place to start is not MEL methods, but the logic of the project framework itself.

……………………..

[1] A portfolio of investments, where the components are amalgamated for reasons which usually include cost effectiveness and delivery efficiency, and sometimes include development contribution.

[2] A complex change context is one where the relationship between cause and effect (and hence between inputs and outcomes) is very hard to predict. “While experience and principles from other situations may guide the design and implementation of such work, it is often the case that it is only by probing and acting that understanding is developed.” (Roche and Kelly 2012: 8-9). This is in contrast to changes which may be ‘simple’ or ‘complicated’, where most (if not all) variables are known up-front and the relationship between cause and effect is easier to predict or uncover with a bit of consultation or reflection on past experience or lessons learnt elsewhere.

[3] A theory of change refers to how a program assumes development change occurs in the country or sector in which it is working (regardless of what the program itself does to affect this change). A theory of action details how the program (its activities and inputs) will contribute to these changes. See this link for further detail on ToAs and ToCs.

 
