Online networking for stir-crazy development workers in lockdown is apparently a thing now. Led, at least in Australia, by the Research for Development Impact Network (RDI), it’s a bit like speed dating, but without the awkward silences and musty pub smell.
The most recent event focused on Monitoring, Evaluation, Research and Learning (MERL). Specifically, what on earth does MERL look like for an adaptive program?
Here are the three big conclusions from the networking event:
First. Flexibility, responsiveness and adaptation are different – and it matters that we get the language right up front, before we rush ahead with MERL.
Adaptation means picking a single problem and outcome, then relentlessly going after that by adjusting strategies, activities, budgets and tactics to find the most effective path to change. The outcome remains the same, but everything else is up for grabs as new information is learnt or the context changes.
By comparison, as Graham Teskey has eloquently defined (here and below), flexibility and responsiveness are different: flexibility is about ramping up or down the pace of implementation (the former), while responsiveness means adjusting goals in response to major unexpected events or shifts in government policy (the latter).
The key takeaway for MERL? It’s possible to be responsive and flexible without a whole lot of evidence and learning (the decision is often made by higher-ups in the partner government or aid headquarters) – but you cannot be adaptive in the absence of evidence and learning.
Second. The corollary of the above: adaptive management places a high burden on MERL systems and staff. There are four practical implications of this:
- Value learning and contextual knowledge. Adaptive management requires a heavy emphasis on the ‘L’ (learning) bit of MERL, and MERL must be treated as core to programming – MERL is something that everyone does, not a separate function undertaken off to the side by M&E wonks. Adaptive management also means putting greater value on tacit and contextual knowledge – national staff, networks and political insight, not just technical expertise. I have also blogged about this here previously.
- Real-time, quick and dirty data. Information and learning must arrive in as close to real time as possible – there is no point in the key piece of data or lesson coming at the mid-term review, well after the reform opportunity has passed. Adaptive management also means being OK with good-enough and quick-enough information and data, not always international best-practice methods and studies.
- Contest information and make it actionable. Adaptive management is an art, not a science; it is based on judgement – so evidence needs to be contested (especially with local perspectives), and MERL systems have a critical role in facilitating this contestation process. Evidence must also be communicated and presented in ways that allow decision makers to act on it – reams of information are never going to be useful in an adaptive program unless they are actionable. This means presenting data in ways that allow people to choose between strategies and options.
- Document decisions and get someone to validate what you’re doing. Adaptive programs must also document what decisions were made, when, and the rationale for them. Without this paper trail, it will be nigh impossible to remember why things were done the way they were, or to have an explanation ready for auditors and evaluators when they come in to assess whether it’s worth continuing with this adaptive malarkey (or just going back to the tried and tested log frame). It is also worth building in funding and time to pay for a regular semi-independent review – someone (or several people) with external legitimacy who can explain what the program is doing and why, act as a critical friend to the program (is this the right strategy? Are you sure?), and communicate to others what the program is doing and why. See, for example, John Sidel’s role in the CfC program.
Third and finally. No matter how good your MERL system is, adaptive management in aid programs is unlikely to work unless three groups come together. This is illustrated perfectly in the diagram below (full credit to Duncan Green for the diagram; all blame on me for the edits made to it).
Donors (on the left) need to resist managing at the input/output level and delegate decisions about activities and budgets to their intermediaries. Central to all this is both (i) trust and honesty – donors will only “let go” if they trust the intermediary can manage higher levels of autonomy – and (ii) having people in donor agencies willing to defend adaptive programs and stop the slippage back to a traditional planned approach (Jaime’s great concept of ‘intrapreneurs’).
Intermediaries – the INGOs, managing contractors, NGOs and people wedged in the middle – need the appropriate incentives and operational and program platforms to drive adaptation in their organisations. This is different from a planned delivery approach, and touches everything from:
- Team composition – soft not just technical skills
- Management systems to handle greater flexibility
- Local relationships and partnerships (not principal-agent)
- Processes for obtaining evidence at the right time, contesting it and making judgements on it
- Setting up risk, budget management and ops to support adaptation
Last but not least, local reformers – these people need the power to define problems and possible solutions to them. They need to be equipped, trusted and allowed to use evidence and make their own judgements about when to stay the course, when to adjust, and why.
That’s it. Three big lessons, and lots of development workers now happily bunkered down on Zoom with their newfound professional soulmates.
Graham Teskey and I also have a paper in the works on how to do adaptive management – in gory detail (everything from budgeting to recruitment). So watch this space…