Adaptive Management – what’s in a word?
It’s striking how important words are.
USAID calls it Adaptive Management, DFAT calls it Thinking and Working Politically, DFID calls it Politically Informed Programming, and the World Bank just ignores it altogether.
More seriously – what is at issue here? At heart, I would argue that this agenda – TWP, DDD (Doing Development Differently) or PDIA (Problem Driven Iterative Adaptation) – means four things:
- being much more thoughtful and analytical at the selection stage (thinking about what is both technically appropriate and what is politically feasible);
- being more rigorous about our theories of change (how we judge change actually happens in the sphere in question) and our theories of action (how and why the interventions we propose will make a difference);
- our ability to work flexibly (meaning responding to changing policy priorities and contexts, and adapting implementation as we go – changing course, speeding up or slowing down, adding or dropping inputs and activities, changing sequencing etc.);
- and our willingness and ability actively to intervene alongside, and support, social groups and coalitions advocating reform for the public good.
It is the third characteristic that this blog is about: working flexibly.
It is the flexibility of TWP-informed programming that usually attracts most attention. Many words are used interchangeably and uncritically: flexible, responsive, adaptive, agile, nimble etc. As noted above, TWP emphasises responsiveness and adaptation. In the programs that I have worked on recently, it is adaptation that poses the major challenge to TWP: the ability to change course as implementation proceeds.
In discussions, the simple answer often given is that we remain wedded to the project framework and the annual plan and budget because, well, that is what we do and that is what the donor wants, and it’s important not to miss our spending target or drift off plan. I think this answer is clear, simple and wrong. Let me try to explain why.
When looking at issues of organisational change, public service reform and ‘capacity development’ it is now commonplace to structure the analysis in terms of three ‘layers’ or ‘levels’: the individual, the organisational and the institutional. We have known this for twenty years or so. At the individual level, staff need to be trained, skilled and equipped with appropriate tools to do the job. At the organisational level, appropriate systems, structures and procedures need to be in place. And at the institutional level, ‘the rules of the game’ have to incentivise a culture of performance and results. In most evaluations of public service reform or case studies of organisational change, we have found that it is one thing to help strengthen individual competencies and improve organisational structures and systems, but it is another thing altogether adequately to address the institutions that incentivise performance. We have learned that turning individual competence into organisational capacity requires institutional change.
But in implementing TWP I think this is turned on its head: currently there are many incentives in place for TWP. Donors say they want it – implementing partners certainly want it too. But the constraints are at the organisational and individual levels. The reason is that adaptation in program delivery requires four functions to be delivered simultaneously:
- implementation: the day-to-day, week-to-week task of delivering activities (how are we doing on physical progress?);
- monitoring: the regular and frequent checking of progress towards achieving outputs (are we on track against the plan, the budget – and most importantly – against outputs and possibly outcomes?);
- learning: our internal and reflexive questioning of progress – what are we learning about translating inputs and activities into outputs and outcomes (what is working and what isn’t?); and
- adapting: revising our implementation plan, adding unforeseen activities and dropping others, changing the balance of inputs, be they cash, people or events etc. (how are we changing the plan?).
Only if we ‘learn as we go’ can we adapt in real time: this requires delivery (implementation), data collection (monitoring), learning (reflection) and adapting (changing) to be undertaken simultaneously, not sequentially. And it is here I believe that we run into severe constraints at the organisational and individual levels.
At the organisational level, the development industry has got into the bad habit of bracketing the two very different tasks of monitoring and evaluation: development practitioners are programmed to say “monitoringandevaluation” all in one word, as if the two are actually one. Only recently has L (learning) been added – but added to ‘MandE’ to form MEL. The result in project management is that the responsibility for monitoring is handed over to structurally separate functional units far removed from operational delivery and implementation. Staff responsible for delivery say “monitoring is nothing to do with me”. And of course MandE staff tend to evaluate ex post, rather than in real time.
At the individual level it is hard to imagine implementation staff with the skills and competencies (let alone the time) to undertake the four functions noted above. The skills required for efficient and effective implementation against a plan and a budget are not the same as the skills for assessing progress, analysing what has worked and why, and having the experience and judgement to know which parts of the plan need adapting and in what direction – all in real-time.
The answer seems pretty straightforward: break up MEL by allocating monitoring and learning responsibilities to implementation teams; increase their resourcing by building implementation teams with multiple skill sets and competencies; insist on regular and robust internal review and reflection exercises (at least monthly); clarify precisely the level of delegated responsibility and authority for adaptation to be given to implementation teams; and negotiate all this with the donor and the partner government.
Or is this answer clear, simple and wrong too?