What constitutes ‘sufficient evidence’ for aid program and policy makers to take decisions – about budgets, activities and so on – in aid and development? I have been talking about this with academics, officials, NGOs, project managers and donors for a number of years now – ever since working with The Asia Foundation (TAF), where the question first came to a head (while we were “noodling around”[1] with politically-informed approaches to programming). Here are some thoughts:
- What do we mean by evidence? In aid, those making decisions often see particular forms of knowledge as more ‘credible’, ‘unbiased’ or ‘scientific’. Whether we do it consciously or not, we (and I’ve been guilty of this) are more likely to gravitate towards the document headed “randomized control trial” than the one headed “mythology and community histories”. The risk here is twofold. Not only do we overlook tacit forms of knowledge that can improve how we design and implement aid programs, but we inadvertently disempower the very people (local researchers, change makers, officials, national project staff) whose views we want front and centre of problem solving, design, and program adaptation and review. [And if we’d done this at TAF, we never would have been able to use the political epiphany one particularly savvy staff member overheard in a men’s bathroom – and which spun their project onto a different course…]
- Who is able to produce it? A narrow definition of ‘sufficient evidence’ (relying on levels of methodological rigour only possible at top-tier research institutes) implies that program managers are mere ‘do-ers’, not ‘thinkers’ capable of producing and integrating evidence into daily decisions. And if we have learnt anything from the Monitoring, Evaluation and Learning side of aid, it is that knowledge production and uptake must be core to project management.
- Who is the evidence for? Information is power. Those higher up in the aid world (politicians, senior bureaucrats) often want simple, unambiguous and aggregated data on results and performance – to prove how public funds are being spent overseas, or to validate a policy position or world view (“policy-based evidence versus evidence-based policy”[2], anyone?). While this can have an important place for accountability purposes, it is not what most practitioners want. They want evidence that reflects complexity: why are things happening in this location and not others; are our assumptions holding true, and why or why not? Linda Kelly (Praxis/La Trobe), Lucy Moore (PNG Governance Partnership) and I will be reflecting more on this next week at the AAC 2020.
- When do we get it? Timeliness is also an issue. Most projects operate on a three-year timeframe with few opportunities to make substantive adjustments to the program (in terms of activities, budget, outcomes) during implementation. So getting evidence and information to the right people at the right time is important. Primary research from a third-party academic institute is not known for quick turnaround (and this is often for good reason) – but tacit knowledge, on the other hand, is readily available, easy to collect and share, and a way to fill the timeliness gap.
What can we do about it?
There are lots of practical things we can do to shift the pendulum towards valuing more tacit, locally-led and relationship-based forms of knowledge, alongside what we already know and love – here are three for starters:
- Decolonising the research agenda – co-designing research topics and analytic agendas, questions and methods – and undertaking research and analysis – with policy makers, local actors and program managers. Those who will use it should ultimately be part of its inception and implementation.
- Giving a name and credibility to different forms of knowledge – to elevate the many ways in which practitioners already use evidence (e.g. literature reviews, community histories and perceptions, political insights from a counterpart in government, secondary analysis of surveys) to a more equal footing with others.
- Requiring and resourcing learning and reflection as a core part of all program management, and linking learning to changes in activities and budgets – to ensure that multiple forms of evidence (be it from field trips, an RCT, knowledge from relationships or a community case study) are fed directly into design, implementation and review.
Some final words of caution
But a broader agenda and definition for evidence is not without its risks. Knowledge is power. Information can be used and abused, and this blog should not be read as a licence to subvert or misrepresent information for political or personal gain. Checks and balances are needed, such as third-party review or regular reflection sessions to contest information during implementation (rarely will two people looking at the same piece of information interpret it the same way). Evidence also needs to be triangulated: an analysis of public sentiment towards government drawn from social media, for example, should be tested against what the literature or a recent country-wide survey on the issue says. This is something we don’t always have the time or incentive to do in development (beyond advocating for it in MEL frameworks), yet it is critical if we are to show that we value both ‘scientific’ and ‘tacit’ forms of knowledge and hold them to the same standards.
[1] One of Jaime’s favorite catch phrases.
[2] We cannot claim this great quote – it came from an LTU staff member at the recent RDI/LTU action research project meeting.