By Lavinia Tyrrel and Priya Chattier
Do research and evidence really make a tangible difference in Australian aid for development? Or are we just kidding ourselves that we are solving complex development challenges and making a lasting impact? And if we are making a difference, how can we make better use of research and evidence so that it not only maximises development outcomes, but also provides a visible return on investment for development assistance?
The promise of data, coupled with the push to use evidence to inform both learning and accountability functions, has led to significant expectations around MERL (Monitoring, Evaluation, Research and Learning).
Drawing on analysis we conducted as part of an RDI/La Trobe action research project, this blog unpacks why aid programs are not able to consume or utilise knowledge as much as they would like to, and what we can do to shift the status quo.
What we did and how we did it
We surveyed 80 program staff across four Australian-funded aid programs that Abt Associates manages. These four programs were selected because they reflect a mix of country contexts (PNG, Indonesia, Timor-Leste, Asia regional), include both national and international staff, and cover a range of delivery modalities (programs and facilities) and sectors (gender, governance, community and sub-national development, and human development). In the survey, program staff were asked questions relating to:
- Why are research and evidence important for effective programming?
- How well, in their judgment, do their programs use evidence to inform activity design, implementation or review/evaluation?
- What approaches do their programs use to integrate research into programming?
- What forms of research/evidence are most important to program staff?
- What are the main barriers to uptake of research in their programs?
- How can programs improve the use and uptake of research and evidence in their day-to-day work?
In addition, we interviewed staff to triangulate findings from our online survey. Full details of our method and findings can be found here.
What we found: five main barriers
Across the board, staff identified five main barriers preventing them from using research and evidence in programming.
1. Limited capacity
Many program staff reported that they have limited capacity to do research and learning themselves, or to commission research in a way that would meet their program's needs. For instance, some staff felt they lacked the ‘know-how’ to write a TOR commissioning a piece of policy-relevant research or a research project that employs rigorous methodology, or to estimate input days, cost and scope for a research project. Some programs also lacked strong in-house expertise in mixed-method (quantitative and qualitative) research designs, or experience setting hypotheses to research and analyse complex problems, making them reliant on external advice which may or may not be available to them.
2. Methodological differences
There are methodological differences between academia and development practice. Often, as program practitioners, we make choices based on the availability of data, recognising that these data can sometimes be biased. This does not excuse biased data collection practices, but rather points to the exigencies of the “complex contexts” in which program staff operate, and the reasons to utilise different kinds of research methods to inform adaptive programming.
3. Quality and methods dilemma
In a complex international development world, practitioners often struggle with the expectation that the quality of their data must pass the litmus test of reliability and validity. This calls for program staff to make strategic use of their trusted networks with power brokers to influence program goals and outcomes. These factors need to be reflected in program design and delivery, so that research becomes part of an intervention design that tracks and responds to changing contexts in a timely manner.
4. More time needed for commissioning and using evidence, and for learning
Our program staff also felt they lacked time to commission and use evidence and research, as well as time for learning and reflection to feed evidence back into programming. They simply do not have “thinking time” embedded in the way an aid program is designed, delivered and evaluated.
5. Ethical and political concerns
Many program staff also have concerns about who designs and funds programs, and who benefits from research. While there is certainly more appetite for evidence to inform development practice than in the past, research budgets are typically minimal and often the first to be cut across programs. Research, and the way evidence is used to inform donor decisions, tends to be donor-driven rather than demand-driven from the program/country perspective.
What works: factors enabling research and evidence to be put into action
We also asked program staff to tell us what they thought did work, and examined a good practice case study of Investing in Women (IW). The following six factors have enabled the IW team to overcome some of the barriers noted above:
- Quarantining budget at design which carries through to annual allocations: Each of the program pathways under IW, and the MEL unit itself, has budget set aside for research and analysis, and internal MEL/research staffing costs are resourced. This allocation was instigated at design and has carried through in subsequent annual budgets.
- Embedding a culture of MEL into programming: Research and analysis are only effective if they are integrated into learning and reflection processes that allow them to influence activity and budget allocations. IW facilitates a number of processes that allow program teams to engage directly with analysis and research, and to use this to inform implementation and review.
- Leadership on the part of the donor and the managing contractor or NGO: In IW, there is political will within the program team (including the donor representative) for evidence that provides a yardstick for program performance, its success or failure. This extends beyond ‘just’ results reporting for accountability purposes, to investments in understanding the complex problems the program is seeking to tackle.
- Partners in the country who value evidence and research: In addition to leadership on the part of the donor and Managing Contractor, IW also has national partners (government and private sector) that value and demand evidence to help them influence corporate policies and decision making.
- Evidence as a programming strategy: Many aid and development programs separate the commissioning and dissemination of research and evidence from programming. In some parts of IW (pathway three), information and evidence are seen as a way to influence gender norms and attitudes amongst businesses, investors and governments.
- Making academic partnerships work for programming needs (not vice versa): IW has brought research functions in-house. A research manager (based within the MEL team) works with academic institutes, local researchers and program teams themselves to commission evidence and research outputs. A key rationale for this is to bring the commissioning and production of research closer to those who intend to use it (demand-led vs supply-led).
Concluding thoughts
In conclusion, while there are good practice examples, wholesale change is not possible unless we also address the underlying reasons why research has been devalued in aid and development (at least in the Australian context). While the above findings may seem like common sense, they are often not easy to implement within the context of aid design and delivery, amidst the other political demands of operating in the complex and challenging contexts in which aid is delivered. However, we do hope this blog will at least spark ongoing dialogue about how best to increase the uptake of research and evidence in the aid and development community, and perhaps even make a lasting impact that contributes to development outcomes.
In a perfect world, we would like to see the big development challenges of our time informed by well-timed, relevant and topical academic evidence. Some believe that academics who focus on rigour tend to denigrate the value of relevance to practitioners: rigorous research is often not relevant, and vice versa. If only academic research and evidence could be made more accessible and digestible in time for policy and program design and delivery, the world would probably be a better place. Hence, research and evidence should become an integral part of the way aid is delivered in the international development sector, so that investments lead to meaningful and lasting change.