“That’s not my job!” Responsibility of getting evaluation results into use

  • By Fran Walker, ODI
  • 06/11/2015

Ruth Nguli looks at her crop of heat and drought-tolerant beans in Makueni County, Kenya. Developing and sharing climate-smart farming techniques can help communities adapt to climate changes (Photo: Cecilia Schubert/CCAFS, Creative Commons via Flickr)


Getting evaluation evidence into use by decision makers isn't the responsibility of those doing the evaluations. Or so was the view of several practitioners at last week's Global Assembly on evaluating sustainable development.

As someone working in knowledge management, I found this concerning. If evaluators aren't taking responsibility for the use of their products, then who is? By the end of the second day of the three-day conference I was feeling pessimistic about the ability of evaluations to contribute to the changes needed to achieve a safe and sustainable future. But a lively debate on the final day restored my confidence in the evaluation community's commitment to ensuring their work makes a difference, and offered some possible starting points for transforming the world of evaluation.

In the opening session Rob van den Berg, president of IDEAS (the organising body), stressed the need to move beyond testing of ‘what works here and now’ to ‘what will work there and then’. This is a fascinating challenge as it will require a big shift in the focus and audience of evaluation evidence, which will mean significant changes in the way evidence is gathered, presented, and used.

I was most interested in understanding how the results of the evaluations presented at the conference were being used to improve programmes and policies, but the presentations rarely went this far, and comments from participants began to explain why. In one session there was a general consensus that an evaluator's role is to offer recommendations, while the responsibility for implementing those recommendations must lie with the programme, donor or policy-maker. Some important points were made about the independence of evaluations and the value of clearly defined limits, but as someone concerned with the impact of development work I found this conclusion a little too simple for comfort.

Fortunately this wasn't the only perspective among participants, and a well-chaired session by Debazou Yantio of the African Development Bank brought in diverse views on the role of evaluation in promoting evidence-based policy making.

Improving quality

One of the key challenges raised was the quality of evaluations. Participants suggested that evaluation findings are often disregarded because of the low quality of outputs. Many aspects of this challenge were discussed, including the lack of professionalisation of the discipline. As it stands, anyone can call themselves an evaluator, and a CV full of experience does little to guarantee the rigour and usability of outputs. The growing number of master's programmes and professional courses may improve this, but professionalisation is not the only factor affecting the quality of evaluations.

Evaluators often work as independent consultants, without the quality assurance systems of an organisation. And when evaluation teams consist of individuals who have not worked together before, the added pressure of accommodating different working styles can also reduce the quality of work.

Influencing policy

Another challenge is how to meet the needs of policy-makers. Evaluations are not commissioned by the individuals making policy decisions, and those individuals are rarely considered the primary audience for evaluation reports. Participants broadly agreed that, as a result, evaluations are not delivering the types of evidence needed to influence policy.

One of the presenters raised the importance of showing policy-makers the demand for change. He proposed that greater involvement of citizens in the evaluation process could improve uptake (the extent to which policy-makers consider evaluation results) by demonstrating that the public, as voters, supports the recommendations being made.

Parallels with research were also discussed. One suggestion was to learn from the design of research projects to increase the credibility of evaluations, in particular by starting with a review of existing evidence and building on it to strengthen the integrity of the results and recommendations.

Related to this was the issue of responsibility for uptake. Evaluations are usually short pieces of work, which do not allow for the long-term processes needed for policy influence. Such evaluations leave little room for anything more than a light-touch review and a technical report, placing the responsibility for uptake of recommendations solely with the recipient of the report.

Donors are now commissioning multi-year evaluations, which provide the opportunity for new approaches and for processes and products designed with multiple audiences in mind.

Participants in the Global Assembly session discussed the need for communication and uptake of evidence to be built into evaluation design, with a thorough review of knowledge gaps and clear identification of audiences and methods of influence. It was clear that those in the room felt they had an important role to play in this process.

Beyond the conference, evaluators are also thinking about how their work can be used to inform decisions. The ‘evaluations that make a difference’ project goes even further than this and showcases great examples of evaluation findings leading to positive impacts on people’s lives.

My biggest takeaway from the session was the potential for collaboration and knowledge management to help overcome some of the challenges identified.

Whilst some felt that policy-makers will never want to use evaluation evidence and should not be considered a target audience, others believed that evaluations can influence policy, though they will not be the only source of evidence.

Alignment and partnership with other influencing groups could add weight to evaluation results and support policy uptake. There is also a big opportunity to strengthen the communications and dissemination requirements when commissioning evaluations, enabling even short-term evaluations to take responsibility for getting their evidence into use. Changes like these could make a big difference to the uptake and impact of evaluation work, but they will bring new challenges, particularly around the independence and integrity of findings.
