
Who told you that? Inclusion, bias and humanitarian evidence synthesis

Posted by Dr Ellie Ott, Humanitarian Evidence Programme and Communications Manager

4th May 2016

Malakal IDP camp, South Sudan, August 2014. Credit: Simon Rawles/Oxfam

OPINION:

Dr Ellie Ott, Humanitarian Evidence Programme and Communications Manager, reflects on the challenges and opportunities of evidence synthesis in the humanitarian sector.

The Humanitarian Evidence Programme, a UK Aid-funded partnership between Oxfam and the Feinstein International Center at Tufts University, aims to strengthen the evidence base of humanitarian policy and practice through synthesis and research uptake. To this end, the programme recently commissioned eight evidence syntheses looking at key humanitarian issues(1). As the process nears completion, we organised an event, hosted by the UK Department for International Development, to discuss the challenges and opportunities the process has brought to light.

One major question that emerged was: when you bring together literature, what evidence is being included? And whose voices and issues are being left out?

The commissioned evidence syntheses follow systematic review principles: eight syntheses are underway, and six have published protocols. These 'protocols' thoroughly outline why the questions are important and how the full reviews will proceed in a transparent way that tries to minimise bias. This systematic review design, which includes a thorough search of the literature, pre-determined and justified inclusion criteria, and documentation of all decisions, helps teams form conclusions with greater confidence.

As Mukdarut Bangpan of the EPPI-Centre pointed out, systematic reviews have the added benefit of including evidence from more than one study, both to build a more complete picture and to identify what is context-specific. Even when looking at traditional literature reviews that combine multiple studies, I am left wondering: how was the literature chosen and how were conclusions reached? Was the literature found through friends, mentors, and 'big names'? Are authors selecting studies, and results within studies, to match their assumptions? With the protocol and documentation, systematic reviews enable anyone to know what was planned and why decisions were made. Review teams found it helpful to undertake this process in part to challenge their own biases. By its nature, then, this process removes some of the politics behind whose evidence is included.

There is politics in research and in defining terms that can prioritise the questions of funders and researchers over those of crisis-affected populations
Still, rigorous evidence syntheses are human processes, and the literature that exists is driven by what research is undertaken and reported. Katharine Williamson of Save the Children UK discussed how the review on the protection of unaccompanied and separated children has found an emphasis in the literature on AIDS-affected children, likely driven by funding for AIDS-related research. Most of our review teams decided to exclude studies from high-income countries, as these contexts are often vastly different from those of most large humanitarian responses in low- and middle-income countries (LMICs), and the volume of literature from high-income countries would overwhelm that from LMICs. But even in LMICs, who determined what review question would be asked and how it would be asked? There is politics in research and in defining terms that can prioritise the questions of funders and researchers over those of crisis-affected populations.

For their own evidence syntheses, the teams also had to decide exactly what review question to ask and what information studies needed to provide in order to be included and synthesised. The shelter team did an initial scoping report and settled on the topic of shelter self-recovery. Victoria Maynard from University College London and Habitat for Humanity GB discussed how they ended up asking: where is there interest but no evidence, such that a synthesis would not be useful? Where is there evidence but little interest, such that a synthesis would not get used? And, to decide on the synthesis question, where is there evidence AND interest?

There are other important questions for the field that call for generating interest in the topic or undertaking more primary research. Still, the teams found that even when evidence is available, the sparse methodology sections (if they exist) of primary studies can make the findings hard to use. This is especially the case with practitioner evaluations: whom did they interview? How many? What did they ask? How did they find them? If authors did not include this information, the review team excluded the study, as you cannot tell the quality of the evaluation or the basis for its findings.

Even with the limitations of systematic reviews and what evidence existed and could be counted in the review, there was an overwhelming message from the panel on the importance of doing this process. One panellist remarked that there can be a fatalism about whether there's enough evidence in the humanitarian field or whether systematic reviewing is a good tool, but we need to pause and appreciate what the sector does know. This process can serve to challenge the humanitarian field as to what evidence collection is funded and the quality of evaluations being done.

It helps us pause and realise, as in the ALNAP report Insufficient evidence? The quality and use of evidence in the humanitarian sector, that 'the failure to generate and use evidence in policy and response makes humanitarian action less effective, less ethical and less accountable.' Personally, it reminds me that we need to go beyond good intentions: to strive to bring together evidence in a more objective and transparent process, to challenge our assumptions, and in doing so to help populations affected by humanitarian crises get back on their feet.

(1) The humanitarian evidence syntheses are on the topics of shelter self-recovery; child protection; mental health and psychosocial support; identification of affected populations in urban settings; child malnutrition; water, sanitation and hygiene (WASH) in disease outbreaks; pastoralist livelihoods; and market support interventions.


