Lessons learned from a year of advocacy impact evaluation

Posted by Claire Hutchings (Monitoring, Evaluation and Learning, Campaigns) and Kimberly Bowman (Global Adviser, Gender and MEL)

28th Feb 2013

Evaluation gurus Kimberly Bowman and Claire Hutchings reflect on what Oxfam has learnt in our efforts to monitor the effectiveness of our advocacy campaigns.

Those of you who regularly read this blog might already be aware of some of Oxfam's efforts to "tackle the effectiveness challenge."  Back in October, Jennie Richmond explained Oxfam's efforts to address learning and accountability through Performance Effectiveness Reviews - and Karl Hughes provided a slightly wonkier explanation of how things were going on the From Poverty to Power blog. Six months later, we are following up with some reflections on the lessons we've learned from a year and a half of advocacy impact evaluations.  (You can read more in this sister post, our contribution to the American Evaluation Association's Advocacy and Policy Evaluation blogging week.)

What we're trying to do: process tracing

Advocacy and campaigning impact evaluation is challenging - but not impossible. While "large n" project effectiveness reviews lend themselves well to statistical approaches, "small n" projects - such as policy-change campaigns - don't.

As part of Oxfam's Performance Effectiveness Reviews, we've developed an evaluation protocol based on a qualitative research methodology known as process tracing. Each year (we're in the middle of year 2) we randomly select about eight campaigns and advocacy projects for impact assessment, which is undertaken by an external evaluator using this protocol.

We're trying to get at the question of effectiveness in two ways:

  • First, we look for evidence that can link the intervention (or campaign) with outcome-level change ("Did the change happen? Was it after the intervention? Does the evidence suggest they're connected?") 
  • Second, we develop and look at other "causal stories" of change, helping us to understand which version of "how change happened" is best supported by evidence - and, if the evidence suggests that the intervention contributed to change, how significant that contribution was.

Confused yet? We're trying to use a simple 1-5 scale to summarize the findings. That said, we recognize that this risks over-simplifying or distorting fairly complex stories of change.

Four big lessons from year one

We've learned a lot from the first year of campaigning and advocacy effectiveness reviews. Here are some of the top lessons:

1. Theories of change are critical - but often hidden, and they take time to unpack.
As a theory-based evaluation methodology, process tracing involves understanding the theory of change underpinning the project or campaign.

In many campaigns, the theory of change (often expressed in a logic model) is not explicit - and it can take time to pull out. To deliver a quality evaluation in the time available, we're looking at drawing out the theory of change before bringing in an evaluation team, so that the evaluators can focus on the outcomes and causal stories.

2. There are usually many levels of outcomes to choose from - and getting the level right is tricky.
Campaigns are notoriously ambitious in their planning, unpredictable in their progress and usually very long-term in nature - all making it difficult to choose which level of outcomes to focus on.

Choose outcomes too close in time to the intervention and the evaluation may be superfluous (and a waste of time and money!). Choose outcomes too far down the theory of change and you run the risk that they either won't yet have materialised in a substantive way, or that it will be excessively challenging to find credible evidence to link the two - at least within the evaluation period.

3. Credible evidence is a judgement call - and yes, we think positive bias is an issue.
Signatures or smoking guns are useful - they help provide near-unequivocal evidence in support of one hypothesis or causal story. When such clarity doesn't exist (most of the time!), different people may have different opinions on what constitutes credible evidence.

We've tried to mitigate this with the usual methods:

  • Working with professional evaluators with strong knowledge of context.
  • Encouraging triangulation of source and method (for example, asking evaluators to speak to appropriate external informants such as bellwethers), which acknowledges the often-positive biases of our own monitoring information.
  • Validating findings with key project stakeholders before finalizing reports.

All that said, it's still a struggle to overcome the charge of (positive) bias so often levelled at qualitative evaluation.

4. This is neither cheap nor simple - it takes time, resources and expertise to do these well.
We have to balance real resource limitations (time and money) with our desire for quality and rigour. Does this mean "doing less, better" - selecting fewer outcomes for evaluation, or undertaking fewer evaluations in the same amount of time? As with all credible impact evaluations, we need to invest adequate time and expertise to do these well. Like most evaluations (and campaigns, for that matter!), we can also expect many practical implementation challenges.

As we near the end of our second year working with this protocol, we're looking to continue to reflect, review and refine. We welcome your contributions to help us strengthen our approach. Use the comments below, read through the comments on our recent aea365 blog post, or email us directly.

Read more

Read our Project effectiveness reviews.
