M&E Paper 8
“More of an art than a science”: Challenges and solutions in
monitoring and evaluating advocacy
Sarah Rose, February 2014
Introduction

How to monitor and evaluate advocacy work as part of development interventions is a significant challenge faced by many advocates. So what are some of the possible solutions?
Building on a series of papers, conferences, training and learning from INTRAC consultancy work, this paper shares learning from INTRAC’s most recent monitoring and evaluation (M&E) workshop, held in 2013. At this event, a group of advocacy and M&E professionals came together to discuss some of the challenges they face and to share possible solutions from their organisations. In particular, we draw on four case studies presented at this workshop.
This paper offers eight key points that organisations should consider when designing an advocacy M&E system, as well as an annotated list of resources and reading materials. INTRAC uses these as source materials for training and consultancy work.
We do not go into definitions of advocacy or what advocacy might include, as this has been covered in other INTRAC papers.¹ For information on these areas see the reading list below.
What is the significance of this problem?
There is wide consensus that organisations are increasingly incorporating advocacy and change-related strategies into their theories of change, recognising that pure service delivery can only go so far in tackling poverty. Donors are willing to fund this work, but with the move towards greater accountability, they are placing greater demands on organisations to demonstrate effectiveness.
Many organisations are struggling with the dual challenge of adapting traditional MEL (monitoring, evaluation and learning) systems for the purposes of advocacy and encouraging busy campaigners to stop and reflect on their advocacy work.
Over the last few years there have been a number of publications on this topic, including INTRAC’s paper entitled ‘Tracking progress in advocacy: Why and how to monitor and evaluate advocacy projects and programmes.’ All have made the point that assessing the impact of advocacy is notoriously difficult due to the many actors involved and changing landscapes.
Many of the same challenges are being raised again and again, and while there is greater recognition that donor expectations are increasing, few solutions have been put forward.
Perhaps one of the reasons we are still struggling is because we are expecting a universal tool that can be adapted to different audiences. There is no such tool. The reality is that evaluating advocacy is hard. There is no magic bullet, and systems are dependent on context and organisational type. It is more of an art than a science, and advocates need to approach it logically and with a good understanding of their organisations’ capacities and willingness.

¹ Please see INTRAC’s paper ‘Tracking progress in advocacy: Why and how to monitor and evaluate advocacy projects and programmes’ by Maureen O’Flynn, October 2009.
What are the continuing challenges?
At INTRAC’s 2013 M&E workshop, participants from a wide range of organisations, including Save the Children, The Waterloo Foundation, Amnesty International, Oxfam, Bond, BBC Media Action and Norwegian Church Aid, discussed the main challenges they continue to face and came up with the following list:
How are others dealing with the problem?
As part of the conference, four organisations presented examples of how they are responding to some of these challenges. Detailed case studies can be found on the monitoring and evaluation page of the INTRAC website (http://www.intrac.org/pages/en/monitoring-evaluation.html). In summary, here are some of the ways these organisations are dealing with the challenges:
Save the Children International has embedded an advocacy monitoring tool within its annual reporting system for country offices and members. It allows the organisation to collect a broad range of data from numerous countries, including the type of advocacy staff have conducted and the results they have achieved. This makes it an effective knowledge management tool.
Global Witness is an international campaigning organisation which has developed a planning, monitoring, evaluation and learning (PMEL) system that focuses on clarity in campaign logic and the impact of campaigns. The initial focus was on an organisation-wide planning week where campaigners presented their campaign plans and received feedback from their peers and senior leaders. The impact monitoring element of the system has been in place for a year and includes regular impact reporting and campaign impact logs, which record progress against specific indicators of change in real time. The strong focus on outcomes is critical as campaign plans often have to change and adapt to the dynamic external context. The system allows campaigners to explain the value and relevance of their activities and to tell the story of change. The system continues to be tweaked in order to respond to internal and external feedback and is tailored to the organisation rather than applying a best practice blueprint.
Amnesty International uses a theory of change approach to assess its human rights advocacy.
The organisation has identified four interrelated dimensions to outline the broad areas of change it expects from its advocacy work. These dimensions are: changes in people’s lives, changes in policy, changes in accountability, and changes in activism and mobilisation. For each dimension, Amnesty adapted a theory of change to determine how it might see change happen and what indicators it might use. The organisation also identified 10 meta-indicators relating to outcomes in three stakeholder groups: people whose rights are being violated or are at risk of being violated, target decision makers, and key channels of influence.
The organisation has simplified M&E jargon by asking questions such as ‘What is the change that you want to see?’ and ‘What needs to happen to ensure this change?’ Project teams use a web-based project database to input their expected outcomes, the strategies to achieve them, and the indicators that have been developed. The teams are required to report through this database every six months. The organisation then selects certain campaigns for a more focused analysis.
Climate and Development Knowledge Network (CDKN) has applied the principles and ideas of outcome mapping and married this with dimensions of change to report on the effectiveness of climate negotiation support. It identified five dimensions of change related to CDKN's theory of change for this support and used these as proxies of outcome challenges. CDKN then set progress markers at ‘expect to see’, ‘like to see’ and ‘love to see’ levels against each dimension, and assessed change during and after climate talks, recognising that markers may change. It has triangulated evidence from a variety of informal and formal sources to determine whether change has been observed against each progress marker. Since the conference CDKN has published a paper on this approach.²
What is the range of possible solutions?
An overarching theme coming out of the 2013 workshop was that organisations need to be honest about why they are striving for a solution to these problems: is it to improve and learn, or is it to be accountable? If organisations are clear on these questions, designing M&E systems becomes much easier. A handful of practical solutions include the following:

² ‘Supporting international climate change negotiators: A monitoring and evaluation framework’ (working paper), Climate and Development Knowledge Network, November 2013.
Organisational culture and learning

Critical appraisal

The success of an advocacy MEL system depends on the value staff place on it, and getting it right means being realistic about an organisation’s type and culture. In the case study from Global Witness, it was clear that the organisation was made up of busy activists, so asking staff to fill out time-consuming spreadsheets was not going to work. Instead, the PMEL system was designed around what staff do best: defending what they believe in. Learning was focused around a planning week when campaigners present their campaign logic and invite critical appraisal.
In established advocacy-focused organisations like Global Witness, it may be difficult to get staff to write things down; however, they do not need convincing that advocacy work is valuable. In traditional development-focused international non-governmental organisations (INGOs) that have previously concentrated on service delivery, there may be a need to place more emphasis on justifying advocacy work and reporting on its success. These organisations may not be ready for the critical appraisal mentioned above.
Small, incremental changes
If you are designing an advocacy M&E system, start by looking at the culture of your organisation and how you can make small, incremental changes. One simple step would be getting the person in charge of M&E to attend advocacy planning meetings. Another would be to start each advocacy planning cycle by looking back at previous projects.
Begin by capitalising on campaigners’ natural tendency to adjust and change strategies based on instinct, and find ways of recording these changes. Reinforce the message that good advocacy includes good M&E in real time, and build in incentives to help change behaviour where there is not yet a culture of reporting. For example, these could include resource allocation, reflection meetings, and praise for learning and adapting strategies based on effective monitoring.
Think carefully about the jargon you use. Campaigners are activists and use the language of power, change, and rights, so the language of M&E does not always mean much to them. Rather than using terms like outputs and indicators, start talking about the change you want to see and the evidence that that change is happening. This simple alteration to reporting formats could make a big difference to the quality of information you receive.
Design reporting structures from the bottom up
Lastly, consider working with staff and partners to design reporting structures from the bottom up.
By asking stakeholders what they want or can report on and explaining donor requirements, you may develop new ways of producing robust analysis rather than creating additional systems that burden already busy staff.
Campaign logic and setting indicators

The majority of campaigning and advocacy work is designed using instinct and assumption; campaigners have a gut feeling about what may work and what issues to tackle. This makes them good campaigners, and it also makes M&E difficult but not impossible.
In the case studies from Amnesty International and CDKN, both organisations started by setting out a theory of change, identifying the possible dimensions of change they may see and then looking at possible progress markers.
Mapping

A possible starting point for an M&E system is to encourage campaigners to think through the logic of how they see change happening. During the strategy planning phase of a campaign, ask your staff to visually map out the various pathways that change could take and challenge assumptions at each point. The LFA (Learning for Action) Group (referenced in the reading list) suggests mapping out an advocacy roadmap, which could be a helpful tool for this process.
By mapping out the various pathways change can take during the planning phase, staff are also able to develop possible indicators of change. This is preferable to developing an M&E system as the last step in the planning process, when staff are tired and tempted to focus mainly on output indicators.
This theory of change approach also makes the system more flexible and it can be adapted and changed to individual contexts and purposes. It can also be translated into logframes but it does need to be repeated each time planning is undertaken.
Assessing your contributions and capturing intelligence

The challenge of attribution continues to be raised regularly and draws attention away from designing effective M&E systems. The reality of advocacy work is that no organisation works in a vacuum, and it is unrealistic to try to prove attribution, particularly within the short time frames that many organisations work to. It is more helpful to focus on providing evidence for your assumptions.
If you feel you have made a significant contribution to change, say so – but be prepared to back it up with credible evidence and be prepared to be challenged on your assumptions.
One way to provide credible evidence is to triangulate data, which means using multiple sources of information that lead to the same conclusion. The case study from CDKN mentions how the organisation collects evidence from a variety of formal and informal sources to determine whether changes have been observed against progress markers.
Encourage advocates to keep ‘scrapbooks’ that note both formal evidence, such as parliamentary records or media articles, and informal evidence, such as anecdotes or gossip from meetings. Try to encourage inputs from people who are neither targets nor allies, but independent experts.
Transparency International UK invites external professionals to their internal campaign reviews to challenge their assumptions and review evidence.
You can try applying contribution analysis and process tracing methodologies retrospectively.
Process tracing aims to shortlist a series of evidenced explanations for an outcome, rule out competing explanations, and estimate the level of influence each remaining explanation may have had. Oxfam has a thorough overview document of this approach, referenced below.