How do you measure impact?
I have spent 35 years creating and running three charitable organisations, and one question has been constant: are we making a difference, and how can we demonstrate our impact? You would have thought that after all that time, and with Hubbub twice winning a Charity of the Year award, I would have the answer. I don't.
Measuring impact is incredibly difficult because it is virtually impossible to predict where change will occur. This was brought home to me when we ran an environmental behaviour change campaign for a large corporation aiming to reduce waste in the office. Unknown to us, one of the attendees oversaw the legal contracts for suppliers. The training prompted them to review all of those contracts and embed stronger sustainability requirements into the agreements. A massive win, but a totally unexpected one.
Understandably, funders and clients don't take too kindly to being told that we can't predict the exact impact of our activities, or that they should instead trust our instinctive belief that there is a good chance of success. They want to see evidence and to have a sense of order. There are usually two fall-back positions: the first is to request a Theory of Change model, and the second is to appoint an independent assessor, preferably an academic institution. I have developed a healthy cynicism about both.
I completely understand the idea behind Theory of Change models, and they work well for projects with clearly stated objectives and intended outcomes. The approach struggles, however, when a project seeks to deliver complex systemic change through a variety of inter-related strands of activity. In these situations, standardised templates cannot accommodate the messy and evolving world on which they are trying to impose order. The drive of these models to capture robust data runs the risk that the research tail ends up wagging the project dog. I have seen this in practice: data capture becomes prohibitively expensive, quashes innovation, builds suspicion amongst participants and, at worst, distorts the delivery process. Even with the most meticulously designed process, unforeseen external events can devalue the data that has been collected.
Partnering with academic institutions also poses challenges. Their requirement to produce evidence that is academically robust and able to withstand peer review would seemingly make them ideal evaluation partners. The reality is more complex. Delivering impact in the real world often involves rapid small-scale testing and fleet-footed changes of direction. This doesn't always sit well with the academic process, which typically demands a lengthy review of background evidence and a formalised approach to capturing data, driven by academic requirements rather than by realities on the ground. The tendency of academic bodies to build a significant overhead cost into their pricing also makes them less attractive, and less accessible to smaller, grassroots organisations.
What, then, is the answer? Belatedly in my career, I have found an approach that seems to be working, through the ongoing evaluation of Enrich the Earth. Five things have made this a different process.
The approach of the main funder, the Esmée Fairbairn Foundation, has been of fundamental importance. Unlike many grant givers, they recognise that delivering impact is an evolving process and that it is not possible to predetermine the most effective solutions. Instead, they have been meticulous in their insistence on three measurable outcomes and clear in their requirement that we learn from things that haven't gone to plan as well as from successes. This razor-sharp focus, alongside the desire for transparency, has given us a template from which it is hard to deviate, while leaving us the confidence to be innovative and experimental.
We picked evaluation consultants who were willing to work collaboratively with us at every stage of the project, from concept development through to delivery. They didn't arrive with a predetermined Theory of Change model; instead, they used their experience and knowledge to help us create appropriate methodologies for collecting the data we require.
Regular meetings with the evaluation team have ensured that we remain focussed on the three outcomes and don't drift into interesting but slightly irrelevant areas – an ongoing risk when you have an inquisitive and passionate team.
They have held our feet to the fire when things have been quietly dropped or new approaches have been introduced, asking us to explain why the changes were made and to provide evidence, which is being captured in a learning log. This will prove invaluable when we come to assess why the project may have deviated from the original delivery model.
They have been highly sensitive to the limitations of a tight budget and to the requirement that data capture doesn't distort the way the project is delivered or undermine trust with participants.
Project delivery is still ongoing and the final evaluation report won't be completed until late summer, so there could well be bumps in the road over the coming months. But, for the first time, I believe we have a process that is truly adding value and that will result in an honest assessment of our impact.