News & Views

A family resemblance: Outcome Harvesting and QuIP compared

The proliferation of different methods and tools for organisational monitoring, evaluation, learning and accountability (MELA) can be a source of confusion and frustration. But the opportunity to compare and contrast their parallel evolution, and their strengths and weaknesses in different contexts, can also be illuminating and useful. This has certainly been the case for me over the last five years of designing, piloting and using the QuIP (Qualitative Impact Protocol). Previous blogs and papers have compared and contrasted it with Process Tracing, Contribution Analysis and Realist Evaluation. We’ve also benefitted from dialogue with those involved in developing Goal Free Evaluation, Participatory Assessment of Development (PADev) and Participatory Impact Assessment for Learning and Accountability (PIALA). Outcome Harvesting (OH) provides another interesting point of comparison: more so, indeed, than its name (which cleverly dodges the words ‘impact’ and ‘evaluation’) implies. It can be defined as “an evaluation approach that does not measure progress towards predetermined outcomes, but rather collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change” (UNDP, 2013, p.5).

This brief comparison draws primarily on a summary of the approach produced for the Ford Foundation in 2012 by Ricardo Wilson-Grau (the main originator of OH) and Heather Britt. A first striking similarity with QuIP is the emphasis on garnering useful evidence of change and its causal drivers by working back from outcomes to the activities of the commissioning organisation (referred to as the “change agent”), rather than forward from the activities that it wishes to assess. Second, both emphasise the usefulness of this approach for assessing outcomes in complex contexts, where many factors and combinations of factors may lead to many outcomes (positive and negative, anticipated and unanticipated), and where relations of cause and effect are not fully understood. Third, and linked to this, OH shares with QuIP an emphasis on the usefulness of gathering credible evidence of contribution, without necessarily being able to estimate precisely how much of a given outcome can be attributed to a specified activity. Indeed, implicit in both approaches is a recognition that aspiring to measure change and attribute outcomes too precisely may even be an obstacle to a broader and more reliable assessment of the causal processes associated with the activities being assessed.

These commonalities with QuIP, along with more detailed differences, can be elaborated by looking in turn at the six iterative steps of Outcome Harvesting: (1) Design, (2) Identification and drafting of outcome descriptions, (3) Engagement with change agents in finalising these, (4) Substantiating outcome descriptions through consultation with independent agents, (5) Analysing, interpreting and making sense of the evidence, (6) Engagement with potential users of the findings. See the table below.

[Table: the six steps of Outcome Harvesting compared step by step with the QuIP]

Overall, this brief comparison suggests to me that the values and philosophy underpinning OH and QuIP are very similar. In aspiring to produce evidence that is credible and useful to actors in complex contexts, both implicitly counsel against the pursuit of universal truths and perfectionism (including spurious precision, or what Manski calls “incredible certitude”). Both also recognise the limitations of having to rely on the cooperation and perceptions of stakeholders in any change process, but also appreciate the ethical as well as practical benefits of eliciting and comparing multiple perspectives. Both distinguish between evidence of change (‘outcomes’) and evidence of drivers of change, and favour starting with the first and working back to the second.

There are also significant differences. While OH is more detailed and prescriptive than Outcome Mapping (see footnote 13 of UNDP, 2013), it is significantly broader in scope than QuIP – e.g. in addressing recurring monitoring needs alongside the need to assess the impact of specific interventions. QuIP is also more narrowly focused on securing the feedback of intended beneficiaries, in a way that is more transparent and open to auditing by third parties. OH, in contrast (and like Process Tracing), appears more tailored to assessing individual efforts, e.g. in advocacy, campaigning and policy engagement.

Of course, generalising about relative strengths and weaknesses is dangerous, not least because niceties of methodological design often turn out to be less important than how carefully and skilfully different approaches are actually implemented. More specific features of the QuIP, including blindfolding, the separation of data collection and analysis, and systematic coding, were introduced explicitly to mitigate the potential for bias, a criticism frequently levelled at qualitative impact assessment methods. OH has less to say in this regard, and perhaps therefore less to offer, but is thereby more open. For this reason it would be more accurate to describe QuIP as a form of OH than the other way round, and as a narrower approach that could be incorporated into an OH. Overall, the key point is perhaps that they are mutually affirming approaches belonging to a broad family of more qualitative and interpretive approaches to assessing change.

This takes me back to the point with which this blog opened: that for all the confusion of terminology and acronyms, there is much to be gained from the existence of a plurality of approaches to assessing change. Attempts to list, review and classify different approaches more systematically for different fields can be useful – e.g. see Befani (2017), Spaggiari (2016) and of course the BetterEvaluation website. But if we accept the benefits of practice that is attuned to diverse, complex and evolving needs, then we should neither expect nor hope that any overarching review will ever be definitive.

References

Befani, B. with M. O’Donnell (2017) Choosing appropriate evaluation methods: a tool for assessment and selection. London: Bond. https://www.bond.org.uk/resources/evaluation-methods-tool

Spaggiari, L. (2016) Guidelines on outcomes management for investors: guidelines for integrating outcomes measurement, analysis and reporting into operations of inclusive finance investors, asset managers and other funders. European Microfinance Platform and Social Performance Task Force. https://sptf.info/images/Guidelines-on-Outcomes-Management-for-Investors.pdf

UNDP (2013) Discussion paper: innovations in monitoring and evaluating results. November. Edited by Jennifer Colville. [Jennifer.colville@undp.org]

Wilson-Grau, R. & H. Britt (2013) Outcome Harvesting. Produced for the MENA Office of the Ford Foundation, May 2012 (revised November 2013).
