QuIP and other approaches

A wide variety of other approaches to impact evaluation are in use, including qualitative, quantitative, participatory and mixed methods (e.g. see www.betterevaluation.org). The QuIP draws particularly on qualitative approaches, in the sense that it deals primarily with words rather than numbers, derived from open narrative text rather than responses to closed questions. Rather than drawing on its own distinctive body of theory, the QuIP is the product of a pragmatic, eclectic and iterative learning-by-doing approach to methodological development that borrows from several other approaches, as discussed below.

Realist Enquiry

Realist Evaluation (RE), with its rallying cry of “what works for whom in what circumstances” (Pawson, 2013:15), has many obvious points of affinity with the QuIP. At a philosophical level the QuIP likewise occupies an intermediate position between aspiring to contribute to the universal truths of positivist science and a constructivist denial that any reality can be established independently of the beholder. Truth is out there, but hidden behind perceptions. Our always imperfect groping towards it entails protracted confrontation of theory with multiple and often inconsistent sources of evidence, kept honest by openness and “organised distrust” (Pawson, 2013:18). This reflects the complexity of the world, which Pawson (2013:33) depicts as encompassing variation in volitions, implementation, context, time, outcomes, rivalry and emergence (“VICTORE”). Managing this is only possible with the help of explanatory theory. This includes the theories of change that inform adaptation of QuIP field instruments and development of a sampling strategy at the design stage. It is also relevant to inductive data coding, analysis and interpretation. In contrast, the QuIP’s emphasis on blindfolding appears to depart from the more transparent process of reciprocal comparison of theories that informs at least some traditions of realist interviewing (Manzano, 2016).

The QuIP’s openness to identifying multiple and distinct pathways linking X and Z to Y also fits well with RE’s stress on distinguishing multiple and distinct CMO (context, mechanism, outcome) configurations, where X and Z can be equated with Contexts, Y can be linked to Outcomes, and the central evaluative task is to unmask the cognitive Mechanisms (in the heads of respondents) that link the two together. The potential for the QuIP to be used as part of a mixed method approach also resonates with RE. Pawson (2008:19) suggests that “as a first approximation… mining mechanisms requires qualitative evidence, observing outcomes requires quantitative [data] and canvassing contexts requires comparative and sometimes historical data.” Indeed, one response to this is to classify the QuIP as a “mechanism miner” that should always be part of a mixed evaluation strategy.

Feasibility and cost-effectiveness have also been important design criteria, as has the ethical commitment to give effective voice to the concerns of the primary intended beneficiaries of development activities. However, the QuIP departs from many participatory approaches to evaluation in aiming primarily to generate evidence that is credible and useful to people not closely involved ‘on the ground’ in the activities being assessed. To date the QuIP has also not involved respondents directly in analysis and interpretation of the data as a mechanism for promoting empowerment (in contrast to other methods, including Sensemaking, Most Significant Change and PaDev, for example). This is, however, a component that could be expanded in future (Copestake et al., 2016).

Contribution Analysis

The QuIP has a strong affinity with Contribution Analysis (CA) as described by Mayne (2012), as illustrated by the comparative table referenced below. Mayne (2012:273) distinguishes between attribution (“… used to identify both with finding the cause of an effect and with estimating quantitatively how much of the effect is due to the intervention”) and contribution, which asks whether “… in light of the multiple factors influencing a result, has the intervention made a noticeable difference to an observed result and in what way?” Taking “observed results” to refer to changes measured through routine monitoring, the QuIP conforms to this definition of contribution. But as the basis for identification of causal chains it also conforms to the first part of Mayne’s definition of attribution. Indeed, as an input into systems modelling and simulation it can also support some quantitative estimates of impact. By systematically reviewing evidence against project goals and theory the QuIP resonates with CA in aiming to serve a “confirmatory” purpose. But by asking blindfolded and relatively goal-free questions it also aims to serve as a more open-ended or “exploratory” reality check (Copestake, 2014). See more in this comparative table: QuIP and Contribution Analysis compared.

Process Tracing

The QuIP can be viewed as one way of gathering additional evidence to test prior explanatory theory. Unprompted explicit positive evidence of attribution generated by the QuIP can be likened to “smoking gun” evidence of impact in a particular context-mechanism-outcome (CMO) configuration, significantly increasing confidence in the applicability of the change theories behind the intervention. Positive implicit evidence is more akin to “hoop test” evidence: its presence is less conclusive, but its persistent absence would cast doubt on whether the intervention is working as expected (Punton and Welle, 2015). Viewed as a process of “Bayesian updating” (Befani and Stedman-Bryce, 2016), the accumulation of evidence can also potentially be used to judge whether the number of interviews and focus groups is sufficient. For example, if it is feared that rising profitability of cash crops might result in children being taken out of school to work on them, and if prior expectations of this are neutral, then a judgement can be made on how many negative results (i.e. interviews that do not mention such an effect) would be sufficient to assuage the concern, as sketched in the example below. In this and other instances, the role QuIP studies can play in process tracing is strongly enhanced by the strength of complementary evidence of change in key outcome variables, and this reinforces the argument for nesting use of the QuIP within a mixed method evaluation strategy. The table below further compares the QuIP with process tracing by relating it to ten “best practices” set out by Bennett and Checkel (2015:261). The QuIP also chimes with their argument for greater transparency with respect to the procedures used to collect and analyse evidence, and their call for a “(partial) move away from internally generated practices to logically derived external standards” (p.266), without at the same time removing entirely a more exploratory “soaking and poking” of available evidence. See more in this comparative table: QuIP and Process Tracing compared.
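To illustrate the logic with purely hypothetical numbers (not taken from any QuIP study or from Befani and Stedman-Bryce), the sketch below applies Bayes’ rule to the cash crop example: starting from a neutral prior, each interview that fails to mention children being withdrawn from school lowers the probability that the feared effect is actually occurring, given an assumed chance that an affected respondent would mention it.

```python
# Hypothetical sketch of Bayesian updating for judging sample sufficiency.
# Question: after how many interviews that do NOT mention children being taken
# out of school can we be, say, 95% confident that the feared effect is absent?

def p_effect_present(prior: float, sensitivity: float, n_silent: int) -> float:
    """Posterior probability that the feared effect is occurring after
    n_silent interviews fail to mention it.

    prior       -- prior probability the effect is occurring (0.5 = neutral)
    sensitivity -- assumed probability that any one interview would mention
                   the effect if it really were occurring
    n_silent    -- number of interviews so far that do not mention it
    """
    p_silence_if_present = (1 - sensitivity) ** n_silent  # likelihood of silence given the effect
    p_silence_if_absent = 1.0                             # silence is certain if there is no effect
    numerator = prior * p_silence_if_present
    return numerator / (numerator + (1 - prior) * p_silence_if_absent)

prior, sensitivity = 0.5, 0.3  # neutral prior; assume a 30% chance each affected respondent mentions it
for n in range(1, 11):
    print(f"after {n:2d} silent interviews: P(effect present) = {p_effect_present(prior, sensitivity, n):.2f}")
# Under these assumptions the posterior falls below 0.05 after nine silent interviews,
# suggesting a sample of roughly that size would be enough to assuage the concern.
```

Changing the assumed prior or sensitivity shifts the required number of interviews, which is precisely the kind of explicit, contestable judgement that Bayesian updating makes visible.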

Outcome Harvesting

Outcome Harvesting (OH) provides another interesting point of comparison: more so, indeed, than its name (which cleverly dodges the words impact and evaluation) implies. It can be defined as “an evaluation approach that does not measure progress towards predetermined outcomes, but rather collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change” (UNDP, 2013:5).

This brief comparison draws primarily on a summary of the approach produced for the Ford Foundation in 2012 by Ricardo Wilson-Grau (the main originator of OH) and Heather Britt. A striking similarity with the QuIP is the emphasis on garnering useful evidence of change and its causal drivers by working back from outcomes to the activities of the commissioning organisation (referred to as the “change agent”), rather than forward from the activities it wishes to assess. Second, they also emphasise the usefulness of this approach for assessing outcomes in complex contexts where many factors and combinations of factors may lead to many outcomes (positive and negative, anticipated and unanticipated), and where relations of cause and effect are not fully understood. Third, and linked to this, OH shares with the QuIP an emphasis on the usefulness of gathering credible evidence of contribution, without necessarily being able to estimate precisely how much of a given outcome can be attributed to a specified activity. Indeed, implicit in both approaches is a recognition that aspirations to measure change and attribute outcomes too precisely may even be an obstacle to a broader and more reliable assessment of causal processes associated with the activities being assessed.

These commonalities with the QuIP, along with more detailed differences, can be elaborated by looking in turn at the six iterative steps of Outcome Harvesting: (1) Design, (2) Identification and drafting of outcome descriptions, (3) Engagement with change agents in finalising these, (4) Substantiating outcome descriptions through consultation with independent agents, (5) Analysing, interpreting and making sense of the evidence, (6) Engagement with potential users of the findings. See more in this comparative table: QuIP and Outcome Harvesting compared.

 

This brief comparison suggests that the values and philosophy underpinning Outcome Harvesting and the QuIP are very similar. In aspiring to produce evidence that is credible and useful to actors in complex contexts, both implicitly counsel against the pursuit of universal truths and perfectionism (including spurious precision, or what Manski calls “incredible certitude”). Both also recognise the limitations of having to rely on the cooperation and perceptions of stakeholders in any change process, but appreciate the ethical as well as practical benefits of eliciting and comparing multiple perspectives. Both distinguish between evidence of change (‘outcomes’) and evidence of drivers of change, and favour starting with the first and working back to the second.

There are also significant differences. While OH is more detailed and prescriptive than Outcome Mapping (see footnote 13 of UNDP, 2013), it is significantly broader in scope than the QuIP – e.g. in addressing recurring monitoring needs alongside the need to assess the impact of specific interventions. The QuIP is also more narrowly focused on securing the feedback of intended beneficiaries, in a way that is more transparent and open to auditing by third parties. OH, in contrast (and like process tracing), appears more tailored to assessing the contribution of individual efforts: e.g. in advocacy, campaigning and policy engagement.

Overall, the key point is perhaps that these are all mutually affirming approaches belonging to a broad family of more qualitative and interpretive approaches to assessing change. For all the confusion of terminology and acronyms, there is much to be gained from the existence of a plurality of approaches. Attempts to list, review and classify different approaches more systematically for different fields can be useful, but if we accept the benefits of practice that is attuned to diverse, complex and evolving needs then we should neither expect nor hope that any overarching review will ever be definitive.

References

Befani, B. and O’Donnell, M. 2017. Choosing appropriate evaluation methods: a tool for assessment and selection. London: Bond. https://www.bond.org.uk/resources/evaluation-methods-tool

Befani, B. and Stedman-Bryce, G. 2016. Process tracing and Bayesian updating for impact evaluation. Evaluation, 22(4), October.

Bennett, A. and Checkel, J. 2015. Process tracing: from metaphor to analytic tool. Cambridge: Cambridge University Press.

Copestake, J. 2014. Credible impact evaluation in complex contexts: Confirmatory and exploratory approaches. Evaluation, 20(4), 412-427. doi: 10.1177/1356389014550559

Copestake, J., Allan, C., van Bekkum, W., Belay, M., Goshu, T., Mvula, P., Remnant, F., Thomas, E. and Zerahun, Z. 2016. Managing relationships in qualitative impact evaluation to improve development outcomes: QuIP choreography as a case study. Bath: Bath SDR Ltd, working paper. Accessed from qualitysocialimpact.org on 31 August 2016.

Copestake, J. and Remnant, F. 2014. Assessing Rural Transformations: Piloting a Qualitative Impact Protocol in Malawi and Ethiopia. In: K. Roelen and L. Camfield, eds. Mixed Methods Research in Poverty and Vulnerability: Sharing ideas and learning lessons. Oxford: Routledge. Also available at: http://www.bath.ac.uk/cds/publications/bpd35.pdf

Gawande, A. 2008. Better: a surgeon’s notes on performance. London: Profile books. See also The Positive Deviance initiative, basic field guide to the positive deviance approach. www.positivedeviance.org.

Manzano, A. 2016. The craft of interviewing in realist evaluation. Evaluation, 22(3), 342-360.

Mayne, J. 2012. Contribution analysis: coming of age? Evaluation, 18(3), 270-280.

Pawson, R. 2013. The science of evaluation: a realist manifesto. London: Sage.

Punton, M. and Welle, K. 2015. Straws-in-the-wind, hoops and smoking guns: what can process tracing offer to impact evaluation? Centre for Development Impact, Practice Paper No. 10, April.

Spaggiari, L. 2016. Guidelines on outcomes management for investors: guidelines for integrating outcomes measurement, analysis and reporting into operations of inclusive finance investors, asset managers and other funders. European Microfinance Platform and Social Performance Task Force. https://sptf.info/images/Guidelines-on-Outcomes-Management-for-Investors.pdf

UNDP. 2013. Discussion paper: innovations in monitoring and evaluating results. November. Edited by Jennifer Colville.

Wilson-Grau, R. and Britt, H. 2013. Outcome harvesting. Produced for the MENA Office of the Ford Foundation. May 2012 (revised November 2013).