News & Views

Dancing around the bias problem

Whenever James Copestake or I present the QuIP methodology to a new audience, we are always prepared for the first question: “Is the idea of blinding the field researchers ethical?” Having now danced around the ‘means to an end’ response for a couple of years, we are pleased to publish our latest working paper on this very subject: Managing relationships in qualitative impact evaluation to improve development outcomes: QuIP choreography as a case study.

Evaluation choreography – or who knows what, when, through the process of impact evaluation – has an important influence on the credibility and usefulness of findings. The credibility question was uppermost in our minds when the Assessing Rural Transformations (ART) team sat around a table to design the QuIP approach in 2013. The aim was to collect rich, qualitative data directly from intended beneficiaries about the most significant changes in their lives, but we challenged ourselves to try to mitigate a long list of potential biases – evaluator, respondent and analyst – drawing on White and Phillips’ (2012) lists.

Our response to some of the evaluator biases was to ensure that all QuIP studies use only local field researchers who speak the local language, and who work from lists of respondents provided by the Lead Researcher (usually based on purposive, randomised sampling), no matter how hard those respondents are to reach. Photographs sent by a team on a recent study in Ethiopia show how challenging access to some areas was during the rainy season, but they persevered!

[Photo: field researchers helping each other to cross a flooded river. Caption: Accessing the least accessible places]

Respondent bias was very important in the context of the ART project, where causal inference may rely on people’s ability to assess attribution. Biases to consider include confirmation bias (a tendency to exaggerate the significance of the project under discussion), courtesy bias (telling interviewers what they want to hear) and social acceptability bias (playing to a prevailing consensus). Field researchers can also unconsciously contribute to confirmation bias, prompting for specific responses they may be expecting based on their knowledge of the intervention. This is where we introduced the concept of ‘blinding’: briefing field research teams without any reference to the commissioner of the evaluation or the project being evaluated.

Finally, we use independent analysts (and a consistent coding system) to code the data: analysts who are fully briefed on all aspects of the project but have no connection to the project or the commissioner, and therefore no vested interest in ‘success’. In this way we try to address Kay’s (2011) ‘teleological fallacy’: the tendency to see patterns and causal relationships where there are none.

We anticipated a range of potential issues with this approach, but as the paper details, the experience over four different studies in Malawi and Ethiopia was very positive. The three studies we have conducted since completing the ART project (two on a much larger scale) have been equally successful, with both researchers and respondents willing to commit to the study despite not holding all the pieces of the jigsaw puzzle. Fears that no useful or relevant information would be volunteered have proved unfounded; instead it has been satisfying to uncover issues which might not have come to light in other circumstances, to demonstrate positive causal links where the evidence suggests they exist, and to highlight gaps where no such links can be proved.

The paper explores the choreography of this complex web of relationships from technical, political and ethical perspectives. We suggest that double-blind interviewing can be justified as a means to a less biased end (and thereby a more honest and constructive evaluation, which donors may take more seriously), but also that a staged ‘unblinding’ at the end of the project can serve as a form of triangulation, as well as an opportunity to be more open with researchers and intended beneficiaries.

We conclude that these steps can enhance the credibility of evidence, and that the ethical concerns associated with blinding can be addressed by being open with stakeholders about the process, contributing to a more deliberative and less rigid style of development practice.

For more on the QuIP approach, see our Resources page, and particularly the QuIP Briefing Paper, which serves as a good introduction.


Kay, J. 2011. Obliquity: Why our goals are best achieved indirectly. London: Profile.

White, H. and D. Phillips. 2012. Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework. 3ie Working Paper 15. International Initiative for Impact Evaluation. Available at: http://www.3ieimpact.org/media/filer/2012/06/29/working_paper_15.pdf

