Key features

This page summarises some of the key features of the QuIP. If you want to read about any of them in more detail, please refer to the QuIP Casebook – Attributing Development Impact – which is available in hard copy and as a free download.

The QuIP addresses the issue of how to attribute changes to different stakeholders or events, whilst minimising pro-project and other sources of bias, and avoiding the need to interview a control group. There are strong ethical grounds for asking people directly about the effect of actions intended to benefit them, and doing so can also contribute practically to detailed learning, innovation and wider accountability within your organisation.

However, this approach entails finding credible ways to address potential response biases. The QuIP does so by arranging for qualitative data collection to take place without any reference to the project being evaluated: field researchers and respondents are not briefed on the project, which reduces confirmation bias as far as possible up to the point of data analysis. The analysis is then carried out by a separate party, who is fully briefed and can therefore interrogate and code the data against the theory of change. The aim of separating these roles is to keep the analysis as independent as possible.

The QuIP approach also places a strong emphasis on the rigour of good purposive case selection in qualitative data collection, in contrast to the emphasis on representative sample sizes in quantitative studies. Good monitoring data can help to determine the number, geographical location and type of respondents you select, including what you know about variation, such as positive or negative deviance. QuIP studies offer a ‘deep dive’ into a selected group, and case selection should be based on expected saturation within a defined group or location. You can read more on sampling here.
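As a purely illustrative sketch, the snippet below shows one way monitoring data might inform this kind of purposive selection, picking out positive and negative deviants alongside more typical households in each location. The dataset, column names and selection numbers are all assumptions made for the example, not part of the QuIP itself.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

# Hypothetical monitoring data: one row per household, with a location and a
# change in an outcome indicator (e.g. a food security score). The column
# names and values are illustrative assumptions, not part of the QuIP itself.
monitoring = pd.DataFrame({
    "household_id": np.arange(1, 101),
    "district": ["North"] * 50 + ["South"] * 50,
    "outcome_change": rng.normal(loc=0.5, scale=1.0, size=100),
})

def select_cases(df, n_deviant=4, n_typical=8):
    """Purposive selection per district: a few negative and positive deviants
    plus a small group of broadly typical households, not a random sample."""
    selected = []
    for _, group in df.groupby("district"):
        ranked = group.sort_values("outcome_change")
        selected.append(ranked.head(n_deviant))   # negative deviants
        selected.append(ranked.tail(n_deviant))   # positive deviants
        typical = ranked.iloc[n_deviant:-n_deviant].sample(n_typical, random_state=1)
        selected.append(typical)                  # broadly typical cases
    return pd.concat(selected)

cases = select_cases(monitoring)
print(cases.groupby("district").size())          # 16 selected cases per district
```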

A key problem with recording detailed feedback from beneficiaries is what to do with all the data. Qualitative information is difficult to process, analyse and condense into a credible and transparent report. We have developed a simple, semi-automated approach to coding QuIP data which allows for easy analysis and reporting, ensuring that reports are brief and readable, and that the frequency of findings and the source data behind them are completely transparent. The coding system is detailed in the briefing paper and here.
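To make the idea of frequency-based, traceable reporting concrete, here is a minimal sketch of the kind of tally such coding can feed into. The records, field names and categories are invented for illustration and are not the QuIP coding system itself, which is described in the briefing paper.

```python
from collections import Counter

# Illustrative coded causal statements: (respondent, outcome domain, cited
# driver of change, reference to the source answer). These records and
# categories are invented for the example, not the QuIP coding system itself.
coded_statements = [
    ("HH01", "food security", "new seed variety", "Q3"),
    ("HH01", "income", "better market prices", "Q5"),
    ("HH02", "food security", "new seed variety", "Q3"),
    ("HH03", "income", "drought", "Q5"),
]

# Count how often each driver is cited within each outcome domain, keeping a
# list of source references so every count can be traced back to the raw text.
counts = Counter((domain, driver) for _, domain, driver, _ in coded_statements)
sources = {}
for respondent, domain, driver, ref in coded_statements:
    sources.setdefault((domain, driver), []).append(f"{respondent}/{ref}")

for (domain, driver), n in counts.most_common():
    refs = ", ".join(sources[(domain, driver)])
    print(f"{domain}: '{driver}' cited {n} time(s) (sources: {refs})")
```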

By creating a systematic approach, we have reduced the time required to undertake the assessment, thereby reducing the costs considerably. Whilst it isn’t possible to fix a price on a QuIP, as every country has different costs, it is safe to say that it will cost a fraction of the cost of a randomised controlled trial, or any assessment requiring a control group. The following timings may help calculate approximate costs:

  • 1 week of design and preparation
  • 2 weeks of 2 field researchers’ time, both in the field and writing up
  • 2-4 weeks of analysis and reporting (depending on experience)
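
As a rough, hypothetical illustration of how these timings might translate into a budget, the sketch below multiplies them by placeholder day rates; the rates are invented for the example and real figures will vary widely by country and team.

```python
# Rough budget sketch based on the timings above. The day rates are purely
# hypothetical placeholders; real rates vary widely by country and team.
DAYS_PER_WEEK = 5

line_items = {
    # description: (weeks, number of people, assumed day rate in USD)
    "design and preparation": (1, 1, 400),
    "fieldwork and write-up": (2, 2, 250),
    "analysis and reporting": (3, 1, 400),   # midpoint of the 2-4 week range
}

total = 0
for item, (weeks, people, day_rate) in line_items.items():
    cost = weeks * DAYS_PER_WEEK * people * day_rate
    total += cost
    print(f"{item}: ${cost:,}")
print(f"approximate total: ${total:,}")
```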

Development of the QuIP

An early version of the QuIP was developed and trialled by researchers at the University of Bath more than a decade ago. It was extensively redesigned and upgraded during a three-year ESRC/DFID-funded research project at the Centre for Development Studies between 2012 and 2015. This project was called ‘Assessing Rural Transformations’ – known as ART – and involved working with two international NGOs, Self Help Africa and Farm Africa. We trialled the QuIP to assess the impact of four rural development projects in Malawi and Ethiopia, all of which aimed to strengthen rural livelihoods and food security in the context of both rapid commercialisation and climate change.

The quantitative monitoring tool used in the ART project was Evidence for Development’s Individual Household Method. This measured changes in factors contributing to household disposable income relative to basic food needs.

The qualitative assessment was conducted using the QuIP, generating evidence of impact based on narrative causal statements from intended project beneficiaries. The QuIP looked for evidence of attribution through respondents’ own accounts of links between change in their lives and the activities or external factors they considered significant in that change. The output for each project was a brief report summarising the key drivers of change and highlighting evidence of links to the theory of change, or obvious gaps where links were expected. Coding and reporting have moved on significantly since these early days, but the main aim of QuIP studies remains the same.

Following the end of the ART research project, CDS staff set up Bath SDR to continue to develop the QuIP and provide a training and consultancy service. We have subsequently conducted a number of QuIP studies in a range of different countries and contexts.

If you are interested in conducting a QuIP we suggest that you browse our extensive Resources section, and contact us to check how we could support you. We run online training courses and provide bespoke consultancy.