News & Views

Aid impact assessment and agricultural change – Researching ‘good enough’ qualitative approaches

By James Copestake (Originally posted at Devlog@Bath)

Using public money to reduce global poverty is a tough enough ‘task’ even without having to account for each pound spent every five minutes. But aid professionals can hardly claim to be less susceptible to self-serving group-think than anyone else, and indeed the case for strong reality checks on aid expenditure will remain particularly strong so long as the power and influence of those it aims to assist remains weak. How then to generate evidence on aid impact that is reliable, affordable and useful?

One reason why this is such a tricky question is that good answers will be tailored to suit different types of aid and contexts. For example, Susan Johnson has already reported in this blog on a DFID-sponsored meeting on impact assessment of aid for financial sector development that took place in January. My focus here is instead on agricultural development, and more specifically a new CDS action research project into methods for assessing NGO impact in the context of complex rural transformations in Africa. Assessing Rural Transformations (ART) is jointly funded by DFID and ESRC, and is being implemented with three development NGOs. Farm Africa and Self Help Africa (SHA) work across ten African countries to reduce poverty by promoting sustainable smallholder agriculture. The third, Evidence for Development, has developed a methodology for monitoring household-level changes in food and economic security called the Individual Household Method (IHM). The primary goal of the ART project is to develop a qualitative impact protocol (or QUIP) for assessing the reasons behind changes in household economic security measured using the IHM, as well as the extent to which Farm Africa’s and SHA’s activities contribute to these changes. Once designed, the QUIP will be piloted over a three-year period by using it to assess the impact of two projects in Ethiopia and two in Malawi.

The methodological question at the heart of the research is how far it is possible to elicit good enough evidence of impact directly from the reported statements of direct project participants as an alternative to statistical analysis of differences between them and a ‘control’ sample of non-participants. The ‘statistically inferred impact’ approach can generate quantifiable estimates of average impact, but scope for generating insights into variation in impact around the mean is heavily constrained by sample size requirements. The ‘self-reported impact’ approach has the potential to generate a more flexible and fine-grained picture of differentiated impact, but is perhaps also more susceptible to bias. The project will explore how biases can be identified and mitigated through careful structuring of the interviews and triangulation against monitoring data collected independently using the IHM.

On 13th March, two dozen staff from a range of development agencies attended the ART project launch in London. A short presentation of the project prompted a lively discussion with plenty of good and challenging questions! This is summarised below in the form of considered responses to ten of the questions raised.[1] The next step in the project is the QUIP design workshop to be held in May 2013. Comments and suggestions prior to this are particularly welcome: please add your comments to this blog, or email j.g.copestake@bath.ac.uk to continue the conversation…

1. Why does the IHM focus on household income rather than consumption, particularly when the correlation between household income and nutritional outcomes is known to be weak?

Income remains an unavoidable starting point for thinking about the capacity to consume. Consumption is no easier to measure than income – indeed arguably even harder. It is possible to extend or integrate IHM with other data collection methods to address specific nutrition questions.

2. IHM appears to address many of the standard problems that arise when assessing income (e.g. the importance of casual labouring). But what are the limitations of reliance on respondent recall?

Use of the IHM to date has relied on recall of income over a whole year. The key to reducing recall bias is the way the interview is structured: person by person, season by season, activity by activity. Much of the literature on recall is not specific enough about how the questions under examination were framed, and this undermines the validity of its conclusions. Experience of using the IHM with poor rural respondents across Africa is that their recall is often remarkably accurate, perhaps because the questions refer to their core life and livelihood activities.

3. What complementary data from secondary sources, including other household surveys, is being incorporated into the research to aid triangulation?

IHM and QUIP are intended as stand-alone methods for addressing questions about specific development interventions, at a level of specificity that general household surveys cannot directly address. However, collection of secondary data and other contextual information is also important – e.g. when assessing the relative poverty status of households at baseline. Comparing observed changes in key indicators (e.g. crop production and income) for project participants with those collected through household surveys across a wider population can also be a useful way of assessing the impact of specific projects.

4. Why not include control group interviews as well?

The IHM has indeed made wide use of whole-village sampling so as to generate understanding of programme exclusion and of spill-over effects from one household to another. In contrast, the ART project seeks to avoid the need to collect data from control groups for impact analysis – the strategy being to rely more on the counterfactual implicit in each respondent’s own explanation of the livelihood changes they experience. This is at least in part because we are seeking to avoid the ethical problem of requesting time and data from households with no direct stake in a particular project. Nevertheless, including some non-participating households will increase the range of ‘exposure variation’ within samples, and this may well generate useful evidence to support impact claims: the QUIP approach should reduce the need for control or comparison group interviews, but need not exclude them.

5. If the QUIP cannot produce quantitative estimates of impact how helpful will it be? Will the QUIP not just produce circumstantial evidence of association rather than causation?

Understanding how and why some interventions work and others don’t for different people is far from trivial; and spuriously precise or misdirected quantification can also be a major weakness in impact assessment. For example, reliable evidence that activity X contributes to outcome Y for households of Type A but not Type B is valuable even if it falls short of quantifying precisely how much of Y can be attributed to X overall. Distinguishing between association and causation is a problem for all approaches to impact assessment, and one that more qualitative approaches can address by gathering detailed evidence of causal mechanisms.

6. How far is the project explicitly challenging conventional wisdom about attribution? Why not use a mixed approach and combine the QUIP with randomised controlled trials (RCTs)?

While some may see RCTs as the ‘gold standard’, there is probably a wider consensus in favour of using a range of methods to serve different purposes according to context. Selecting appropriate horses for courses entails making trade-offs between internal, external and construct validity. Our premise is that approaches relying more on self-reported attribution have a potential comparative advantage over other methods in addressing construct validity and in revealing within-sample differences in impact more cost-effectively. This does entail finding a way for interviews to elicit plausible statements (i.e. ones that are consistent both internally and with other data) about change relative to what would have happened otherwise (i.e. relative to an explicit or implicit counterfactual). One challenge here is that framing questions too explicitly around a particular project risks introducing pro-project bias. To mitigate this, the QUIP must be conducted by evaluators who are independent of the NGO and who are alert to the distinction between responses that volunteer attribution to specific drivers and those that are prompted to do so. The key here is good interviewing practice.

7. What forms of triangulation to reduce response bias are you planning to build into QUIP?

Self-reported attribution will be triangulated with secondary data (e.g. about contextual shocks) and with quantitative evidence on changes in household economic and food security obtained using the IHM. Cognitive debriefing of interviewers is also an important form of triangulation.

8. How far has or will thinking about the QUIP be informed by cognitive psychology?

This is indeed important – Kahneman’s work on cognitive bias, for example. Of course, collection of so-called objective data is not immune from such biases, but asking more complex ‘what if’ questions is likely to be more challenging still. It is partly for this reason that our focus is on relatively straightforward questions of food and economic security rather than more complex indicators of wellbeing, such as those explored by the CDS Wellbeing in Developing Countries (WeD) and Pathways out of Wellbeing and Poverty projects at Bath, both of which involved collaborative work with psychologists.

9. What potential is there for the IHM and QUIP to contribute towards measuring household resilience and households’ ability to manage risk?

The IHM is designed to assess resilience both by revealing how diversified a household’s income sources are, and by permitting modelling and simulation of the impact of shocks on household income and food security. The ART project also aims to measure household income over three successive years, and this may permit some analysis of sources of income variability over time as well as between households. Respondents may reveal attitudes towards risk and its management through the QUIP interviews, but this is not something we are addressing explicitly.
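To make the idea of shock simulation a little more concrete, the following is a minimal illustrative sketch only – it is not the IHM software or the ART project’s analysis, and the income sources, shock size and food-security threshold are invented for the example. It simply shows how a recorded household income profile could be stress-tested against a hypothetical crop loss.

# Illustrative sketch only: a toy stress test of household income against a shock.
# All figures below (income sources, shock size, threshold) are invented for
# illustration and are not drawn from the IHM or ART project data.

def simulate_shock(income_sources, shock_source, shock_fraction):
    """Return total household income after reducing one income source by a fraction."""
    shocked = dict(income_sources)
    shocked[shock_source] = shocked.get(shock_source, 0) * (1 - shock_fraction)
    return sum(shocked.values())

# A hypothetical household income profile (currency units per year).
household = {"maize sales": 300, "casual labour": 150, "livestock": 100}

baseline = sum(household.values())
after_drought = simulate_shock(household, "maize sales", 0.5)  # 50% crop loss

FOOD_SECURITY_THRESHOLD = 400  # invented threshold for the example

print(f"Baseline income: {baseline}")
print(f"Income after 50% maize loss: {after_drought}")
print("Food secure after shock:", after_drought >= FOOD_SECURITY_THRESHOLD)

In this toy example the more diversified a household’s income sources, the smaller the proportional fall in total income when any single source is shocked – which is the intuition behind using income diversification as one indicator of resilience.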

10. How far can the research go in addressing issues of empowerment within households and at the community level, particularly of women?

SHA, Farm Africa and the IHM do place a lot of emphasis on household-level indicators, as do the cooperatives and other local organisations with whom they work. But reliable data on income sources does provide an important foundation for deeper questioning about intra-household decision making, resource allocation and outcomes. The QUIP focuses on individuals, and one option is to use it to explore differences in priorities, perceptions and wellbeing outcomes between members of the same household.


[1] Thanks to Maren Duvendack (ODI), Claire Hutchings (Oxfam), Jonathan Finighan (ALINe), Richard Ewbank (Christian Aid), Ana Marr (NRI), Kate Wellard (NRI), Barbara Befani (IDS), Ajay Sharma (DFID) and Matthew Powell (OPM) for the questions.
