Coding and visualisation

Data Analysis

A common issue with qualitative impact assessment is how to organise and make sense of large quantities of textual data, and to do so transparently so that generalisations drawn from it can be peer reviewed. Various software packages are available on the market to assist with this task, and most can be used with the QuIP coding system, given some preparation to set them up appropriately. At BSDR we have invested in our own bespoke data analysis solution, designed to speed up and standardise the coding and analysis process for our reporting purposes, but this does not preclude others from using the QuIP coding approach in other qualitative analysis software.

Where possible we use a three-step approach to coding, employing standard routines to aid speed and transparency:

1 – Attribution

2 – Drivers of change

3 – Primary outcomes, secondary outcomes, and tertiary outcomes

Data management and visualisation

The QuIP coding system yields large quantities of rich and ordered data on the complexity of people’s lives and the key influences shaping them. In order to manage, analyse and present this data in a meaningful way, one that ensures the respondent voice is kept at the heart of the process, BSDR uses an analytics platform called MicroStrategy.

MicroStrategy allows us to:

  • analyse trends
  • filter by attribution
  • filter by drivers and outcomes
  • analyse relationships between drivers and primary, secondary and tertiary outcomes
  • filter by respondent characteristics (gender, location, economic status, etc)
  • visualise data to help report on complexity
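
BSDR runs these queries in MicroStrategy, but the same kinds of filters and cross-tabulations can be carried out in any analysis tool once the coded statements are exported. Purely as an illustration, the sketch below uses Python and pandas on a hypothetical export; the file and column names are invented for this example and are not part of the QuIP system:

```python
import pandas as pd

# Hypothetical export: one row per coded cause-and-effect statement.
statements = pd.read_csv("quip_coded_statements.csv")

# Filter by attribution: statements explicitly linked to project activities.
explicit = statements[statements["attribution"] == "explicit"]

# Filter by respondent characteristics, e.g. female respondents in one location.
women_in_a = statements[
    (statements["gender"] == "female") & (statements["location"] == "District A")
]

# Relationships between drivers and primary outcomes: how often each pairing is cited.
driver_outcome_counts = (
    statements.groupby(["driver", "primary_outcome"])
    .size()
    .reset_index(name="citations")
    .sort_values("citations", ascending=False)
)
print(driver_outcome_counts.head(10))
```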

To read more about QuIP visualisations and coding, please see our paper: QuIP and the Yin/Yang of Quant and Qual: How to navigate QuIP visualisations

 

Attribution

Unlike the field researchers, QuIP data analysts need to be fully briefed about the details of the project. This is because their first coding task is to assess how the data relates to the project’s theory of change by systematically identifying cause-and-effect statements embedded in it, according to whether they (a) explicitly attribute impact to project activities, (b) make statements that are implicitly consistent with the project’s theory of change, or (c) refer to drivers of change that are incidental to project activities. These statements are also classified by outcome domain, and coded according to whether respondents described their effects as positive or negative. These fixed ‘attribution codes’ are detailed below.
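
The classification itself has a simple structure: one of three attribution categories, an outcome domain, and a positive or negative direction. A minimal sketch of that structure follows, using illustrative labels only (the fixed QuIP attribution codes are the ones detailed below):

```python
from dataclasses import dataclass
from enum import Enum

class Attribution(Enum):
    # Illustrative labels only; the fixed QuIP attribution codes are detailed below.
    EXPLICIT = "explicitly attributes impact to project activities"
    IMPLICIT = "implicitly consistent with the project's theory of change"
    INCIDENTAL = "driver of change incidental to project activities"

@dataclass
class AttributionCode:
    attribution: Attribution
    outcome_domain: str   # the outcome domain used to structure the interview
    positive: bool        # True if the respondent described the effect as positive
```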

 

Drivers of change

The second level of coding is more inductive, with analysts identifying and grouping together more specific causes or drivers of positive and negative change by outcome domain, whether attributable to the project or not.

Outcomes 

The third, fourth, and fifth levels of coding identify selected outcomes of change (primary, secondary, and tertiary) in more detail than is possible using the outcome domains that structure the interviews alone. This in turn allows for more detailed analysis of different cause-outcome configurations, which can be clustered together.
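
Taken together, the five coding levels mean each cause-and-effect statement ends up carrying an attribution code, a driver, and up to three linked outcomes. As a rough sketch of what one coded record might look like (all field names and values here are hypothetical, for illustration only):

```python
# One coded cause-and-effect statement, with all five coding levels attached.
coded_statement = {
    "respondent_id": "HH-012",                           # links back to the interview
    "attribution": "explicit",                           # level 1: attribution
    "driver": "agricultural training",                   # level 2: driver of change
    "primary_outcome": "higher crop yields",             # level 3: primary outcome
    "secondary_outcome": "increased household income",   # level 4: secondary outcome
    "tertiary_outcome": "improved diet",                 # level 5: tertiary outcome
    "outcome_domain": "food production",
    "direction": "positive",
}
```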

Once coding is complete it is possible to query the data set, looking for causal relationships and patterns which can be visualised in network diagrams. This allows analysts to see causal chain relationships between drivers and multiple related outcomes. The solution BSDR has developed produces tables and dashboards which make it possible to drill down into the source text behind each coded item. The data can also be exported in numerical form and further interrogated, providing a quick overview of the extent to which the data collectively validates or challenges the theory of change behind the project.
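
As a rough illustration of the network-diagram step (this is not BSDR’s own solution), the coded chains can be assembled into a directed graph whose edges are weighted by how often respondents cited each link. The sketch below uses Python with pandas and networkx, and assumes the same hypothetical export as above:

```python
import pandas as pd
import networkx as nx

# Hypothetical export: one row per coded statement, with driver and outcome columns.
statements = pd.read_csv("quip_coded_statements.csv")

G = nx.DiGraph()
# Each statement contributes a causal chain: driver -> primary -> secondary -> tertiary.
for _, row in statements.iterrows():
    chain = [row.get("driver"), row.get("primary_outcome"),
             row.get("secondary_outcome"), row.get("tertiary_outcome")]
    chain = [node for node in chain if pd.notna(node)]
    for source, target in zip(chain, chain[1:]):
        if G.has_edge(source, target):
            G[source][target]["citations"] += 1
        else:
            G.add_edge(source, target, citations=1)

# The most frequently cited causal links, ready to lay out as a network diagram.
top_links = sorted(G.edges(data="citations"), key=lambda edge: edge[2], reverse=True)
for source, target, citations in top_links[:10]:
    print(f"{source} -> {target} ({citations} citations)")
```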

More detailed analysis is possible if QuIP data can be supplemented with information from project staff about each respondent’s precise involvement with the project (e.g. training, receipt of cash transfers or in-kind inputs). This then permits the coding to also reveal gaps in responses, or to highlight areas where respondents have fared badly relative to what might have been expected.
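
As a sketch of how that supplementary information might be joined onto the coded data, assuming hypothetical file and column names:

```python
import pandas as pd

# Hypothetical files: coded statements, plus project records of each
# respondent's involvement (training, cash transfers, in-kind inputs).
statements = pd.read_csv("quip_coded_statements.csv")
involvement = pd.read_csv("project_involvement.csv")  # respondent_id, received_training, ...

merged = statements.merge(involvement, on="respondent_id", how="left")

# Respondents recorded as having received training who never cited it as a
# driver of change: a gap in responses worth following up.
trained_ids = set(merged.loc[merged["received_training"] == True, "respondent_id"])
cited_ids = set(merged.loc[merged["driver"] == "agricultural training", "respondent_id"])
gaps = trained_ids - cited_ids
print(f"Trained respondents who did not cite the training: {sorted(gaps)}")
```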