Shampoo surveys and intervention research

Not all research is equal, so how can you tell the good studies from the bad when looking at the evidence for an educational intervention?

A quick internet search for educational intervention programmes brings up a bewildering multiplicity of schemes – for maths, for literacy, for special needs students – all designed to improve performance on some kind of educational outcome. So when faced with choosing a programme, research evidence can be a powerful ally, guiding you towards the interventions that work and away from the expensive time-wasters. But beware, it’s not as straightforward as it might seem.

We’re all familiar with ‘shampoo’ research, where adverts claim that ‘90% agree our product leaves their hair shiny’ based on a sample of 12 people. But the issues affecting research into the effectiveness of interventions are often far subtler. They are easy for the lay person to overlook, and they present a particular challenge for anyone tasked with accurately communicating research to the public. So what should you be aware of when looking into the evidence for interventions?

The most important question is: does the study have a control group? A lot of studies test a group of pupils before and after they receive an intervention, and any significant positive change is considered proof of effectiveness. But would those pupils have improved anyway? After all, they are usually still receiving normal classroom teaching as well as the intervention. In shampoo terms, would their hair have been shiny even if they hadn’t used that brand of shampoo? Would any type of shampoo leave their hair shiny? Without drawing comparisons with a similar group of people who hadn’t used that shampoo, or received that intervention, there is no way of knowing. Those people are the control group.
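To see why this matters, here is a minimal simulation sketch in Python (every number in it is invented for illustration): pupils improve from normal classroom teaching alone, so the intervention group’s pre/post gain looks impressive by itself, and only the comparison with a control group isolates the intervention’s true, much smaller effect.

```python
# Minimal sketch: why a pre/post gain alone can mislead without a control
# group. All constants below are hypothetical, chosen only for illustration.
import random

random.seed(42)

N = 100               # pupils per group (assumed)
NATURAL_GAIN = 5.0    # assumed improvement from normal teaching alone
TRUE_EFFECT = 1.0     # assumed (small) extra effect of the intervention

def post_score(pre, extra):
    """Post-test score: pre-test + natural gain + any extra effect + noise."""
    return pre + NATURAL_GAIN + extra + random.gauss(0, 2)

pre_treat = [random.gauss(50, 10) for _ in range(N)]
pre_ctrl = [random.gauss(50, 10) for _ in range(N)]

gain_treat = sum(post_score(p, TRUE_EFFECT) - p for p in pre_treat) / N
gain_ctrl = sum(post_score(p, 0.0) - p for p in pre_ctrl) / N

print(f"Intervention group gain: {gain_treat:.1f}")  # looks impressive alone
print(f"Control group gain:      {gain_ctrl:.1f}")   # ...but so does this
print(f"Estimated effect (difference): {gain_treat - gain_ctrl:.1f}")
```

Run as written, the intervention group gains around six points – but the control group gains around five without any intervention at all, so the credible estimate of the intervention’s effect is the one-point difference, not the headline gain.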

The matter of exactly who is included in the study can also be tricky. In some research, only those pupils who complete the whole programme are included. This is usually very sensible, since the dropouts didn’t really receive the intervention. But it’s important to ask why they didn’t finish. Are pupils for whom the programme is not working removed halfway through? Was the programme having negative effects on those pupils? And crucially, how many dropped out? Imagine a shampoo survey is based on results after three washes. The shampoo is found to be very effective for the 10% who used it three times. However, the other 90% could not complete the survey as their hair fell out after one wash. Would you consider the shampoo successful?
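A toy calculation makes the arithmetic plain (the figures are the made-up ones from the shampoo example above). Counting only completers – a ‘per-protocol’ rate – gives a wildly different answer from counting everyone who started, as an intention-to-treat style analysis would:

```python
# Toy illustration of how dropouts distort results. All figures are the
# invented ones from the shampoo example, not real survey data.

started = 100    # people who began the three-wash survey
completed = 10   # the 10% who managed all three washes
shiny = 10       # successes among the completers (all had shiny hair)
# the other 90 dropped out because their hair fell out after one wash

per_protocol = shiny / completed       # 1.00 -> "100% agree!"
intention_to_treat = shiny / started   # 0.10 -> a rather different story

print(f"Per-protocol success rate:       {per_protocol:.0%}")
print(f"Intention-to-treat success rate: {intention_to_treat:.0%}")
```

The second figure is the honest one: a shampoo (or an intervention) that most participants could not even finish is hardly a success, however shiny the survivors.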

When choosing an intervention, it’s not enough just to check that the research behind it is sound. It’s also worth considering who the intervention works for. Even if all the participants are included in the results, it’s unlikely that the intervention is equally successful for all of them. If your chosen intervention has no effect on those you most want to target, but is very helpful for everybody else, it is unlikely to be an efficient use of your resources. Similarly, applying a proven intervention to a completely different group of people may not yield the same results – rather like using shampoo in the dishwasher.
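As a hypothetical illustration (again, every number is invented), here is how a healthy-looking average effect can hide the fact that the target pupils gain nothing:

```python
# Hypothetical data: an intervention whose average effect looks worthwhile,
# but which does nothing for the very pupils it was meant to help.

# (pupil group, score gain) – all values invented for illustration
results = [
    ("target", 0.0), ("target", 0.2), ("target", -0.1), ("target", 0.1),
    ("other", 6.0), ("other", 5.5), ("other", 6.3), ("other", 5.8),
]

def mean_gain(group=None):
    """Average gain, optionally restricted to one group of pupils."""
    gains = [g for grp, g in results if group is None or grp == group]
    return sum(gains) / len(gains)

print(f"Overall mean gain: {mean_gain():.1f}")          # looks worthwhile
print(f"Target pupils:     {mean_gain('target'):.1f}")  # barely moves
print(f"Everyone else:     {mean_gain('other'):.1f}")   # drives the average
```

Whenever a study reports only an overall effect, it is worth asking whether the headline number is driven by pupils quite unlike the ones you want to help.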

This is only a brief outline of a few of the potential pitfalls in using research to choose an educational intervention. But hopefully it has highlighted some of the complexities involved in making evidence-based decisions. Evidence-based policy is an admirable goal, but the quality of the decision depends on the quality and appropriateness of the research used to make it.

Kate Kelly
