Compare and contrast
What does that innocuous ‘%’ symbol really mean? Kate Kelly ponders percentages
‘60% of the time, it works every time’ – Brian Fantana, Anchorman
Exams might be over for the students, but there are papers to mark and results to process. CERP plays a pivotal role in setting AQA’s grade boundaries. It’s a labour-intensive process. Chocolate consumption skyrockets.
This year, I’m trying to eat more healthily. Perhaps I should swap my usual chocolate bar for some raisins? Let’s consider the nutritional information: a Cadbury Dairy Milk bar contains 54% sugar, whereas Sun-Maid raisins contain 71.4% sugar (source: Sainsbury’s). The chocolate bar would seem to be the winner if I want to lower my sugar intake. However, the percentages only tell half the story. A snack-size serving of Sun-Maid raisins is 14g, while an average chocolate bar is 45g. So if I were to eat the chocolate bar, I’d consume nearly 25g of sugar, whereas if I opted for the raisins, I’d consume only 10g.
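For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python using the serving sizes and sugar percentages quoted above (the snack names are just labels for this illustration):

```python
# Sugar per serving = serving size (g) x sugar fraction.
# Figures are those quoted in the article.
servings = {
    "Dairy Milk bar": (45, 0.54),     # 45g serving, 54% sugar
    "Sun-Maid raisins": (14, 0.714),  # 14g serving, 71.4% sugar
}

for snack, (grams, sugar_fraction) in servings.items():
    sugar = grams * sugar_fraction
    print(f"{snack}: {sugar:.1f}g of sugar per serving")
```

The snack with the higher *percentage* of sugar delivers less than half the *quantity* of sugar, purely because the serving is smaller.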
That’s the trouble with percentages: they are very simple to calculate, but they can be devilishly tricky to interpret. The joy of percentages is that they allow you to draw direct comparisons across different things – like chocolate and raisins – but it is easy to forget that it is proportions that are being compared, not quantities. That little ‘%’ sign can easily trick you into forgetting that comparisons are not always like for like.
Take grade outcomes: GCSE and A-level results are almost always reported as cumulative percentages, and schools, parents, students and journalists tend to use the published results to comment on differences in performance. For example, a school may consider whether their students are more likely to achieve a top grade if they enter for GCSE Fictitious Studies in January instead of June. If the percentage achieving A or A* was 1% in June and 5% in January, it could be tempting to conclude that the January series was easier because a greater proportion of students obtained the higher grades.
However, the percentages obscure the fact that we may be comparing two very different cohorts of students. Generally speaking, far fewer students enter their examinations in January than in June, so we are probably comparing a very small cohort with a much larger one. Moreover, students who enter in January are usually taking their exam after a shorter period of study, so the two cohorts are likely to perform differently. If students have been entered early because they are particularly good at the subject, a higher outcome – and thus percentage – for the January series would be expected. A simple comparison of the outcomes tells us nothing about the relative difficulty of the two exam series.
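To make the cohort effect concrete, here is a toy illustration in Python. Every number below — the entry figures and the grade counts — is invented for the sketch; the point is only that a small, able cohort and a large, mixed one can produce the 5% and 1% figures above without either series being easier:

```python
# Hypothetical cohorts for the fictitious GCSE example.
# All entry and grade figures are made up for illustration.
cohorts = {
    "January": {"entries": 200, "a_or_above": 10},     # small, early-entry cohort
    "June": {"entries": 20_000, "a_or_above": 200},    # much larger cohort
}

for series, cohort in cohorts.items():
    pct = 100 * cohort["a_or_above"] / cohort["entries"]
    print(f"{series}: {pct:.0f}% at A or A* "
          f"({cohort['a_or_above']} of {cohort['entries']} entries)")
```

The January percentage is five times the June one, yet January accounts for only a twentieth as many top grades in absolute terms — and nothing in these figures tells us anything about the difficulty of either paper.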
The same logic applies to any comparison of outcomes – whether year-on-year changes, differences between awarding organisations, or comparisons across different subjects. To make any meaningful interpretation, we need to know something more about the data. And of course, this doesn’t just apply to percentages. When people complain about statistics being misleading, it is often because the statistic is presented without the necessary context to interpret it.
It is also worth remembering that any statistic is simply a numerical description; statistics alone cannot be used to explain any sort of outcome or trend. When statistics are invoked to explain a difference, it is not the figure itself that is important but the conditions surrounding the underlying data. This is why it is so important that experiments are well designed.