Assessing ‘what works’ in school

14 January 2014

Assessment for Learning is regarded as a success in UK education policy. But Rob Coe says that it is hard to implement and may be taking too much of the credit.

For those working in education, and perhaps for many others, school improvement must be one of the most important goals we could work towards. What matters more than this? And for those who follow the evidence from educational research, it seems that improving the use of assessment is one of the most powerful approaches to achieving it.

Closing the gap between the most effective uses of assessment in schools and ‘standard practice’ is likely to make more difference to learning outcomes than pretty much any other change we could make.

The main part of this argument was set out by Paul Black and Dylan Wiliam in their 1998 publication Inside the Black Box. The argument was widely accepted across the UK and elsewhere, and was supported by teachers and by governments. In England, Assessment for Learning (AfL) became a major policy focus, with substantial funding for training and resources. It was the ultimate example of ‘evidence-based policy’: an initiative backed by really strong research evidence for its effectiveness, embraced by teachers, and supported by government policy and funding. Since then, the research evidence for the impact of a range of formative assessment approaches has been strengthened further by reviews such as John Hattie’s Visible Learning and the Education Endowment Foundation’s Toolkit.

So we might be forgiven for thinking that the steady rises in, for example, GCSE performance in England could be attributed, at least in part, to the implementation of AfL. Unfortunately not.

In a recent essay, Improving Education, I argued that the increases we have seen in performance on national tests and examinations are at least partly, probably mostly, and possibly entirely, a result of grade inflation rather than real improvement. Comparable but relatively well-anchored international assessments, such as PISA and TIMSS, do not show the same increases as GCSE. Rises are seen across the three nations that use GCSEs (England, Wales and Northern Ireland), despite differences of context and policy, but not in the Scottish examination system (Change over time in the context, outcomes and inequalities of secondary schooling in Scotland, 1985-2005), despite many similarities of context and of policy intervention. Even if there has been a real rise in attainment, it is almost certainly much smaller than the research studies would have suggested we could expect from implementing formative assessment. And if there has been a rise, many other changes would have competing claims to the credit for the improvement.

This analysis poses a real challenge to the simplistic view that the impact of interventions that we find in research studies can be translated into large-scale, system-wide improvements. If evidence-based practice means doing what the evidence says works, then we have done this, and it didn’t work. How can this be?

A number of recent blogs (e.g. PragmaticReform, LearningSpy, and TeachingBattleground) have pointed to the trivialisation of the key messages of AfL into what actually happens in classrooms. Important but complex ideas, such as improving the use of classroom questioning or helping learners to understand success criteria, get turned into WALT and WILF (‘We Are Learning To’ and ‘What I’m Looking For’), mini-whiteboards and traffic lights, applied indiscriminately and ineffectively. Others have described how the government’s official ‘AfL policy’ distorted AfL into something else (The misrepresentation of Assessment for Learning – and the woeful waste of a wonderful opportunity; Think you’ve implemented Assessment for Learning?).

One of the key lessons is that it is not enough just to know what effective practice is: we have to have a clear strategy for implementing the change from current practice to improved practice. An intervention that works in a research study in which the developer is involved may not work as well when it is scaled up. Encouragingly, the interventions that are currently being evaluated by the Education Endowment Foundation are all required to be potentially scalable.

Another lesson to take away is that changing what teachers do, in ways that are faithful to intentions, sustainable in real classrooms and genuinely effective for pupil outcomes, is very, very hard. In most cases, such change will be the result of intensive, sustained professional development, with expert external input, and opportunities for practising techniques in a supportive coaching environment. Unfortunately, opportunities for this kind of professional development for teachers seem to be extremely rare. Although I am delighted that there has recently been a real increase in the emphasis given by practitioners and policymakers to evidence (see CEM and EBE: Some history), I suspect it will make no difference to pupil outcomes unless we can somehow enable teachers to engage in the kinds of professional development that are likely to improve practice.

I am the biggest fan of evidence-based practice. People who have to make decisions about practice or policy should certainly understand and take account of the best available evidence. But this is not enough; if we want to see real improvements we need to think a bit harder about what else needs to happen to make ‘what works’ work.

Robert Coe is a professor in the School of Education, Durham University.
