All change, please

The recent replacement of the ministerial team at the Department for Education neatly symbolises just how much change the education sector in England is experiencing. Changing times call for evidence-based discussion of basic issues in assessment and public examinations, says CERP’s new Director, Alex Scharaschkin.

Talking about the ubiquity of change, the Greek philosopher Heraclitus famously said ‘you can’t step into the same river twice’. Having myself just returned to a full-time interest in educational assessment after a spell working at the National Audit Office, I can certainly confirm that it feels like I’ve stepped into a very different ‘river’ from the one I stepped out of several years ago. But just as different rivers share some basic characteristics, so the basic issues relating to assessment, and in particular assessment via public examinations, remain as pertinent as they have ever been. In particular, the questions of what is assessed, and how it is assessed, still motivate much research on validity and reliability that previous commentators in this series have highlighted.

But what seem to me to have become notably more prominent in public debate, since I was last working in this area, are the fundamental questions of why assessment is carried out (to inform? to select? to hold to account?); and who is being assessed (pupils? teachers? government policies?).

This means that even more importance attaches to issues such as how we define ‘validity’ in the first place (see Paul Newton’s comment piece on this website); and how we think about the relationship between the design of qualifications such as GCSEs and A-levels, and the wider accountability systems of which they form a part.

It means thinking about when assessment is done: how our tests relate to the more formative assessments that are used to inform the learning process during students’ educational careers. This use of assessment requires richer, more diagnostic reporting of results, but also different approaches to test administration and analysis.

It also means keeping a focus on the basic assessment issues that sit underneath the day-to-day business (and busyness!) of managing the transition to new GCSEs and A-levels, such as:

  • What model of measurement should guide the way we assess and award these kinds of qualifications? How do we assign meaningful value to the performances that students produce in response to them? What does that mean, in turn, for how we design mark schemes and train exam markers? Or should we be using other approaches to rank-order students’ performances?
  • ‘Performance standards’ (such as ‘the typical grade 5 performance in GCSE English’) are necessarily imprecise concepts, but does that mean they are meaningless? Is there a way we can enhance the meaning of the different levels, perhaps by learning from rigorous ways of working with imprecise concepts in other fields, such as mathematics and computing?
  • Given the importance, for so many different stakeholders, of precisely how marks on the new qualifications will be converted into grades, can we do more to ensure that the procedures awarding bodies are obliged to follow, which draw on expert judgement and statistics to guide those decisions, are as sound as possible? Can we draw more on research, from CERP and others, on the advantages and disadvantages of different kinds of evidence, and on research about combining evidence of different strengths to arrive at overall judgements?
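To make the second bullet concrete: one rigorous framework from mathematics and computing for handling imprecise concepts is fuzzy set theory, in which a vague category admits degrees of membership rather than a sharp boundary. The sketch below is purely illustrative, not a description of any awarding body’s method; the grading concept, mark range, and function name are all hypothetical assumptions chosen for the example.

```python
# Illustrative sketch only (an assumption, not CERP's or any awarding
# body's procedure): a 'performance standard' such as "a typical
# grade 5 performance" modelled as a fuzzy set. Instead of a sharp
# mark cut-off, each mark gets a degree of membership in [0, 1].

def grade5_membership(mark, lower=55.0, upper=65.0):
    """Degree to which `mark` counts as a 'grade 5 performance'.

    Marks at or below `lower` are clearly not grade 5 (membership 0);
    marks at or above `upper` clearly are (membership 1); in between,
    membership rises linearly. The boundary values are hypothetical,
    chosen only for illustration.
    """
    if mark <= lower:
        return 0.0
    if mark >= upper:
        return 1.0
    return (mark - lower) / (upper - lower)

print(grade5_membership(50))   # clearly below the standard -> 0.0
print(grade5_membership(60))   # borderline performance -> 0.5
print(grade5_membership(70))   # clearly meets the standard -> 1.0
```

The point of such a model is not to replace expert judgement but to make the imprecision of a performance standard explicit and workable, so that borderline performances can be discussed in graded terms rather than forced into a binary.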

This is the 100th Perspectives piece we’ve published on the CERP website. As we enter changing times – the next phase of the English public examination system, and the next century of Perspectives – I look forward to the debate on these issues. Changing approaches, if they are to be effective, must be informed by a solid research base. CERP, with its broad expertise in both assessment theory and practice, is uniquely placed to contribute to that. 

Alex Scharaschkin is the Director of the Centre for Education Research and Practice
