Exam standards

Exam candidates, schools, universities, employers, and government all expect qualification standards to be reliable, valid and, where appropriate, comparable across years, subjects and different exam boards.

This is not, however, always easy to achieve. Over time, not only do the ability and number of candidates change, but most subjects evolve (some of what is currently assessed in GCE ICT, for example, was not in the specification a decade ago; Citizenship Studies is an entirely new subject). Even when subject content remains largely constant, the value attributed to different skills and assessment objectives often changes. In the debate about standards over time, current candidates’ alleged weaknesses compared to earlier generations are often cited; their relative strengths less so.

Moreover, maintaining and aligning standards is a multi-dimensional activity. Awarding organisations are primarily responsible for ensuring standards remain comparable between consecutive years, but it is also important to ensure that standards are maintained across different awarding bodies and between similar subjects.

A great deal of research has, therefore, addressed issues in this area.

Two sources of evidence are currently used to help maintain standards: the judgements of experienced senior examiners, and statistical information and modelling. A strong stream of research has shown that, over time, statistical evidence has become more reliable than examiner judgement. There is evidence that the awarding judgements of even experienced examiners are less reliable than was once supposed, especially when exam specifications are revised regularly. Research has also shown that, whilst it is important to compensate for inevitable fluctuations in the demands of exams (whether harder or easier), examiners tend not to be able to do so with sufficient accuracy.

At the same time, as this evidence mounted, the growing availability of large, candidate-level data sets (thanks to developments in computer technology) has made statistical information an increasingly reliable and valid source of intelligence for determining awarding standards.

Looking to the future, the prospect of aligning standards by means of question banking and inserting common, ‘anchor’ questions in linked assessments is being considered. If implemented on a wide scale, this would not only change the way grades are awarded but, more importantly, would further increase confidence in standards.

Ben Jones, Head of Standards
