Scaling new heights


Kate Kelly reports from a special seminar hosted by CERP to examine how comparative judgement might be used in assessment

Although comparative judgement (CJ) has its roots in the 1920s, it’s really only in the last 25 years that researchers have considered how to apply it to educational assessment in the UK. The principle behind comparative judgement is straightforward: by repeatedly comparing objects against each other in terms of some factor of interest, you can create a highly reliable scale for that factor. It’s a simple idea with enormous implications, and it’s been gathering steam to become one of the hot topics of 2015.
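To make the principle a little more concrete, here is a minimal sketch of how a set of pairwise judgements might be turned into a scale. It assumes a Bradley-Terry model, which is just one common way of analysing CJ data (the seminar speakers may well have used other models or dedicated software), and the scripts A-D and the judgements are invented purely for illustration.

# A minimal sketch: pairwise judgements to a scale, assuming a Bradley-Terry model.
from collections import defaultdict

# Each judgement records (winner, loser) from one comparison of two scripts.
judgements = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"),
              ("A", "C"), ("A", "D"), ("B", "D")]

scripts = sorted({s for pair in judgements for s in pair})
wins = defaultdict(int)   # comparisons won by each script
n = defaultdict(int)      # comparisons made between each pair of scripts
for winner, loser in judgements:
    wins[winner] += 1
    n[frozenset((winner, loser))] += 1

# Start every script at the same strength, then apply the standard
# minorisation-maximisation update for the Bradley-Terry model:
#   strength_i <- wins_i / sum_j [ n_ij / (strength_i + strength_j) ]
strength = {s: 1.0 for s in scripts}
for _ in range(200):
    new = {}
    for i in scripts:
        denom = sum(n[frozenset((i, j))] / (strength[i] + strength[j])
                    for j in scripts if j != i and n[frozenset((i, j))])
        new[i] = wins[i] / denom if denom else strength[i]
    mean = sum(new.values()) / len(new)
    strength = {s: v / mean for s, v in new.items()}   # fix the scale's origin

# A higher strength means the script tended to win its comparisons.
for s in sorted(scripts, key=strength.get, reverse=True):
    print(s, round(strength[s], 2))

A real CJ exercise involves far more scripts, judges and comparisons, and dedicated software, but the underlying calculation is essentially this simple.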

Here at CERP, we’ve been considering how to use CJ to improve procedures for marking and grading – and we’re not the only ones. On 11 June, we gathered some of the UK’s leading experts for a seminar on the use of comparative judgement in assessment.

Alastair Pollitt, a pioneer of CJ in English educational assessment, kicked off proceedings with a fascinating talk that outlined the history and theory behind CJ, and explained the mathematical strength and simplicity of the approach. He went on to argue that, as well as producing highly reliable scales, pooling the judgements of experienced professionals should also result in very high validity.

Next up was Cambridge Assessment’s Tom Bramley. Drawing on his wealth of experience, Tom gave a thought-provoking overview of the practicalities of using CJ, covering vital considerations from the technical (which software to use for the analysis) to the more philosophical (what information should judges be given?).

Claire Whitehouse flew the flag for CERP and presented her intriguing study of the validity of Geography examiners’ comparative judgements. Claire highlighted that, for a judgement to be valid, it must be based on differences in the extent to which the assessment criteria are met. Fortunately, she found that most judgements related clearly to the mark scheme, with minimal impact from irrelevant features such as handwriting.

Rounding off the afternoon was Chris Wheadon, founder of No More Marking, with a demonstration of one of his website’s new features. When you add new information to a CJ exercise, the values on the scale will change. That isn’t always desirable: if you have used those values to set a cut-score, for example, you don’t want them to shift. Chris showed us how anchoring can resolve this problem while still incorporating the new information.
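To see roughly what anchoring means in practice, the sketch below extends the Bradley-Terry example above. This is only the general idea, not a claim about how the No More Marking site implements it: scripts that already have scale values keep them fixed, and only a newly added script is estimated, so a cut-score set on the existing scale keeps its meaning.

# A rough illustration of anchoring, continuing the sketch above.
from collections import defaultdict

def fit_anchored(judgements, anchored, iterations=200):
    """Estimate strengths for new scripts while holding the scripts in
    `anchored` (a dict of script -> existing scale value) fixed."""
    wins, n = defaultdict(int), defaultdict(int)
    scripts = {s for pair in judgements for s in pair}
    for winner, loser in judgements:
        wins[winner] += 1
        n[frozenset((winner, loser))] += 1

    strength = dict(anchored)                    # anchored values never move
    free = [s for s in scripts if s not in anchored]
    strength.update({s: 1.0 for s in free})      # starting values for new scripts
    for _ in range(iterations):
        for i in free:                           # update only the new scripts
            denom = sum(n[frozenset((i, j))] / (strength[i] + strength[j])
                        for j in scripts if j != i and n[frozenset((i, j))])
            if denom:
                strength[i] = wins[i] / denom
    return strength

# A new script E is judged against A and C, whose scale values from the
# earlier fit (made up here) are held where they are.
new_judgements = [("E", "C"), ("A", "E"), ("E", "C")]
print(fit_anchored(new_judgements, anchored={"A": 2.1, "C": 0.7}))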

It’s clear that while CJ has enormous potential, there remain a number of unanswered questions. But as they say, ‘if we knew what we were doing, it would not be called research, would it?’ There’s still a lot of work to be done before we know how best to implement CJ, but it’s also an exciting opportunity for CERP to be involved in the development of a potentially revolutionary approach. After all, the chance to create new knowledge is why most of us became researchers in the first place.

Kate Kelly
