Testing the validity of judgements about geography essays using the Adaptive Comparative Judgement method

Adaptive comparative judgement (ACJ) is an alternative to marking that presents judges with pairs of students’ work and asks them to decide, holistically, which piece of work contains more of a specified trait or set of traits. There are a number of reports on the highly reliable rank orders achieved using ACJ. However, none of these reports addresses the validity of the criteria on which judges base their decisions. The reliability of the rank ordering of 564 AS-level geography essays by 23 teachers or examiners of geography was reported previously. The judges in this study were asked to use their professional judgement when making decisions about essays; they were not provided with mark schemes or assessment objectives, but two importance statements were made available to them. After most judgements (92.4% of the total), the judges in this empirical study made notes about what was at the forefront of their minds when they made a decision between two essays. The investigation reported here uses thematic analysis of these notes to identify and test the validity of the criteria the judges used to make their decisions. On the whole, the judges used the language of the mark scheme and the assessment objectives when describing the knowledge demonstrated in the essays. They used language from these two sources to a lesser extent when describing skills, but indirect links could nonetheless be established between the content of the notes and existing documents. These links demonstrate that the judges drew on existing shared criteria; thus, the validity of the criteria used by judges in their decision-making was confirmed. However, these criteria are already established as part of examiner training, marking and teacher support. The implications of this for the introduction of ACJ as a replacement for marking are discussed.
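To illustrate the method the abstract describes: ACJ systems typically convert judges' pairwise "this one is better" decisions into a rank order by fitting a statistical model such as Bradley-Terry. The sketch below is not taken from the paper; the essay labels, judgement data, and helper function are invented for illustration, assuming a standard minorisation-maximisation (MM) fit of the Bradley-Terry model.

```python
# Illustrative sketch (not the authors' code): turning pairwise judgements
# into a rank order with a Bradley-Terry model, a common basis for ACJ.
# All data below are invented for the example.

def bradley_terry(items, judgements, iterations=100):
    """Estimate a 'strength' for each item from pairwise wins.

    judgements: list of (winner, loser) tuples.
    Uses the classic MM update: p_i = W_i / sum_j n_ij / (p_i + p_j),
    where W_i is item i's total wins and n_ij counts comparisons of i and j.
    """
    p = {i: 1.0 for i in items}          # initial strengths
    wins = {i: 0 for i in items}
    pair_counts = {}                      # comparisons per unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        key = tuple(sorted((winner, loser)))
        pair_counts[key] = pair_counts.get(key, 0) + 1

    for _ in range(iterations):
        new_p = {}
        for i in items:
            denom = 0.0
            for (a, b), n in pair_counts.items():
                if i in (a, b):
                    j = b if a == i else a
                    denom += n / (p[i] + p[j])
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())       # normalise for identifiability
        p = {i: v * len(items) / total for i, v in new_p.items()}
    return p

# Hypothetical judgements among three essays: A beats B twice, A beats C,
# B beats C twice.
essays = ["A", "B", "C"]
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("B", "C")]
strengths = bradley_terry(essays, judgements)
ranked = sorted(essays, key=strengths.get, reverse=True)
```

With these invented judgements the fitted strengths order the essays A, B, C, matching the intuition that A won every comparison and C won none. In a full ACJ system the "adaptive" element also chooses which pair to present next, typically pairing items with similar current strengths to maximise information per judgement.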
