Judging the efficacy of artificial exemplar material

Before any examiner can mark national examinations in England, they must be trained to use the mark scheme in a process called standardisation. The selection of exemplar material from live scripts for use in standardisation is time-consuming and often fails to unearth a full range of candidate responses. This study investigates the possibility of improving the process by generating artificial exemplars in advance of the time-critical period. As a first step, it investigates whether there is any detectable difference in the quality of an exemplar depending upon how it is created. Four conditions are included: artificial exemplars written by the Principal Examiner in advance of the examination being sat; artificial exemplars written by the Principal Examiner after the examination is sat; standardisation exemplars selected in the traditional manner; and randomly selected live exemplars.

The study concludes that there is no apparent difference in the perceived quality of response across conditions. However, exemplars randomly selected from the live scripts may be harder to judge. It is recommended that future research focus on whether artificial exemplars can give rise to comparable, or higher, levels of marking reliability.