Are the days of one-size-fits-all assessments numbered?

Schools might succeed in fostering creativity, innovation and independence in students – but today’s exam system struggles with such individualism, says John Gardner.

Are the days of one-size-fits-all assessments numbered? For some purposes, such as high-stakes tests for assessing competence or for recruitment, the answer is probably no. What is being assessed in these examinations is what matters to someone else (the university, the employer). What is arguably most important to the learner, however, is that assessments give appropriate recognition to their learning and understanding, expressed in their own ways.

Varieties of portfolio assessment have managed to sustain themselves as means of assessing individualised expressions of learning, and they remain strong elements in some subject areas. Granted, portfolio assessment is often tailored to generalised criteria, even to standardised conditions for developing the portfolios (I think here, for example, of the controlled assessment approach in the multimedia portfolios of Goldsmiths College's e-scape project), but the essence of a more flexible assessment regime nevertheless resides quite naturally in the self-expression of a portfolio approach.

The question is: can more conventional examination systems improve considerably on how they accommodate self-expression and individualised learning? Can they assess modern curricula that are much more adaptable than in the past to personalised styles and aspirations in learning?

Square pegs, round holes?

The truth is that even if schools succeed in delivering a learning environment that fosters creativity, innovation, entrepreneurship, autonomy and all the other key needs of 21st-century learning, their students will inevitably face examinations whose basic design has not changed substantially in decades, if not centuries.

Well-designed examinations try to maximise coverage of the learning domain and to capture good information on candidates' knowledge and understanding of the area; but they use templates that have not changed in aeons, the most typical being multiple-choice and structured questions. Less well-thought-out examinations assess a student's performance on a much-reduced subset of the learning domain, dictated by poorly chosen questions. Essays offer some redemption for individualised performance but run the gauntlet of inappropriate subjectivity, and even unreliability, in assessors' judgements.

For me, all of this conjures up an image of the television game show in which an advancing wall has a human shape cut into it. The contestants must fit through the shape by adapting their stance or posture; otherwise the wall pushes them into the water hazard behind them. In the assessment context, each candidate, with their own unique learning 'shape', careers towards the wall of the national examinations. Some smoothly fit the mould of contrived standardisation and some clamber through the wall with individualised 'shapes' expressed in their portfolios. However, some will simply not fit and will bounce off into the hazard of poor outcomes or even failure.

So will more ‘learning shapes’ be accommodated in the future? A recent ESRC/EPSRC Technology Enhanced Learning report highlighted our tendency to assess what is easy to assess, ‘whether someone can follow the rules’, as the authors put it. Tongue in cheek, they argue: ‘For far too long we have all been like the drunk looking for his five-pound note under the lamp post – he knows that this is not where he dropped it, but there is no light to look anywhere else!’ (p. 5). Highlighting new artificial intelligence techniques in particular, they argue that new forms of assessment can better attribute meaning to what students have achieved. I am convinced that new technologies do offer ways forward, and not just in terms of AI. But it will take a quite radical approach to overcome the long-standing inertia that grips assessment of learning in non-high-stakes contexts, and that we will take it, I am not so convinced.

John Gardner is Deputy Principal at the University of Stirling, visiting professor at the University of Oxford Centre for Educational Assessment and former President of the British Educational Research Association.

References:
  1. System Upgrade: Realising the Vision for UK Education (2012). A report from the ESRC/EPSRC Technology Enhanced Learning Research Programme (Director: Richard Noss, London Knowledge Lab). Accessed 23 July 2012.