Measuring The Unmeasurable

27 September 2011

I’m a tidy-minded sort of person. I like to see things sorted. Measured. I don’t like vagueness and loose ends.

So I welcome one of the major changes to affect schools and schoolteachers in recent times: the renewed emphasis on the measuring and tracking of pupils’ progress. Not that this approach ends with the pupils: teachers, too, must be ‘performance managed’. And then, from time to time, the whole school gets assessed and categorised according to its results. The inspectors come in, scrutinise everything and everybody, and give you the equivalent of a mark out of ten.

Having been in contact with schools for most of my life—first as a pupil, then as a teacher and most recently as a school governor—I’m happy to see this insistence on standards. It’s a welcome development from the days when too many teachers breezed through each day with barely a hint of planned lessons, and where the mark they gave a pupil at the end of the year wasn’t far from ‘think of a number’.

But there is a danger in the new passion for standards and record-keeping. It is to do with the fact that some things are more measurable than others. Give little Fiona a reading test on a set text. Count the number of mistakes she makes and deduct that from twenty, or whatever, and you have a meaningful mark. You can say with certainty whether she did better than Richard or Shania. But other areas can’t be so neatly assessed.

Richard, for instance, has a younger brother stricken with leukaemia. The whole family has been disrupted for well over a year by the upset: one parent or the other spending nights at the hospital with the sick child; the worry about whether he will come through and live to see another Christmas; related financial issues; and the focus on the sick child that inevitably sometimes robs Richard of attention.

Everyone agrees that Richard’s progress in school has been adversely affected by all this. But how do you take that into account when it comes to school tests and record-keeping? Yes, Richard would have done better with his maths and reading if he hadn’t had to cope with the trouble at home. So do you add a few percentage points to his actual score to take account of that? If so, how many? Probably you shouldn’t, say the purists; you must record only measurable achievement.

So what about the child whose parents are going through a protracted and acrimonious divorce? Home tensions are high. Nerves are jangling. What allowances, if any, do you make for that child when doing the measuring? And what about the boy whose dad, in his forties, collapsed and died of a stroke at work last month, causing the poor lad’s concentration-levels to plummet? How do you fit him into the measuring system?

Teachers, too, have their ups and downs, and there’s no less of a problem in knowing how to take account of hard-to-measure factors in assessing their progress. This year the Head of Geography is happy because she has seen 78% of pupils manage a Grade A, B or C. But after the summer break the pressure is on for her to raise the percentage next year. Indeed, her incremental pay rise may depend on it.

If the new intake proves to be a ‘good cohort’, that is, one with a high average IQ and mostly from stable homes, she can expect to improve on last year’s figures. But what if it’s a ‘bad cohort’? What if there is a high percentage of not-too-bright pupils, many of whom would never make a Grade A, B or C if they stayed in school till they were twenty? And what if, for some reason, a majority of them happen to come this year from dysfunctional families, with attendant emotional and learning difficulties?

The Head of Geography does her very best. She puts extra hours in. She maybe gives after-school tuition to the most needy students. She motivates her departmental staff as best she can. Everybody works their socks off all year but, in the end, the pass-rate at A, B or C turns out at only 61%.

The members of the Performance Management team frown at the figures before them as the Head of Geography enters the room for her annual assessment. ‘What went wrong?’ they ask. ‘This drop in standards can’t be allowed to pass without censure.’

What is the poor staff member to say? Should she tell it like it is? ‘Well, I’m afraid they were a pretty dim bunch this year—the dimmest for years, in fact. And more than half of them are from seriously screwed-up home situations. All of us in the department have worked our very hardest with them, but you can’t expect us to make a work of art out of duff materials.’

Personally, I like that. It’s honest, and it’s probably a fair assessment. Of course, the teacher would use less bald vocabulary and wrap it all up in bland education-speak. But whether you call a spade a spade or an agricultural implement doesn’t alter the basic situation. And if I were on the committee I’d be voting for the Head of Geography to be granted her incremental rise, even though the measurable results fell short of the goals set for the year. After all, this year she and her staff probably worked harder than ever precisely because of the tough materials they had to work with.

The moral of this tale, I suppose, is that while we do well to measure the measurable, common sense dictates that some key factors in the educational process cannot be measured with precision. And it’s here that a little human warmth and understanding needs injecting into school stats and performance management meetings, to oil the cogs in the school machine.

Without it, the system’s in for a seizure.
