Rubrics warp teaching and assessment
Dec 11, 2015
Men are more apt to be mistaken in their generalizations than in their particular observations.
Machiavelli
Rosen points out that schools teach children to write for exams and that writing for exams is not the same thing as writing well. This is, of course, true; we teach what's assessed, and if the wrong things are assessed it follows that the wrong things are likely to be taught.
If we use mark schemes to work out what or how to teach, we end up with cargo cult teaching and learning. The answer begins with modelling:
Good modelling requires that we share not just the content of a piece of writing but the thinking which underpins it. A few years ago I decided to take some tennis lessons to improve my game after realising I was never going to improve by watching Wimbledon every year. The coach didn't just show me how to play, he told me how to think. We don't learn well from watching experts perform; we need to have their performance broken down and analysed. Although I could replicate the movements required to return a serve, I had no idea what I was doing until my coach taught me to watch my opponents' shoulders instead of the ball. I'm still not much good at tennis because I don't practise enough, but I'm a lot better at watching Wimbledon because I have an idea about how a tennis player thinks.

When students see expertise in action they can mimic it. With scaffolding and plenty of practice they become increasingly expert.
Deconstructing exemplars can be very useful, but possibly the most effective way to share both thinking and outcome is to write a live model in front of a class and speak your thoughts aloud as you go.
Once we've got students to do some writing, how best should we assess it? Typically, we rely on rubrics. But rubrics only contain the superficial, the easily described and the obvious. This is probably why Rosen is affronted by the horrible soup of fronted adverbials, embedded relative clauses, and noun phrases which permeates teachers' understanding of what constitutes good writing. Yet when we actually judge 'real' writing we don't tend to apply any kind of rubric; we either like it or we don't.
To remedy this, Rosen suggests writing ought to be assessed on a three-point scale: very good, good and not so good. This scale should, he reckons, be applied to four areas: first impressions, surprise, sustaining interest and the transformation of source material. These seem pretty reasonable criteria on which to judge writing, and, as an attempt to move away from limiting assessment rubrics, it is laudable.
But Rosen's rubric will be as limiting as any other. Using mark schemes to assess students' work narrows what counts as good writing to what's on the mark scheme. If it's on the rubric, we credit it; if it's not, we don't.
[Image from Greg Ashman's blog: https://gregashman.wordpress.com/2015/11/16/some-comments-on-explanations-in-maths/]

Expert performance depends on huge amounts of tacit knowledge. Because it's tacit, it's very hard to articulate - even (maybe especially) for experts. As Michael Polanyi said, "So long as we use a certain language, all questions that we can ask will have to be formulated in it and will thereby confirm the theory of the universe which is implied in the vocabulary and structure of the language."