I have this foundational belief that technology can make the job of teaching easier in ways we can’t imagine. It’s not such a strange thought. This laptop I’m typing on has pushed the job of “publishing my writing to a disparate audience of readers from my couch” from laughably impossible to laughably easy.
So why shouldn’t new developments in edtech change the job of, say, teaching English from “rewarding but necessitating a martyr complex” to simply “rewarding”?
It will happen. Here’s how.
The William and Flora Hewlett Foundation is holding a contest. (Doesn’t this feel Willy-Wonka-dramatic?) They’ll be awarding $100K to “the designers of software that can reliably automate the grading of essays for state tests,” according to their press release, which I read about in EdSurge. First, a group of vendors who already make this kind of software will demonstrate how good it is; then the contest opens to the public, and whoever develops the best essay-grading software wins the prize money.
It’s a joint project with Tom Vander Ark’s Open Education Solutions. I like what he writes about it here.
Basically, our data-hungry, standards-driven attitude has pushed us to embrace inadequate multiple-choice tests just because they give us easy data. I’m guilty of this at my current school. We have a professional learning team (PLT) of 9th grade English teachers, and we work very hard to develop a common assessment so we can review data together. But it’s a Scantron, and even though I share the blame, since I had input into the test’s creation, I don’t think it’s a great assessment. This year we added a writing component, so I feel better about the semester final we just gave, but it was hard to score those essays before grades were stored at noon yesterday.
From the press release:
“Better tests support better learning,” says Barbara Chow, Education Program Director at the Hewlett Foundation. “Rapid and accurate automated essay scoring will encourage states to include more writing in their state assessments. And the more we can use essays to assess what students have learned, the greater the likelihood they’ll master important academic content, critical thinking, and effective communication.”
While a “greater likelihood” might not sound like a sure thing, that’s quite a payoff, given what business we’re in.
I know English teachers will scoff at the idea of essay-grading software for a long time. How could a program possibly assess the subtleties of argument the way I can? How can it assess a writer’s voice or style or depth of analytical insight the way I can? How can it see how much a particular student has grown as a writer the way I can?
And while I have no idea how the magicians who write code will do it, I know they will. Of course they will! Especially if there are more incentives in the marketplace like the kind this competition is creating. Look at Watson. Look at Google. I remember when those things would have sounded impossible, and I’m not even very old.
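For what it’s worth, the basic shape of these systems isn’t magic. A deliberately crude sketch, where every feature and weight is invented purely for illustration and a real scoring engine would learn far richer features from thousands of human-scored essays:

```python
import re

def extract_features(essay: str) -> dict:
    """Pull a few shallow, surface-level features from an essay.
    These are illustrative stand-ins, not what real engines use."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        # vocabulary diversity: share of words that are distinct
        "unique_ratio": len(set(words)) / len(words) if words else 0.0,
        "avg_sentence_len": len(words) / len(sentences) if sentences else 0.0,
    }

# Hand-picked weights, purely hypothetical -- a production system
# would fit these against a large corpus of human-graded essays.
WEIGHTS = {"word_count": 0.01, "unique_ratio": 2.0, "avg_sentence_len": 0.05}

def score_essay(essay: str) -> float:
    """Combine the features into a single raw score."""
    features = extract_features(essay)
    return sum(WEIGHTS[name] * value for name, value in features.items())
```

The point of the sketch is only that “judge this essay” gets decomposed into measurable signals plus learned weights; the research race is over how subtle those signals can get.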
Speaking of Watson, Ken Jennings had a brilliant take on Kent Brockman:
As an English teacher, I also welcome our new computer overlords. If they free up some time for me so I can work on an exciting new lesson (or spend time with my family) instead of grading papers, great! If they allow standardized tests to evolve into the kinds of assessments that we wouldn’t feel horrible about preparing students to take, great! Welcome to Earth!