By Sean Cavanaugh @ EdWeek’s Marketplace K-12 blog
This recent blog post in EdWeek's Marketplace K-12 blog about the potential savings of machine-scoring writing tests was another entry in an absurd line of thinking. While Cavanaugh is really only reporting here, I keep wondering how this idea rates as worthy of being addressed at all.
There is no question that cost is always an issue in education. Yet savings is not a bottom-line issue as it often is in business, nor is it always a value proposition.
The real issue is far more problematic. What exactly is the message to students when educators say something akin to “Your writing is so unimportant that it is cheaper and easier to have a machine score it”?
To the best of my knowledge, humans have never endeavored to write prose with a machine as the intended audience. What would even be the purpose of doing so? To pass a test of dubious validity anyway?
Somewhere along the line, we lost the plot in thinking that machine scoring of student writing was ever a valid, let alone good, idea in the first place.
Of course, it is surely no coincidence that the organizations requesting this kind of information are all involved in student assessment in some way.
An even more blackly comic notion is the claim that machine scoring of student writing can be done at 20-50% of the cost of human scoring. For how long, exactly? Perhaps the first time, but how long before ever "better" technology is adopted at an even greater cost and, incidentally, a steeper profit?
All the while, students are the ones losing. The demands for data and meaningless scores on even more meaningless tests of writing ability drive teachers to coach students to write for a machine, rather than to communicate with greater sophistication and clarity in the hope of being understood by another human being, which is rather the point of writing anything at all.
What is really saved, and what are the true costs, of machine-scored writing tests? It is all a bit absurd, really.