
Reading & Reacting: Why Americans Can’t Write

By Natalie Wexler @ The Washington Post

One of my instructional heroes is the late Francis Christensen, who controversially observed in 1963 that, when it comes to teaching composition, “we do not really teach our captive charges to write better — we merely expect them to.”

While I humbly submit that those words remain just as true today, I think Wexler’s article misses the mark on a number of counts, despite including a couple of salient points. Moreover, while the essential problem remains a challenge, the way we teach student writers and the way we assess student writing have changed considerably, and not necessarily for the better.

Most importantly, the whole point of view of the article is flawed. As the adage suggests, “You don’t teach writing. You teach student writers.”

Working sequentially through the article, here are some reactions.

First, using NAEP as a metric for evidence is extremely problematic. As an assessment, it is deeply suspect for more reasons than I can address in this response. Still, when you open with that as your grounds, anyone with a deeper knowledge of education should at the least raise an eyebrow.

Second, Common Core’s attempt to spread writing across the curriculum is not the first attempt to do such a thing, not by a long shot. Additionally, there is nothing to suggest that it will be any more successful than previous attempts. In fact, there is no evidence yet that this attempt is even being executed. Early evidence suggests that the Common Core unnecessarily, and foolishly in my opinion, reduces the number of genres students are likely to be asked to write in, which may harm the improvement of instruction more than help it.

Just because something is mandated does not mean it will work; it never has.

What’s more, ill-conceived edreform attempts at standardization have increasingly led to more standardized and formulaic writing, which is not generally associated with quality on any level. As has been pointed out, failing to even entertain how high-stakes standardized testing has impacted instruction, especially the instruction of student writers, is a genuine dereliction.

I would support the claim that many teachers are not specifically trained in teaching writing, with some caveats. My anecdotal experience supports that contention, but only on the broadest of levels. I cannot count the number of teachers outside of an English department who have actually said aloud, “I don’t teach writing. I am just looking for content.”

However, upon some simple questioning, most of these same teachers reveal a lack of confidence as writers themselves or feel ill-equipped to teach writing. Quite a few have even expressed a desire to improve, but they have a number of other pressures to address too. These phenomena appear among English teachers as well. Unfortunately, there can be a lot more expectation than instruction across levels and specialties.

Still, having spent the last decade associated with the National Writing Project, I have witnessed no shortage of teachers eager to learn how to teach student writers better. While a significant proportion of the teachers craving this kind of professional development tend to be English and language arts teachers, an ongoing effort is made to appeal to all kinds of teachers, no matter their area of expertise. It is the only literacy-based, kindergarten-through-university professional development organization I know of, and being involved is well worth the time and effort.

I also support Wexler’s implication that all student writers need to begin learning at the sentence level, first and foremost. However, she does not quite commit. Plus, she follows that point up with comments about five-paragraph essays. Even the slightest suggestion that a five-paragraph essay is in any way connected to good writing, whether as a model, framework, or product, is woefully misguided, in my opinion.

I have routinely challenged students to find a five-paragraph essay “in the wild” that was not written by another student. No one has ever found one. Now, there is a scant chance that one exists, but methinks it is rarer than a condor outside of captivity.

Strangely, even Wexler’s inclusion of Common Core’s demand for “extended writing” negates the premise that five-paragraph essays have any purchase in the discussion whatsoever.

Again, high-stakes standardized testing can be called into question here too, and Wexler ducks this completely. Writing, especially essay-like writing, is almost always used as a primary assessment of learning on standardized tests. That makes sense, in that writing is the coin of the academic realm.

Yet using writing only as an assessment, of course, trickles down to classroom practice, where students continually face writing as some form of assessment. Is it any wonder that students begin to loathe it?

I would even go further and submit that a lot of what student writers are asked to write, in the form of essay assessments of their learning, asks them to ape a level of authority that they neither possess nor are likely to imitate successfully. They are then evaluated and criticized for what they have written, which often compounds the negative impact.

Finally, Wexler’s proposition that “learning to write clearly requires learning to think clearly” reads nicely on the surface, but it too is terribly simplistic. Often writing is the tool for getting to clear thinking, not simply the product of it. As E. M. Forster beautifully captured it, “How do I know what I think until I see what I say?”

Sadly, there is no standardized test much interested in that kind of writing — writing as thinking instead of writing from thinking. Plus, all this reading, writing, and thinking business is awfully murky, recursive, reciprocal, and terribly difficult to parse. It is not always the easiest thing to measure well in a standardized way.

Now that I may have written more than Wexler on the topic, that seems a good enough place to stop.

Readings & Reactions: To Diane Ravitch and Anthony Cody – Really?


By Marc Tucker @ EdWeek’s Top Performers blog

This recent blog post, in which Marc Tucker rebuts Anthony Cody’s previous criticisms of education’s impact on the economy, is a fascinating window into two very different points of view that are more likely talking past one another than to one another.

While I certainly cannot speak for Mr. Cody, I would point to a small but significant distinction between the point I think he was making and the point that Tucker is countering.

It seems to me that Tucker’s rebuttal makes education and schooling synonymous, which is common. However, one part of the wider discussion, which seems to be Mr. Cody’s major endeavor both in his former EdWeek column and beyond, is that education and schooling are not necessarily as synonymous as sometimes believed.

Of course, it is foolish to argue against many of the facts that Tucker offers about income generally being higher for those who complete more schooling, but a much stronger argument could be made that the individuals who complete the various scholastic benchmarks cited begin with an array of advantages that might otherwise enhance their income. This point gets no mention in the column.

I would also add that “higher levels of knowledge, skills and technology” may be a product of higher levels of education, but they are not a guarantee. Ideally they would be. Yet again, education and schooling are not necessarily the same. The educational system, made up of schools, is not the only source of education, nor should it ever be. Employee training programs, to name just one additional source, can also be a form of education that enhances income considerably when done well.

However, many companies cut training and development opportunities to increase their bottom line and satisfy shareholders, while blaming the decline of the educational system for its inability to produce qualified workers.

This raises the spectre of another, wider debate about the purpose of an education and how much of that purpose should be strictly vocational, but that easily exceeds the boundaries of one column. Still, education may be the result of schooling, training, apprenticeship, and the many other opportunities and alternatives that exist beyond what is considered the traditional educational system.

To suggest that there are no places where the existing educational system can be improved would be folly, but admitting as much does not require admitting that the system is failing. Plus, comparing our students to other nations’ students is not without serious flaws either, again far more than would fit in a single column.

It seems to me that Mr. Tucker and Mr. Cody might very well be writing past one another, using common vocabulary but meaning very different things.

Reading & Reacting: Study Examines Cost Savings Through ‘Machine Scoring’ of Tests


cc-licensed (BY-NC) Flickr photo shared by cobalt123

By Sean Cavanagh @ EdWeek’s Marketplace K-12 blog

This recent blog post about the potential savings of machine scoring writing tests on EdWeek’s Marketplace K-12 blog is another entry in an absurd line of thinking. While Cavanagh is really only reporting here, I just keep wondering how this rates as worthy of being addressed at all.

There is no question that cost is always an issue in education. Yet savings is not a bottom-line issue the way it often is in business, nor is it always a value proposition.

The real issue is far more problematic. What exactly is the message to students when educators say something akin to “Your writing is so unimportant that it is cheaper and easier to have a machine score it”?

To the best of my knowledge, humans have never endeavored to write prose with a machine as the intended audience. What would even be the purpose of doing so? To pass a test of dubious validity anyway?

Somewhere along the line, we lost the plot in even thinking that machine scoring of student writing was a valid, let alone a good, idea in the first place.

Of course, there is no irony at all in the fact that the organizations requesting this kind of information are all involved in student assessment in some way.

An even more blackly comic notion is that machine scoring of student writing can be done at 20-50% of the cost of human scoring. For how long, exactly? The first time, maybe, but exactly how long will it take before ever-“better” technology is used at an even greater cost and, incidentally, a steeper profit?

All the while, the students are the ones losing, as the demands for data and meaningless scores on even more meaningless tests of writing ability drive teachers to coach students to write for a machine, rather than endeavor to communicate with greater sophistication and clarity in the hopes of being understood by another human being, which is kind of the point of writing anything anyway.

What is really saved and what are the true costs with machine scored tests for writing? It is all a bit absurd, really.