
Education Evolutions Newsletter #13

It was a little harder to find decidedly positive pieces this week, although some of that is in the eye of the beholder. Hopefully, this selection does not require a dark soundtrack; perhaps it is a bit more like jazz.

Education Evolutions:

Select Readings on Teaching and Learning in the Digital Age

Here are four curated articles about education, technology, and evolutions in teaching.

  • When Finnish Teachers Work in America’s Public Schools – The Atlantic – Timothy D. Walker  (11 minute read)
    Walker is a Massachusetts native living and working in Finland as a teacher. In this piece, he profiles three teachers from Finland now working in American schools and documents their experiences. Given that Finland is widely considered the best school system in the international scoring tables, it is interesting to see their first-hand difficulties with the way our American system is structured. Also interesting is the inclusion of long-time standards advocate Marc Tucker, who writes a regular column for EdWeek. While he is considered an expert in education policies and practices from abroad, Tucker’s warning at the end of the article seems rather alarmist.

  • Why Identity and Emotion Are Central To Motivating the Teen Brain – KQED’s MindShift – Emmeline Zhao  (7 minute read)
    While there might not be anything truly revolutionary in this article, it does a nice job of consolidating a lot of emerging understanding about the adolescent brain. Perhaps its primary value is in a kind of reframing that enables us to see certain kinds of challenges as genuine opportunities. It certainly provides soft support for the notion of students driving a lot of their own learning through setting their own goals involving their own interests, something many high schools have a difficult time embracing institutionally. There is increasingly little doubt that it is a profoundly romantic period in life, in the purest sense.

  • This Is Not An Essay – Modern Learners – Lee Skallerup Bessette  (11 minute read)
    I have a hunch that I read this once upon a time, since it was written in 2014, although it resurfaced recently because it might as well be required reading. I wish I had written this piece myself for so many reasons. Skallerup Bessette gets right to the heart of a dark disservice that we do to students far too often. Rigid, narrow demands and negative reinforcement are just part of a constellation of associations with writing for students, and yet more than ever before they are “writing.” It might not be what teachers want or like but, as Skallerup Bessette observes, “They learn, they teach, they offer their own feedback, they fail, and they try again. And we often actively work in schools to devalue, undermine, and even try to get students to unlearn these skills.” We can meet students where they are or force them to meet us where we are. I know which one I would choose.

  • It Turns Out Spending More Probably Does Improve Education – The New York Times – Kevin Carey and Elizabeth A. Harris  (8 minute read)
    There is an element of this article that strikes a note of cynicism, a well-who-doesn’t-know-that kind of response. Yet the research profiled in this piece provides the kind of substantive data that serves as evidence for the claim. Surprisingly, or maybe not so much, there has been a lot less hard evidence in support of this than we might realize. Of course, the researchers are still using tests as a metric because schools are all about testing, right? Still, what research like this does is support the eye-test, what we see all around us, which can at times be the best kind of research and use of data. Not surprisingly, the requisite charter supporter questions the findings and seems almost dismissive. It frustrates me to no end how often journalists, in an attempt to be “balanced,” include just anyone with an opposing view, regardless of whether they have any warrants for their views.


Reflections on Unit 5: Assessment, Rubrics, and Portfolios


flickr photo shared by ccarlstead under a Creative Commons ( BY ) license

Note: This post is an extended reflection from the EdTech Team’s Teacher Leader Certification Program. I am participating in the initial cohort.

General Thoughts on Assessment and Writing as Assessment

When it comes to assessment, I have to admit that a lot of my thinking is heavily influenced by my background as an English teacher and specifically involves writing instruction. A major tenet I subscribed to when teaching writing was the idea that I did not teach writing as much as I taught young writers.

Consequently, this has always made me highly suspicious of formulaic approaches to assessment and rarely interested in low-hanging fruit, like multiple choice. Writing has always seemed about as authentic an assessment as it gets. Plus, I am not terribly interested in formulaic writers.

Still, show me what a student writes and I can see what he thinks, more or less. Plus, I am one of those who keeps advocating that we expand our notion of what constitutes a text, which means there is a whole mess of possibilities when I use the term writing. I am definitely a fan of Brian Kennedy’s idea that everything is an image, including text. Therefore, school needs to be a place where students learn how to communicate through an array of forms, genres, and purposes. The more practice the better.

Authenticity in Assessment

There are different kinds of authenticity when it comes to assessment. There is the nature and purpose of the task being used to assess, but there is also the actual assessment that a student receives after having completed the task, be it feedback or more. What kind of information students receive interests me quite a bit.

When it came to assessment, the best way I learned how to help and teach young writers was always through interventions that were early and often with a gradual release. I think we teachers often underestimate how hard it can be for students just to get started at anything. So a lot of my approach involves helping more in the earliest stages, showing students a few possible paths, and then encouraging them to pick one and see what happens. So much of that approach ends up being far more formative than summative.

I like to say that I spent over ten years of my teaching career trying to make grades as meaningless as possible, which meant I spent a lot more time giving feedback and a whole lot less time giving grades. I even eschewed putting grades on papers altogether at times. This was not always the most popular approach, but there are a lot of benefits.

One of the main aims of more feedback and fewer grades was to begin giving students the tools to self-assess their own work. I used to say often, “In the long run, self-assessment is the only kind that really matters all that much.” Still do. That does not always play well with high schoolers, but I still believe it. There are plenty of students that get it, too.

They may not be completely autonomous at this moment in their lives, but they know that it is coming. They are in the midst of a major transition and are starting to get a sense of where their true strengths and weaknesses are. As educators, we need to help them identify and play to their strengths. They may need to work on weaknesses, but their strengths will take them far further than work on their weaknesses will. Plus, strong, honest self-assessment can get awfully authentic.


flickr photo shared by tengrrl under a Creative Commons ( BY-SA ) license

Alfie Kohn and Rubrics

When I was getting certified and taking courses, I distinctly remember thinking that Alfie Kohn was ridiculous. The little of his work I read at that time struck me as complete Pollyanna nonsense.

How little I actually understood about anything.

One of my most fascinating transformations over my teaching career involves my response to Alfie Kohn. Having taught for over ten years, I now think just about everything he writes is as sharply focused and accurate as possible. The longer I have been a teacher the more I have sided with him on just about everything.

I had not seen “Why the Best Teachers Don’t Give Tests” prior to this class but could not agree more with the argument he makes. In fact, this article articulated a host of things that I have believed and tried to argue, to little or no avail, for quite a few years. He communicated them far better than I did, no doubt, but I am amazed at how many people passively ignore most of these sentiments.

My favorite among them is his section on rubrics.

I must confess, I had no idea what a rubric was until I began education school as an adult. As I have grown to understand them more deeply, I have come to the conclusion that the rubric is this era’s grading curve.

When I was a kid, students were routinely graded on a curve under a completely misguided application of a tool that works in one context but not another. Of course, when looking at a test performance of say 30,000 students, a bell curve is a very likely distribution of scores. However, in a sample size of 30 or less in a classroom, it is tantamount to malpractice.

Similarly, rubrics are tools to unify scoring across multiple assessors on a standardized, normed test. Again, it is the preferred tool for hundreds of scorers looking at the work of 30,000 students. Interestingly, and somewhat ironically, the rubric scores of multiple assessors for those thousands of students would likely fall into a bell curve.

In most K12 classrooms, a single teacher is grading 30 or fewer students in a far from normed context. Using rubrics in this way is a misuse of the tool. It is an application that does not correspond with its purpose or function. Still, it has not stopped their proliferation.

It makes sense to use them in preparation for the kinds of tests where they are used, like a practice state assessment or AP test. In that way, students can approach that kind of writing almost as a genre task. Yet to use them to assess writing in a class of 20 students is often an invitation for producing formulaic, “standardized writers,” as Kohn highlights when quoting Maja Wilson.

This does not mean that I dismiss rubrics altogether. In truth, the best thing about rubrics, especially in the commonly misused classroom context, involves the process of making them, either alone or with students. Creating a rubric from scratch can be an excellent way to focus on what the most important elements are in a given task. However, that process need not necessarily render a rubric, as they are typically known. Instead, a kind of grading checklist can more than suffice and be useful for teachers and students. It clearly tells the students, “This is what must be included in the work.”

The next part of making a rubric, the descent into categorized levels with scores for accountability purposes, often degenerates into arbitrary parsing and superficial cover for standardized subjectivity. On that level, rubrics become another tool for ranking and sorting students, which is something I have always had very little interest in doing as a teacher.

Digital Portfolios

I began using portfolios within my first year of teaching and never stopped. They can be challenging to manage as a teacher. However, there is no better way to get a sense of what a student knows and can do than by using a portfolio.

When I migrated writing portfolios from analog to digital, the biggest challenge had to do with the drafting and iterative process.

Google Docs’ revision history is not an entirely accurate representation of familiar analog draft versions. Because Google archives essentially on the fly, it is harder to lock down a particular version, at a given moment, as a window into how a piece has evolved.

Digital writing can always be open to modification, which is great, in some ways. Still, getting a sense of a document’s evolution becomes considerably more fluid. Preserving iterations at specific points can be done, of course, but it needs to be planned and adjustments need to be made.
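To make that planning concrete, here is a minimal, hypothetical sketch of the snapshot idea in Python. It uses plain local files rather than Google Docs, and the function and folder names are my own invention, but the principle carries over: an iteration is only preserved if someone deliberately freezes a copy at a chosen moment.

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot_draft(draft_path, snapshot_dir="snapshots"):
    """Copy the current draft into a snapshot folder with a timestamp,
    freezing one iteration of the piece at a deliberate moment."""
    draft = Path(draft_path)
    out_dir = Path(snapshot_dir)
    out_dir.mkdir(exist_ok=True)  # create the snapshot folder on first use
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    # e.g. essay.txt -> snapshots/essay_2017-03-01_141500.txt
    target = out_dir / f"{draft.stem}_{stamp}{draft.suffix}"
    shutil.copy2(draft, target)  # copy2 keeps the file's timestamps
    return target
```

A student or teacher would call this at the end of each drafting stage, so the folder accumulates a fixed series of versions that can sit in a portfolio, independent of the live, ever-changing document.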


flickr photo shared by HereIsTom under a Creative Commons ( BY-NC-ND ) license

Technology Benefits for Assessment

When it comes to using technology in assessment, I want to believe that it is more beneficial but I have my reservations. We are on the threshold of a major turning point in assessment. In the United States, the rush is on to make all major standardized assessments computer-based. So for better or worse, technology-based assessment will become increasingly common.

One of the benefits is the speed that feedback can be delivered, which satisfies an instant gratification itch. Ironically, the big standardized tests have yet to be able to deliver on the promise of faster results in any meaningful way.

There are definitely applications of technology-based assessment that can be effective, particularly on the formative front. When done well, the ability for a teacher to quickly capture some basic data in a fast, external, and retrievable way can really help inform instruction.

Yet, the problem for me is that technology-based assessment privileges certain kinds of assessment over others, making them far more likely to be commonly used. For example, technology has made multiple choice items easier than ever to create, deliver, and score. The trouble is that multiple choice items are not a terribly good way to assess students. We sacrifice assessment quality for expediency. This is not so much a technology-based impulse as it is a market-based one.

Recently, Valerie Strauss in the Washington Post‘s “Should you trust a computer to grade your child’s writing on Common Core tests?” reignited the controversy of computer-based scoring of writing. I have already mentioned that I believe writing to be one of the better forms of assessment. Moreover, I find the notion of computer-based scoring fundamentally flawed.

What truly is communicated to students when educators, at any level, ask them to write something that will not even be read by a human?

I have written about this before and likely will again but that is for another post.

Reading & Reacting: Study Examines Cost Savings Through ‘Machine Scoring’ of Tests


cc licensed ( BY NC ) flickr photo shared by cobalt123

By Sean Cavanaugh @ EdWeek’s Marketplace K-12 blog

This recent blog post in EdWeek’s Marketplace K-12 blog about the potential savings of machine scoring writing tests was another in an absurd line of thinking. While Cavanaugh is really only reporting here, I just keep wondering how this rates as worthy enough even to be addressed.

There is no question that cost is always an issue in education. Yet savings is not a bottom-line issue as it often is in business, nor is it always a value proposition.

The real issue is far more problematic. What exactly is the message to students when educators say something akin to “Your writing is so unimportant that it is cheaper and easier to have a machine score it”?

To the best of my knowledge humans have never endeavored to write prose with the intended audience being a machine. What would be the purpose of doing so even? To pass a test of dubious validity anyway?

Somewhere along the line, we lost the plot in ever thinking machine scoring of student writing was a valid, let alone good, idea.

Of course, there is no irony in the fact that the organizations requesting this kind of information are all involved in student assessment in some way.

An even more blackly comic notion is that machine scoring of student writing can be done at 20-50% of the cost of using humans. For how long, exactly? The first time, maybe, but exactly how long will it take before ever “better” technology is used at an even greater cost and, incidentally, a steeper profit?

All the while, students are the ones losing, as the demands for data and meaningless scores on even more meaningless tests of writing ability drive teachers to coach students to write for a machine rather than endeavor to communicate with greater sophistication and clarity in the hopes of being understood by another human being. That, after all, is kind of the point of writing anything.

What is really saved and what are the true costs with machine scored tests for writing? It is all a bit absurd, really.