
Reflections on Unit 5: Assessment, Rubrics, and Portfolios

flickr photo shared by ccarlstead under a Creative Commons ( BY ) license

Note: This post is an extended reflection from the EdTech Team’s Teacher Leader Certification Program. I am participating in the initial cohort.

General Thoughts on Assessment and Writing as Assessment

When it comes to assessment, I have to admit that a lot of my thinking is heavily influenced by my background as an English teacher, specifically in writing instruction. A major tenet I subscribed to when teaching writing was the idea that I did not teach writing so much as I taught young writers.

Consequently, this has always made me highly suspicious of formulaic approaches to assessment and rarely interested in low-hanging fruit, like multiple choice. Writing has always seemed about as authentic an assessment as it gets. Plus, I am not terribly interested in formulaic writers.

Still, show me what a student writes and I can see what he thinks, more or less. Plus, I am one of those who keep advocating that we expand our notion of what constitutes a text, which means there is a whole mess of possibilities when I use the term writing. I am definitely a fan of Brian Kennedy’s idea that everything is an image, including text. Therefore, school needs to be a place where students learn how to communicate through an array of forms, genres, and purposes. The more practice the better.

Authenticity in Assessment

There are different kinds of authenticity when it comes to assessment. There is the nature and purpose of the task being used to assess, but there is also the actual assessment that a student receives after completing the task, be it feedback or more. What kind of information students receive interests me quite a bit.

When it came to assessment, the best way I learned how to help and teach young writers was always through interventions that were early and often with a gradual release. I think we teachers often underestimate how hard it can be for students just to get started at anything. So a lot of my approach involves helping more in the earliest stages, showing students a few possible paths, and then encouraging them to pick one and see what happens. So much of that approach ends up being far more formative than summative.

I like to say that I spent over ten years of my teaching career trying to make grades as meaningless as possible, which meant I spent a lot more time giving feedback and a whole lot less time giving grades. I even eschewed putting grades on papers altogether at times. This was not always the most popular approach, but there are a lot of benefits.

One of the main aims of more feedback and fewer grades was to begin giving students the tools to strengthen their ability to assess their own work. I used to say often, “In the long run, self-assessment is the only kind that really matters all that much.” Still do. That does not always play well with high schoolers, but I still believe it. There are plenty of students that get it, too.

They may not be completely autonomous at this moment in their lives, but they know that it is coming. They are in the midst of a major transition and are starting to get a sense of where their true strengths and weaknesses are. As educators, we need to help them identify and play to their strengths. They may need to work on weaknesses, but their strengths will take them far further than work on their weaknesses ever will. Plus, strong, honest self-assessment can get awfully authentic.

flickr photo shared by tengrrl under a Creative Commons ( BY-SA ) license

Alfie Kohn and Rubrics

When I was getting certified and taking courses, I distinctly remember thinking that Alfie Kohn was ridiculous. The little of his work I read at that time struck me as complete Pollyanna nonsense.

How little I actually understood about anything.

One of my most fascinating transformations over my teaching career involves my response to Alfie Kohn. Having taught for over ten years, I now think just about everything he writes is as sharply focused and accurate as possible. The longer I have been a teacher the more I have sided with him on just about everything.

I had not seen “Why the Best Teachers Don’t Give Tests” prior to this class but could not agree more with the argument he is making. In fact, this article articulated a host of things that I have believed and tried to argue, to little or no avail, for quite a few years. He communicated them far better than I did, no doubt, but I am amazed at how many people passively ignore most of these sentiments.

My favorite among them is his section on rubrics.

I must confess, I had no idea what a rubric was until I began education school as an adult. As I have grown to understand them more deeply, I have come to the conclusion that the rubric is this era’s grading curve.

When I was a kid, students were routinely graded on a curve under a completely misguided application of a tool that works in one context but not another. Of course, when looking at a test performance of say 30,000 students, a bell curve is a very likely distribution of scores. However, in a sample size of 30 or less in a classroom, it is tantamount to malpractice.

Similarly, rubrics are tools to unify scoring across multiple assessors on a standardized, normed test. Again, it is the preferred tool for hundreds of scorers looking at the work of 30,000 students. Interestingly, and somewhat ironically, the rubric scores of multiple assessors for those thousands of students would likely fall into a bell curve.
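The sample-size point is easy to check for yourself. Here is a quick back-of-the-envelope simulation (not from the original post; the 50-item test and 70% success rate are illustrative assumptions) showing why a bell curve is a safe bet for 30,000 test-takers but not for a single class of 30:

```python
import random
import statistics

random.seed(42)

def test_score(n_items=50, p=0.7):
    """Score = number of correct answers out of n_items independent items."""
    return sum(1 for _ in range(n_items) if random.random() < p)

# Large-scale administration: 30,000 test-takers. The sum of many
# item-level successes is what makes the distribution bell-shaped.
large = [test_score() for _ in range(30_000)]

# A single classroom: 30 students.
small = [test_score() for _ in range(30)]

print(f"n=30000  mean={statistics.mean(large):.2f}  sd={statistics.stdev(large):.2f}")
print(f"n=30     mean={statistics.mean(small):.2f}  sd={statistics.stdev(small):.2f}")

# Resample many "classes" of 30 from the large pool: the class means
# swing noticeably, so forcing any one class onto a fixed curve
# misrepresents that particular group.
class_means = [statistics.mean(random.sample(large, 30)) for _ in range(200)]
print(f"class-of-30 means range from {min(class_means):.1f} to {max(class_means):.1f}")
```

The large pool's summary statistics are stable run after run; the class-of-30 numbers are not, which is exactly the problem with curving a classroom.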

In most K12 classrooms, a single teacher is grading 30 or fewer students in a far from normed context. Using rubrics in this way is a misuse of the tool. It is an application that does not correspond with its purpose or function. Still, it has not stopped their proliferation.

It makes sense to use them in preparation for the kinds of tests where they are used, like a practice state assessment or AP test. In that way, students can approach that kind of writing almost as a genre task. Yet to use them to assess writing in a class of 20 students is often an invitation to produce formulaic, “standardized writers,” as Kohn highlights when quoting Maja Wilson.

This does not mean that I dismiss rubrics altogether. In truth, the best thing about rubrics, especially in the commonly misused classroom context, involves the process of making them, either alone or with students. Creating a rubric from scratch can be an excellent way to focus on what the most important elements are in a given task. However, that process need not necessarily render a rubric as they are typically known. Instead, a kind of grading checklist can more than suffice and be useful for teachers and students. It clearly tells the students, “This is what must be included in the work.”

The next part of making a rubric, the descent into categorized levels for accountability purposes, with scores and such, often degenerates into arbitrary parsing and superficial cover for standardized subjectivity. At that level, rubrics become another tool for ranking and sorting students, which is something I have always had very little interest in doing as a teacher.

Digital Portfolios

I began using portfolios within my first year of teaching and never stopped. They can be challenging to manage as a teacher. However, there is no better way to get a sense of what a student knows and can do than by using a portfolio.

When I migrated writing portfolios from analog to digital, the biggest challenge had to do with the drafting and iterative process.

Google Docs draft history is not an entirely accurate representation of the familiar analog draft versions. Since Google archives essentially on the fly, it is harder to lock down a particular version, at a given moment, as a window into how a piece has evolved.

Digital writing can always be open to modification, which is great in some ways. Still, getting a sense of a document’s evolution becomes considerably more fluid. Preserving iterations at specific points can be done, of course, but it needs to be planned and adjustments need to be made.
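One low-tech way to plan those preserved iterations is to freeze a timestamped copy of a draft at each deliberate checkpoint, rather than relying on an always-on revision history. A minimal local-file sketch (the folder name and filenames are hypothetical, standing in for whatever copy-the-document routine one uses in practice):

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(doc_path, archive_dir="portfolio_snapshots"):
    """Freeze a copy of a draft at a deliberate checkpoint.

    Unlike an on-the-fly revision history, each snapshot is a named,
    immutable milestone that a student or teacher chose to preserve.
    """
    src = Path(doc_path)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves the file's timestamps
    return dest

# Usage: snapshot("narrative-draft.txt") before each round of feedback,
# and the archive folder becomes the draft history of the piece.
```

The same idea carries over to Google Docs: making a dated copy of the document at each milestone gives the locked-down versions the running revision history does not.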

flickr photo shared by HereIsTom under a Creative Commons ( BY-NC-ND ) license

Technology Benefits for Assessment

When it comes to using technology in assessment, I want to believe that it is more beneficial but I have my reservations. We are on the threshold of a major turning point in assessment. In the United States, the rush is on to make all major standardized assessments computer-based. So for better or worse, technology-based assessment will become increasingly common.

One of the benefits is the speed that feedback can be delivered, which satisfies an instant gratification itch. Ironically, the big standardized tests have yet to be able to deliver on the promise of faster results in any meaningful way.

There are definitely applications of technology-based assessment that can be effective, particularly on the formative front. When done well, the ability for a teacher to quickly capture some basic data in a fast, external, and retrievable way can really help inform instruction.

Yet, the problem for me is that technology-based assessment privileges certain kinds of assessment over others, making them far more likely to be commonly used. For example, technology has made multiple choice items easier than ever to create, deliver, and score. The trouble is that multiple choice items are not a terribly good way to assess students. We sacrifice assessment quality for expediency. This is not so much a technology-based impulse as it is a market-based one.

Recently, Valerie Strauss in the Washington Post‘s “Should you trust a computer to grade your child’s writing on Common Core tests?” reignited the controversy of computer-based scoring of writing. I have already mentioned that I believe writing to be one of the better forms of assessment. Moreover, I find the notion of computer-based scoring fundamentally flawed.

What truly is communicated to students when educators, at any level, ask them to write something that will not even be read by a human?

I have written about this before and likely will again but that is for another post.

Ruminations on Assessment as Learning

Photo: framed

framed – cc licensed ( BY NC ND ) flickr photo by eyemage

As I wrap up my Beyond Letter Grades experience, my last badge effort involves contemplating assessment as learning, which I must confess is a bit of a slippery subject. It overlaps so much with terms like assessment for learning and assessment of learning that it is pretty easy for them to start blending together. Honestly, I am not sure that I see enough difference between as and for to make a significant case for them being separated.

Modifying Portfolio Assessment

For years I have employed a writing portfolio as the single most important task of my classes. As I have changed schools, schedules, and students, it is one thing that has remained in place as part of my practice. In this sense it is less a lesson and more an assessment. However, it has remained a fairly foreign concept to most of my students and requires definite preparation, which takes the form of a series of short lessons. It is always a bit onerous to tackle in a single one.

On a superficial level I modify the portfolio requirements all the time depending on what the students have accomplished over the course of the semester. Unfortunately, the school where I now teach uses a semester-based system, which means that there is some minor potential turnover of students at the break every year.

Semester vs. Year

Consequently, I ask for a portfolio at the end of each semester, although I feel the results were better when I worked on a year-long schedule. With a year-long portfolio, there is a much longer developmental arc and the thread of learning can be more consistent over that time.

For me, as well as in my observation of students, semesters tend to truncate the natural flow of the school year, compressing desired outcomes into even more tightly bound boxes, which may or may not be reasonable for some students. By the time a high school student has adapted and begun making deep progress, the semester is over and a new one has begun. I have always felt that it takes most students about two-thirds to three-quarters of the year to be operating at their peak level. Shortly after that is the sweet spot, where I have always looked to get the best assessment of learning. Prior to that it is all about feedback loops and improvement.

Nevertheless, I use a semester portfolio, which includes a reflection on the selections and the process of creating them, which I wrote about for the self assessment module. Yet, I have always felt that this task needs more scaffolding to better reach students at a variety of different ages, levels, and abilities. This unit, in conjunction with a handful of others, got me thinking about how to do just that. I think the answer may be through a lens of assessment as learning, a series of scaffolded student experiences.

Adjusting the Assessment Lens

Photo: Lens (160/365)

Lens (160/365) – cc licensed ( BY SA ) flickr photo by Andy Rennie

In essence Beyond Letter Grades has already sparked this change. Building on the work from the self assessment badge, I will ask students to engage in a series of self assessments that will grow in depth and complexity.

Students will get their first formal formative self assessment experience through a closer look at the main “summative” task in the narrative unit they are completing. I have explained this particular plan in greater detail elsewhere, but here is the quick summary. Students have two drafts of a long narrative they have composed, one completed before and one completed after a round with a peer response group. The amount of feedback each student receives varies, but all groups include three students.

Considering the limits of time and peer feedback, the differences between the two drafts will be somewhat limited, and thus easier to identify and explain. Students were also given a rubric by which the narrative will be assessed, to use as an additional reason for making changes. I will ask students to highlight the changes between the two drafts and explain what prompted the revisions. Previously I was only contemplating this move. Now I am committed to it. This should take about half a class session.

Additionally, within a couple of days of this first experience, I will present students with both a pre-test and post-test narrative assessment and ask them to identify the changes they can observe between the two pieces. This is a more complex task, given the length of time between the two compositions and the number of potential technical areas of growth. Also, there is no group feedback for this task. However, a rubric will again assist the identification of changes. Similarly, students will be asked to identify what has changed and improved, as well as what they believe the reasons are for the changes. My hope is that this experience will not require a full class session, but it certainly could.

As I transition students to a more expository writing focus, I will repeat a similar comparative methodology. Having saved a brief expository sample from each student a couple of weeks ago, I will return it to students and ask them to assess their own sample using specific criteria. I will then give them another copy of the sample assessed by me using the same criteria. Again, students will be asked to compare the pieces, identifying the differences. This task’s complexity increases by having students then rewrite the sample, as well as document why they made certain changes. This is probably a full class period’s worth of work.

These three formative experiences should be preparatory for the kind of self assessment I am hoping to see when they assemble and submit their portfolio of revised pieces they have selected to best show their learning. I suspect that there may be one or two more experiences along the way that will assist, but I will have to wait and see what emerges from looking at student work through this new lens.

Additionally, I have to remain sensitive to the students’ needs and progress. While I want them to have a few reps of self assessment in hopes of building a deeper, more reflective disposition, I do not want to fatigue them on the concept. If I cannot find ways to increase the complexity of the task or reflection, I probably need not add another rep.

Concluding Thoughts

As I see it, the key to cultivating assessment as learning is framing activities in the course around different types of formative and summative feedback, being prepared to transform any summative assessment to a formative one when needed, and scaffolding self assessment in such a way that students gain a deeper capacity for reflecting on their own work and processes.

More than anything, reading the excerpt from Rethinking Classroom Assessment with Purpose in Mind and ruminating on how to apply some of the principles has led me to believe assessment as learning may be more about creating a cultural disposition in class, one that both encourages and honors students’ monitoring, assessing, and ultimately evaluating their own performance. It has to become a habit of mind and regular practice for it to be successfully realized.

The Only Assessment that Matters

Photo: Inverse #2

Inverse 2 – cc licensed ( BY NC ND ) flickr photo by Andy Houghton

A fellow National Writing Project colleague and friend, Paul Allison, and I were talking once upon a time, when he posed a question very close to this: “In the end, self-assessment is the only assessment that really matters, isn’t it?” That may not be exactly what he said, but that is how I like to remember it. Plus, it certainly captures the spirit of the brief exchange. The sentiment resonated so strongly with me that it has remained ever since.

We all must live with ourselves an awfully long time, longer than anyone else has to live with us. That’s for sure. It is not uncommon for me to share comments like these and stress the importance of reflection and self-assessment with my students.

A Brief Anecdote on Student Self-Assessment

A few years ago, I received the most remarkable student self-assessment I have ever read, as part of an end-of-semester writing portfolio. I have to admit being a little disconcerted when I saw myself quoted in a student paper, but this student simply gets it, and gets it on a deeper level than I ever would have imagined. It also seemed to highlight a lot of the issues that have been shared and discussed in this MOOC. Here is an excerpt.

Through the course of the year, I have been writing down bits of conversations, words, and tips that I have heard in English class. Some are funny, some are weird, and some really stick with me. On October 28th, you said, “[Self-assessment] is really the only assessment that matters.” Is it? Through the course of the year, I grew more and more at home with this statement. If I know I am doing the best I can, then everything else is secondary. “Any time you’re focused on the grade, you are off target,” you said on February 14th [and has] always been a hard concept for me to wrap my head around. Through the year, though, these quotes bloomed into significant meaning. Whenever I write, like now for instance, it needs to just be the best I can do. My goal is to make my point and prove it in my writing, not simply to reach 600 words. This is a way that I have grown as both a student and a person, because as my mindset in school shifted, so did my outlook on the rest of my life.

Keep in mind this is from a former ninth grade student. It remains my favorite, most fascinating student self-assessment I have ever received. It broke all expectations. In fact, reading something like this, written by a student, makes a lot of the slogging through drafts as an English teacher a whole lot less daunting.

My Latest Plan for a Self-Assessment

I am about to wrap a narrative writing unit with my ninth grade students, which I have already mined for examples for Beyond Letter Grades. Heavily influenced by George Hillocks’ Narrative Writing: Learning a New Model for Teaching, I have been using a lot of the methodology outlined in that title ever since reading it.

For a pre-test audit, written in a one-hour class, students were given the following prompt right from Hillocks: Write a story about an event that is important to you for some reason. Write about it in as much detail as you can so that someone reading it will be able to see what you saw and feel what you felt.

This week students will submit their anchor summative assignment, which they have had a couple of weeks to develop. Later in the week, they will take the post-test, another hour in class writing task, with the same pre-test prompt. In between, they have completed a handful of what I like to call rehearsal assignments, practicing specific narrative techniques listed in this rubric, also something I have adapted from Hillocks.

I have deliberately kept only a handful of broad categories to be assessed. Using this rubric, I already scored the pre-test, and will also use it to score the summative narrative task and the post-test.

Prior to assigning the summative narrative task, I issued and reviewed the rubric with students, in an effort to key them explicitly into the skills and techniques I am hoping they will demonstrate, beyond routinely highlighting them in classroom instruction and various reading selections.

Once they have done a round of peer feedback, submitted the summative narrative task, and completed the post-test, I am going to have students conduct a self-assessment.

  1. I will ask each student to score their summative narrative task with the rubric, prior to submitting it.
  2. I will hand each student their pre-test and ask them to score it with the rubric.
  3. I will hand each student their post-test and again ask them to score it with the rubric.
  4. I will then ask them to write narrative feedback about the difference between the two scores, specifically focusing on what they have identified as improvement and why.
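At bottom, steps two through four amount to comparing two rubric scorings of the same writer and narrating the difference. A minimal sketch of that bookkeeping (the category names and the 0–4 scale are hypothetical, not the actual Hillocks-adapted rubric):

```python
# Hypothetical rubric categories and 0-4 scale, for illustration only.
CATEGORIES = ["detail", "dialogue", "inner thought", "figurative language"]

def score_delta(pre, post):
    """Return the per-category change and total growth between two scored drafts."""
    deltas = {cat: post[cat] - pre[cat] for cat in CATEGORIES}
    return deltas, sum(deltas.values())

# One student's (made-up) pre-test and post-test rubric scores.
pre = {"detail": 1, "dialogue": 2, "inner thought": 1, "figurative language": 0}
post = {"detail": 3, "dialogue": 3, "inner thought": 2, "figurative language": 1}

deltas, total = score_delta(pre, post)
for cat, change in deltas.items():
    print(f"{cat}: {change:+d}")   # e.g. "detail: +2"
print(f"total growth: {total:+d}")
```

The per-category deltas are what the students' narrative feedback in step four would explain: where the growth showed up, and why they think it happened.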

I am considering sharing the scores I gave each student on both the pre-test and post-test, and asking them to consider any potential discrepancies between their scores and mine, but I am still undecided on this point.

Turning Summative into Formative

Since I have students complete an end-of-semester writing portfolio, this exercise will be good preparation for a more general, reflective self-assessment that accompanies the portfolio, like the student excerpt included above. Keeping with a broader strategy of looping many of the tasks and skills over the length of the course, this narrative self-assessment becomes a rehearsal for the portfolio one.

All three assessments then become fair game for revision, thus transforming a summative assessment into a formative one. Students may choose which piece they would ultimately like to include in the portfolio. Since each one is a story, the difficulty becomes deciding which story they want to revise, develop further, and include as the best of the narrative bunch, alongside the other modes and genres that comprise the portfolio.


In the end, I am blending a number of concepts celebrated in this class in my teaching practice, sometimes in a number of simultaneous ways. Occasionally, I wonder if it can become too complicated for my students. However, the only thing I am truly concerned about is that students are able to learn, improve, and demonstrate their learning in a few different ways. This is also a message that I repeatedly try to impress upon them over the length of the course.

Attempting many alternative assessment methods requires a pretty substantial initial investment of time and energy in developing relationships, setting expectations, and building trust. It may be a bit ambitious, but I can say that the results have been relatively successful, especially as I continue to refine and advance my reasons, approach, and methods.