
Reflections on Unit 5: Assessment, Rubrics, and Portfolios


flickr photo shared by ccarlstead under a Creative Commons ( BY ) license

Note: This post is an extended reflection from the EdTech Team’s Teacher Leader Certification Program. I am participating in the initial cohort.

General Thoughts on Assessment and Writing as Assessment

When it comes to assessment, I have to admit that a lot of my thinking is heavily influenced by my background as an English teacher and specifically involves writing instruction. A major tenet I subscribed to when teaching writing was the idea that I did not teach writing as much as I taught young writers.

Consequently, this has always made me highly suspicious of formulaic approaches to assessment and rarely interested in low-hanging fruit, like multiple choice. Writing has always seemed about as authentic an assessment as it gets. Plus, I am not terribly interested in formulaic writers.

Still, show me what a student writes and I can see what he thinks, more or less. Plus, I am one of those who keep advocating that we expand our notion of what constitutes a text, which means there is a whole mess of possibilities when I use the term writing. I am definitely a fan of Brian Kennedy’s idea that everything is an image, including text. Therefore, school needs to be a place where students learn how to communicate through an array of forms, genres, and purposes. The more practice the better.

Authenticity in Assessment

There are different kinds of authenticity when it comes to assessment. There is the nature and purpose of the task being used to assess, but there is also the actual assessment that a student receives after having completed the task, be it feedback or more. What kind of information students receive interests me quite a bit.

When it came to assessment, the best way I learned how to help and teach young writers was always through interventions that were early and often with a gradual release. I think we teachers often underestimate how hard it can be for students just to get started at anything. So a lot of my approach involves helping more in the earliest stages, showing students a few possible paths, and then encouraging them to pick one and see what happens. So much of that approach ends up being far more formative than summative.

I like to say that I spent over ten years of my teaching career trying to make grades as meaningless as possible, which meant I spent a lot more time giving feedback and a whole lot less time giving grades. I even eschewed putting grades on papers altogether at times. This was not always the most popular approach, but there are a lot of benefits.

One of the main aims for more feedback and fewer grades was to begin giving students the tools to strengthen their ability to assess their own work. I used to say often, “In the long run, self-assessment is the only kind that really matters all that much.” Still do. That does not always play well with high schoolers but I still believe it. There are plenty of students who get it, too.

They may not be completely autonomous at this moment in their lives but they know that it is coming. They are in the midst of major transition and are starting to get a sense of where their true strengths and weaknesses are. As educators, we need to help them identify and play to their strengths. They may need to work on weaknesses but their strengths will take them far further than the work on their weaknesses. Plus, strong, honest self-assessment can get awfully authentic.


flickr photo shared by tengrrl under a Creative Commons ( BY-SA ) license

Alfie Kohn and Rubrics

When I was getting certified and taking courses, I distinctly remember thinking that Alfie Kohn was ridiculous. The little of his work I read at that time struck me as complete Pollyanna nonsense.

How little I actually understood about anything.

One of my most fascinating transformations over my teaching career involves my response to Alfie Kohn. Having taught for over ten years, I now think just about everything he writes is as sharply focused and accurate as possible. The longer I have been a teacher the more I have sided with him on just about everything.

I had not seen “Why the Best Teachers Don’t Give Tests” prior to this class but could not agree more with the argument he is making. In fact, this article articulated a host of things that I have believed and tried to argue, to little or no avail, for quite a few years. He communicated them far better than I did, no doubt, but I am amazed at how many people passively ignore most of these sentiments.

My favorite among them is his section on rubrics.

I must confess, I had no idea what a rubric was until I began education school as an adult. As I have grown to understand them more deeply, I have come to the conclusion that the rubric is this era’s grading curve.

When I was a kid, students were routinely graded on a curve under a completely misguided application of a tool that works in one context but not another. Of course, when looking at a test performance of say 30,000 students, a bell curve is a very likely distribution of scores. However, in a sample size of 30 or less in a classroom, it is tantamount to malpractice.

Similarly, rubrics are tools to unify scoring across multiple assessors on a standardized, normed test. Again, it is the preferred tool for hundreds of scorers looking at the work of 30,000 students. Interestingly and somewhat ironically, the rubric scores of multiple assessors for those thousands of students would likely fall into a bell curve.

In most K12 classrooms, a single teacher is grading 30 or fewer students in a far from normed context. Using rubrics in this way is a misuse of the tool. It is an application that does not correspond with its purpose or function. Still, it has not stopped their proliferation.

It makes sense to use them in preparation for the kinds of tests where they are used, like a practice state assessment or AP test. In that way, students can approach that kind of writing almost as a genre task. Yet to use them to assess writing in a class of 20 students is often an invitation for producing formulaic, “standardized writers,” as Kohn highlights when quoting Maja Wilson.

This does not mean that I dismiss rubrics altogether. In truth, the best thing about rubrics, especially in the commonly misused classroom context, involves the process of making them, either alone or with students. Creating a rubric from scratch can be an excellent way to focus on what the most important elements are in a given task. However, that process need not necessarily render a rubric as they are typically known. Instead, a kind of grading checklist can more than suffice and be useful for teachers and students. It clearly tells the students, “This is what must be included in the work.”

The next part of making a rubric, the descent into categorizing levels for accountability purposes with scores and such, often degenerates into arbitrary parsing and superficial cover for standardized subjectivity. On that level, rubrics become another tool for ranking and sorting students, which is something I have always had very little interest in doing as a teacher.

Digital Portfolios

I began using portfolios within my first year of teaching and never stopped. They can be challenging to manage as a teacher. However, there is no better way to get a sense of what a student knows and can do than by using a portfolio.

When I migrated writing portfolios from analog to digital, the biggest challenge had to do with the drafting and iterative process.

Google Docs revision history is not an entirely accurate representation of the familiar analog draft versions. Because Google archives changes essentially on the fly, it is harder to get a snapshot at a particular moment, to lock down a particular version as a window into how a piece has evolved.

Digital writing can always be open to modification, which is great, in some ways. Still, getting a sense of a document’s evolution becomes considerably more fluid. Preserving iterations at specific points can be done, of course, but it needs to be planned and adjustments need to be made.


flickr photo shared by HereIsTom under a Creative Commons ( BY-NC-ND ) license

Technology Benefits for Assessment

When it comes to using technology in assessment, I want to believe that it is more beneficial but I have my reservations. We are on the threshold of a major turning point in assessment. In the United States, the rush is on to make all major standardized assessments computer-based. So for better or worse, technology-based assessment will become increasingly common.

One of the benefits is the speed that feedback can be delivered, which satisfies an instant gratification itch. Ironically, the big standardized tests have yet to be able to deliver on the promise of faster results in any meaningful way.

There are definitely applications of technology-based assessment that can be effective, particularly on the formative front. When done well, the ability for a teacher to quickly capture some basic data in a fast, external, and retrievable way can really help inform instruction.

Yet, the problem for me is that technology-based assessment privileges certain kinds of assessment over others, making them far more likely to be commonly used. For example, technology has made multiple choice items easier than ever to create, deliver, and score. The trouble is multiple choice items are not a terribly good way to assess students. We sacrifice assessment quality for expediency. This is not so much a technology-based impulse as it is a market-based one.

Recently, Valerie Strauss in the Washington Post‘s “Should you trust a computer to grade your child’s writing on Common Core tests?” reignited the controversy of computer-based scoring of writing. I have already mentioned that I believe writing to be one of the better forms of assessment. Moreover, I find the notion of computer-based scoring fundamentally flawed.

What truly is communicated to students when educators, at any level, ask them to write something that will not even be read by a human?

I have written about this before and likely will again but that is for another post.

Ruminations on Assessment as Learning


framed – cc licensed ( BY NC ND ) flickr photo by eyemage

As I wrap up my Beyond Letter Grades experience, my last badge effort involves contemplating assessment as learning, which I must confess is a bit of a slippery subject. It overlaps so much with terms like assessment for learning and assessment of learning that it is pretty easy for them to start blending together. Honestly, I am not sure that I see enough difference between as and for to make a significant case for them being separated.

Modifying Portfolio Assessment

For years I have employed a writing portfolio as the single most important task of my classes. As I have changed schools, schedules, and students, it is one thing that has remained in place as part of my practice. In this sense it is less a lesson and more an assessment. However, it has remained a fairly foreign concept to most of my students and requires definite preparation, which takes the form of a series of short lessons. It is always a bit onerous to tackle in a single lesson.

On a superficial level I modify the portfolio requirements all the time depending on what the students have accomplished over the course of the semester. Unfortunately, the school where I now teach uses a semester-based system, which means that there is some minor potential turnover of students at the break every year.

Semester vs. Year

Consequently, I ask for a portfolio at the end of each semester, although I feel like the results were better when I worked with a year-long schedule. With a year-long portfolio, there is a much longer developmental arc and the thread of learning can be more consistent over that time.

For me, as well as in my observation of students, semesters tend to truncate the natural flow of the school year, compressing desired outcomes into even more tightly bound boxes, which may or may not be reasonable for some students. By the time a high school student has adapted and begun making deep progress, the semester is over and a new one has begun. I have always felt that it takes most students about two-thirds to three-quarters of the year to be operating at their peak level. Shortly after that is the sweet spot, where I have always looked to get the best assessment of learning. Prior to that it is all about feedback loops and improvement.

Nevertheless, I use a semester portfolio, which includes a reflection on the selections and the process of creating them, which I wrote about for the self assessment module. Yet, I have always felt that this task needs more scaffolding to better reach students at a variety of different ages, levels, and abilities. This unit, in conjunction with a handful of others, got me thinking about how to do just that. I think the answer may be through a lens of assessment as learning, a series of scaffolded student experiences.

Adjusting the Assessment Lens


Lens (160/365) – cc licensed ( BY SA ) flickr photo by Andy Rennie

In essence Beyond Letter Grades has already sparked this change. Building on the work from the self assessment badge, I will ask students to engage in a series of self assessments that will grow in depth and complexity.

Beginning with a closer self assessment of the main “summative” task in the narrative unit students are completing, students will get their first formal formative self assessment experience. Though I have explained this particular plan in greater detail elsewhere, here is the quick summary. Students have two drafts of a long narrative they have composed, one completed before and one completed after a round with a peer response group. The amount of feedback each student receives varies, but all groups include three students.

Considering the limits of time and peer feedback, the differences between the two drafts will be somewhat limited, which makes the changes easier to identify and explain. Students were also given a rubric by which the narrative will be assessed, to use as an additional reason for making changes. I will ask students to highlight the changes between the two drafts and explain what prompted the revisions and why they were made. Previously I was only contemplating this move. Now I am committed to it. This should take about half a class session.

Additionally, within a couple of days of this first experience, I will present students with both a pre-test and post-test narrative assessment and ask them to identify the changes they can observe between the two pieces. This is a more complex task given the length of time between the two compositions and the number of potential technical areas of growth. Also, there is no group feedback for this task. However, a rubric will again assist the identification of changes. Similarly, students will be asked to identify what has changed and improved, as well as what they believe the reasons are for the changes. My hope is that this experience will not require a full class session but it certainly could.

As I transition students to a more expository writing focus, I will repeat a similar comparative methodology. Having saved a brief expository sample from each student a couple of weeks ago, I will return it to students and ask them to assess their own sample using a specific set of criteria. I will then give them another copy of the sample assessed by me using the same criteria. Again, students will be asked to compare the pieces, identifying the differences. This task’s complexity will increase by having students then rewrite the sample, as well as document why they made certain changes. This is probably a full class session’s worth of work.

These three formative experiences should be preparatory for the kind of self assessment I am hoping to see when they assemble and submit their portfolio of revised pieces they have selected to best show their learning. I suspect that there may be one or two more experiences along the way that will assist, but I will have to wait and see what emerges from looking at student work through this new lens.

Additionally, I have to remain sensitive to the students’ needs and progress. While I want them to have a few reps of self assessment in hopes of building a deeper, more reflective disposition, I do not want to fatigue them on the concept. If I cannot find ways to increase the complexity of the task or reflection, I probably need not add another rep.

Concluding Thoughts

As I see it, the key to cultivating assessment as learning is framing activities in the course around different types of formative and summative feedback, being prepared to transform any summative assessment to a formative one when needed, and scaffolding self assessment in such a way that students gain a deeper capacity for reflecting on their own work and processes.

More than anything, reading the excerpt from Rethinking Classroom Assessment with Purpose in Mind and ruminating on how to apply some of the principles has led me to believe assessment as learning may be more about creating a cultural disposition in class, one that both encourages and honors students’ monitoring, assessing, and ultimately evaluating their own performance. It has to become a habit of mind and a regular practice for it to be successfully realized.

Contemplating Student Choice in Learning

In one of our recent teacher meetings I riffed off a classmate’s phrasing when talking about students and choices, coming up with “It’s hard to have a voice if you don’t have a choice.” It ran out of my mouth before I could really stop it. I think it’s because I really believe it. Plus, since becoming a teacher, I have succeeded and failed at affording students choice a lot, but I keep trying to find ways that allow and encourage student choice.

This is not always an easy objective. Certainly in more traditionally conservative pedagogical models, there is little room for student choice. In the English department, choice is often relegated to elective classes like creative writing. Yet even in elective classes, apart from choosing the course, there can be remarkably few options in terms of assignments or lines of inquiry. Fortunately, there seems to be a groundswell of change in the air in the field of education. All I can say is that it is a long time coming.

I attended graduate school in pursuit of a license to teach at a university deeply influenced by progressive, constructivist theory. Student choice was something we regularly discussed in classes. It seemed like this was the way things were to be. Upon entering the profession I was surprised at how few opportunities students had to demonstrate their understanding in self-selected ways. Of course, making arrangements for this kind of assessment is a challenge, and I am by no means an expert at doing so. However, I continually challenge myself to find ways to accommodate student choice in my practice.

On a grand scale, the first element of choice entered my practice almost as soon as I started teaching, when I began employing a portfolio approach with my English classes. While there are some universal elements, such as formatting and self-reflection, that all students complete, students are required to choose samples that best represent their work in the class. This portfolio of work typically contains pieces that have gotten feedback and been revised multiple times. The portfolio is the single most significant grade for any given term. I do this so the student controls what is included and what is subject to such a major assessment. I have been using portfolios in nearly every class I have taught since the second semester I began teaching.

Still, a portfolio is only one aspect of choice. Often it means selecting from a lot of options that are similar for every individual. The outputs from given assignments may not always vary as much as I would like. To me the real challenge has always involved building options into individual assignments, which has proven far trickier.

On an assignment scale, one of my favorite examples of student choice is the I-Search paper. Originally created by the late Ken Macrorie in his 1988 book The I-Search Paper, which he called the first context book, it is a writing-to-learn spin on the typical research paper assignment. I-Search is a framework that focuses on student interests and the process of real research, the kind researchers do. Part narrative and part inquiry, students must develop their own research question that guides their investigation and answer. It took me three years to convince my colleagues to implement it in my current school, replacing a previous literature-based research paper that produced semi-plagiarized faux research and a hackneyed paper.

What is encouraging is that the trend toward student choice seems to have gained genuine momentum in the last year or so, particularly in the realm of English. More and more references to student choice are being presented in the field. In fact, while I was in Orlando for the Annual Meeting of the National Writing Project, an educational organization that has long advocated student choice, a colleague of mine mentioned that it seemed to be the theme of nearly all of the presentations she had seen at the simultaneously occurring National Council of Teachers of English Convention.

So there is definite reason for hope. Nevertheless, I will keep trying regardless of the trend.