
Exploring Competency-Based Assessment

Photo: March 16, 2004 - Rough plumbing

cc licensed ( BY NC SA ) flickr photo by Craik Sustainable Living Project

I have often remarked to colleagues and even students that if it were up to me there would be only three grades: Meets Expectations, Above Expectations, and Below Expectations. That perspective seems like it would be compatible with competency-based assessment. Yet, I must admit that I have always been a bit dubious of competency-based models of education and assessment. This is true despite feeling that traditional letter grades don’t cut it either.

Possible Applications

In looking at the various resources curated by Dr. Bull, my doubts have softened a little – but only a little. I can now see where competency-based assessment could be useful in limited applications. When measuring certain lower-order thinking, specific content understanding, or singularly isolatable skills, I can see competency-based models working quite well, in fact. It seems tailor-made for tasks that are simpler and consequently easier to assess. I could also potentially see value in competency-based assessment in some vocational education settings, hence my using the image of a plumber above.

Lingering Doubts

However, the rationale behind my doubts has also strengthened somewhat. Perhaps it is my deep belief in the ultimate value of a humanities-based, liberal arts education, one that fosters nuanced general knowledge and understanding, that makes me deeply skeptical of competency-based assessment. It seems too focused simply on the what and not at all on the how. Plus, like the critique penned by Alison Wolf for the UK’s Higher Education for Capability project, I do not believe that deep, complex learning can be atomized in a way that fulfills the promise of competency-based assessment.

Plus, it seems far too close to one of many current fads, results-only assessment, the perverse educational application of an idea dreamt up by a couple of Best Buy executives for managing their corporate labor force. I understand that competency-based and results-only assessment are not the same, but they seem too similar for my liking.

There seems to be an almost blind faith in the power of the result or product in these models. This strikes me as potentially dangerous in a learning context. Too often I hear things in education like, “You don’t have to re-invent the wheel.” However, sometimes the task is actually about inventing the wheel, assisting students in making a discovery in a way that is novel to them. As such, the process involved becomes rather important, arguably more important than the final product, in fact. There doesn’t seem to be as much room for process in competency-based assessment. I am not even sure it matters in this model.

Examining Examples

Also, competency-based models seem far too simplistic and potentially threaten one of those load-bearing walls of education. Anecdotally, talking with some teachers who have switched to competency-based report cards, for example, gave me the impression that it was a disaster. One of the main comments explained how parents had no idea how to read the report card or what the collected data actually meant. Plus, it provided little nuance about how well their student accomplished anything.

Looking at one of the examples in the resources, I was left feeling that in practice it is not altogether that different from traditional letter grades. In the document intended to explain the competency-based system to parents in the Rochester School District (New Hampshire), there are still five divisions of competency: Advanced Competent, Beyond Competent, Competent, Not Yet Competent, and Insufficient Work Submitted. I don’t know about anyone else, but this looks awfully similar to A, B, C, D, and F. In fact, the first three abbreviations track as A, B, and C, and the NYC or IWS grades will switch to an F if there is no change by the course’s end.

Common Grading Practices

The only thing I can see that is potentially different is the emphasis on common assessment practices. However, that can be done without adopting a competency-based model. A significant value is placed on rubrics, which makes sense in trying to achieve commonality. However, not every task or assignment needs to be a common one. Down that path lies one-size-fits-all madness that is not at all about learning and all about management. What’s more, I struggle to see how competency-based assessment personalizes anything about learning. I would submit that it depersonalizes feedback, like any rubric.

While I think there are definite applications for rubrics, those applications are also limited. What educators commonly refer to as rubrics were developed to manage large quantities of norm-referenced tests. In that context, rubrics make a lot of sense, but they do not necessarily make sense in every other context. I often wonder if the current obsession with rubrics in education will turn out to be this era’s grade curve, which also makes more sense with large quantities of data points.

Moreover, I would say that common assessment practices are solutions to administrative problems more than to teaching and learning problems. They have the appearance of greater validity, although that is not at all a given. They also seem more systematic and fair, although they could be systematically flawed. Lastly, they most likely reduce complaints and position any bitter individual complaint more precariously against the power of the institution.

Preliminary Conclusions

My understanding of competency-based assessment remains limited. I have only scratched the surface of the concept in examining the resources. My brief brushes with it where I work have been less than convincing, and the anecdotal evidence I have gathered has only served to strengthen my doubts. Still, I can see where, in some limited contexts, competency-based models could be serviceable and even beneficial. I am not sure that this model serves me as an English teacher all that well. In some ways, it seems similar to badges, although I think I like the binary aspect of badges more. That is definitely not a ringing endorsement.

Experimenting with the Model

One area that requires attention to detail and basic competency from my ninth grade English students involves submitting work in the proper format. My colleagues and I are expected to prepare students to submit all printed academic work in accordance with MLA guidelines for all courses. Early in the year it is a challenge to simply get them to follow models and directions.

This year, in an effort to avoid the kind of leniency that produced students who struggled with simple formatting issues well into the second semester, I drew a harder line and refused to accept any written work until it was properly formatted. Even though I didn’t penalize anyone, it was a cause of great consternation for many students. Most simply did not bother to take the time to follow directions. However, some genuinely cannot distinguish where they have addressed most but not all aspects properly.

This seems like a good context to apply competency-based assessment. The objective is fairly simple and can be itemized. It is the checklist aspect of the assessment that made it most appealing to try. Most students whose work did not correspond with the guidelines struggled to understand exactly why. Thus, I have devised a rubric-like checklist that itemizes all the basic elements of MLA format, without getting into the more complicated aspects of citation, which we will address much later.

Image: Basic MLA Format Rubric

Using a tool like this can help provide a tighter focus. It should reduce some of the anxiety associated with addressing errors and allow me to refine the feedback I provide students. Any unchecked areas comprise an itemized list of errors that a student must address before resubmitting the work for additional, more substantive feedback or a grade. I am not sure that it is more personalized or whether it is motivating, but it may prove challenging for many students.
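For illustration only, here is a minimal sketch of how a checklist like this could be represented and used to generate that itemized list of errors. The item names are my own assumptions drawn from common MLA formatting requirements, not the actual rubric pictured above.

```python
# Illustrative sketch only: a hypothetical encoding of the checklist idea
# described above, not the actual rubric pictured. Item names are assumptions
# based on common MLA formatting requirements.

MLA_CHECKLIST = [
    "12-point standard typeface",
    "1-inch margins on all sides",
    "Double-spaced throughout",
    "Heading with name, teacher, course, and date in the upper left of page one",
    "Running header with last name and page number in the upper right",
    "Title centered, with no underlining or bolding",
]

def unchecked_items(checked):
    """Return the itemized list of errors to fix before resubmitting.

    `checked` is the set of checklist items the submission satisfies.
    """
    return [item for item in MLA_CHECKLIST if item not in checked]

# Example: a paper that only got the margins and spacing right.
for error in unchecked_items({"1-inch margins on all sides", "Double-spaced throughout"}):
    print("- " + error)
```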

On some level, this is almost like a pre-assessment, since it is a way to screen written assignments prior to actually reading them and providing genuinely substantive feedback. I am hoping that it will speed the process of students assimilating MLA format without it becoming a distraction or a more significant issue than it deserves to be.

Analyzing a Syllabus & Considering Letter Grades

Photo: 123/365 paper grade

cc licensed ( BY NC SA ) flickr photo by Jennifer Tomaloff

As part of Dr. Bernard Bull’s new MOOC Beyond Letter Grades, an early task, “The Affordances and Limitations of Letter Grades,” asks for an examination of a syllabus with an eye on the pros and cons of how the letter grade is calculated for the course.

I wanted to examine someone else’s syllabus first, both as a model and an exercise, mainly because I am never completely happy with any syllabus that I create. Plus, I tend not to like the templates for creating syllabi all that much either. Consequently, I found a syllabus for a class I would potentially be interested in taking, a course on systemic functional linguistics (SFL).

Course

The syllabus I examined is Linguistics 481 – Functional Linguistics from Simon Fraser University in British Columbia, Canada. It was taught by Dr. Maite Taboda in 2003.

How Grade is Calculated

Adding

The grade for this course breaks down the following way: there are essentially two short assignments, two exams (a midterm and a final), and a final paper, while participation and a presentation are also calculated as part of the overall grade.

Subtracting

Additionally, there is a 20% penalty mentioned for late work, which may significantly impact the grade on the two short assignments or the final paper. Attendance is also a critical feature in the grade calculation. In fact, failure to appear for one of the exams without a verifiably excused absence will result in a zero. There are no make-up exam options.

Best Potential Grade Without Knowing Much

One thing definitely going for this course is the explicit mention of a writing-to-learn approach, which bodes well for a newcomer’s opportunity to earn a high grade without much prior knowledge. Honoring the notion of writing-to-learn, at least philosophically, suggests that cultivating greater understanding over the run of the course informs the work of the course.

Assessing the grade weighting suggests that one would have to gain pretty substantial course content knowledge to successfully accomplish the tasks assigned. The bulk of the course grade is driven mainly by content. Only 20% of the course is rooted in categories like participation and class presentation, which are potentially far more intangible elements.

Worst Potential Grade While Knowing Much

Similarly, since 20% of the grade is calculated based on participation and class presentation, that leaves 80% split amongst the primary tasks of the course.

There is an interesting discrepancy between the weight of the midterm exam (20%) and the final exam (15%). Why the midterm is worth more than the final is unclear; it may be an attempt to counterbalance the weight of the final paper a little. Since the final paper (30%) carries the greatest weight, the back-end calculations at the end of the term (final paper plus final exam) could amount to as much as 45% of the overall course grade, leaving 55% gathered along the way.

Given a slight misstep in the first short assignment, someone who is quiet and introspective but learned a lot, perhaps struggling in the early going, could be looking at a potential loss of 25-30% of an overall grade. That would be the maximum hit to take; a 10-15% loss is more likely. Throw in a late assignment or two and things could get quite ugly pretty quickly.
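To make the arithmetic concrete, here is a small sketch using the weights as I read them in the syllabus (short assignments 15%, midterm 20%, final exam 15%, final paper 30%, participation and presentation 20%); the scenario scores are hypothetical numbers of my own, not anything from the course.

```python
# Weights as read from the syllabus; the scores below are hypothetical,
# sketching the "quiet but learned a lot" scenario described above.

WEIGHTS = {
    "short_assignments": 0.15,
    "midterm": 0.20,
    "final_exam": 0.15,
    "final_paper": 0.30,
    "participation_presentation": 0.20,
}

def course_grade(scores):
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

# Share of the grade decided at the end of the term: final paper plus final exam.
print(round(WEIGHTS["final_paper"] + WEIGHTS["final_exam"], 2))  # 0.45

# Weak early short assignments and thin participation, strong back-end work.
scores = {
    "short_assignments": 70,
    "midterm": 85,
    "final_exam": 92,
    "final_paper": 95,
    "participation_presentation": 60,
}
print(round(course_grade(scores), 2))  # 81.8 -- the strong finish only partly offsets early losses
```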

Takeaways

Examining a syllabus this way makes me seriously reconsider how I calculate grades, which is something I have been mulling over for quite some time. One tension that is in sharper relief for me relates to transparency: anyone making an effort to make grade calculation as transparent as possible may paradoxically begin to cloud the issue in the process.

Attaching a series of weights and values seems to artificially lock the grade into a kind of mathematical inevitability, even if those numbers are not an accurate representation of the student’s learning. Trying to quantify things that are by nature slippery to nail down almost guarantees a certain degree of distortion. Yet itemizing and categorizing the tasks in the overall calculation does not, by itself, necessarily make clear how the grade will be calculated.

Using categories generates a series of potential questions. For example, with the short assignments (15%) in this course: Are both assignments of equal value? Will the grades be averaged? Is any consideration given to growth between the first attempt and the second?
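To illustrate why those questions matter, a quick hypothetical: with two made-up scores on the short assignments, each plausible combining policy yields a different result for that 15% category. The syllabus does not say which policy applies; these are assumptions for the sake of the example.

```python
# Hypothetical scores on the two short assignments; the syllabus does not say
# how they combine, so each policy below is an assumption.
first, second = 70, 90

equal_average = (first + second) / 2           # 80.0: both attempts count equally
growth_weighted = 0.25 * first + 0.75 * second  # 85.0: the later attempt counts three times as much
most_recent_only = second                       # 90: growth is fully credited

print(equal_average, growth_weighted, most_recent_only)
```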

This exercise reminds me of just how dubious trying to glean a single letter grade for a course can be. A letter grade is a rather crude abstraction that can become even more abstracted in the unpacking.

Some Recent Thoughts on Grading

So as part of a yearlong effort, my high school is trying to examine grading practices. I am not entirely sure where it is all leading, but I thought I would post some of my thoughts from a recent online discussion that our school is having regarding the question, “How do you ensure that students’ grades are an accurate picture of their learning in your class?”

I am not terribly sure I like the question, or at least I think it is a weak question that obscures the path to a much deeper answer, one far more worthy of a stronger, more fundamental question. That being said, I have been giving a lot of thought to the nature of questions lately: how they are formed, how they shape the thinking that follows, how to craft better ones. Nevertheless, here were my initial thoughts.

On a fundamental level, I think that grades are deeply flawed in their ability to provide an accurate picture of learning in a class. They are far too abstract and are unsystematically abstracted from student learning. So from that standpoint, I am not convinced that the systems that are generally in place, both here and elsewhere, can actually accomplish this aim at all. The common grading system is far too laden with competing factors that render it Byzantine at this point.

Yet, the only way that I know to provide an accurate picture of learning through the use of “grades” is by engaging in consistent and rigorous conversations with students about goals and objectives and the means by which those will be assessed, and by always providing an opportunity for the student to remediate those assessments in some way. Without those three elements I would challenge the accuracy and validity of any grade.

There will always be some measure of subjectivity or bias, but the assessor can take measures to limit or control them in an effort to be as objective as possible. Often it is not easy, nor is it nearly as scientific or coldly mathematical as we might like. There is an artificiality to grades that belies the spectrum of understanding or the potential for learning. Yet we, as the institution of education, continue to try and make the best of a bad situation, with highly questionable results.

Interestingly, worrying that it might seem too wonky or inaccessible, I had a colleague read this before I posted it. For some reason, I felt a bit more tentative about declaring some of my deep-felt thoughts about grading. Truth is I hate grading. It is absolutely the worst part of my job as a teacher. Most fascinating is knowing I am not the only one who feels that way.

I have written about grading in the past, some experimentation that I have tried and the results, and I even recently discussed some of my thoughts about all this with my classes. While I am still delaying a lot of grading this year, I am not making as big a deal about it with the students. Amazingly, there have not been as many outcries or much visible frustration yet. Still, I made an effort to reassure all my students that they needed the opportunity to make mistakes in order to learn and that grades tend to get in the way of those efforts. They seemed to get it, on the surface at least.

Quite simply, I wish there were only three grades that expressed something along the lines of “not good enough or does not meet a/the ‘standard’,” “good or meets a/the ‘standard’,” and “beyond good or exceeds a/the ‘standard’.” That is generally how nearly all of us simplify any kind of assessment in life, and I am not just talking about school or teaching. When I think back to every evaluation that I received in a workplace, prior to becoming a teacher, that is about what those assessments amounted to. Sure, they might have used fancy language or adopted some “quality” lingo and a prepackaged form, but what always mattered most were the conversations that I had with the person or people doing the evaluation. And in every case those conversations were driven by the three qualifiers I outlined above.

I wish we could adopt something like this in schools. Instead we continue to insist on fragmenting the simple into increasingly discrete pseudo-measurements, as if it were all so scientific, analytic, and smoothly translatable into numbers. Yet to me that is all about sorting and has nothing to do with assessment, and we have built entire institutional and societal norms on dubious methods of measurement.

The more distant a grade is from the context of the class and the teacher that gave it, the more distorted that grade becomes. Still, so much false value is placed on grades, and they are used as the currency for so many judgments. To me grades, as they typically exist today in schools, are the central properties in an increasingly widening distortion field.