I have often remarked to colleagues, and even students, that if it were up to me there would be only three grades: Meets Expectations, Above Expectations, and Below Expectations. That perspective seems like it would be compatible with competency-based assessment. Yet I must admit that I have always been a bit dubious about competency-based models of education and assessment, even though I feel that traditional letter grades don’t cut it either.
In looking at the various resources curated by Dr. Bull, my doubts have softened a little – but only a little. I can now see where competency-based assessment could be useful in limited applications. When measuring certain lower-order thinking, specific content understanding, or singularly isolatable skills, competency-based models could work quite well, in fact. The approach seems tailor-made for tasks that are simpler and consequently easier to assess. I could also see potential value in competency-based assessment in some vocational education settings, hence my using the image of a plumber above.
However, the rationale behind my doubts has also strengthened somewhat. Perhaps it is my deep belief in the ultimate value of a humanities-based, liberal arts education, one that fosters nuanced general knowledge and understanding, that makes me deeply skeptical of competency-based assessment. It seems focused entirely on the what and not at all on the how. Moreover, like Alison Wolf in the critique she penned for the UK’s Higher Education for Capability project, I do not believe that deep, complex learning can be atomized in a way that fulfills the promise of competency-based assessment.
It also seems far too close to one of many current fads, results-only assessment, the perverse educational application of an idea dreamt up by a couple of Best Buy executives for managing their corporate labor force. I understand that competency-based and results-only assessment are not the same, but they seem too similar for my liking.
There seems to be an almost blind faith in the power of the result or product in these models, which strikes me as potentially dangerous in a learning context. Too often I hear things in education like, “You don’t have to re-invent the wheel.” However, sometimes the task is actually about inventing the wheel, assisting students in making a discovery in a way that may be very new to them. As such, the process involved becomes rather important, arguably more important than the final product, in fact. There doesn’t seem to be much room for process in competency-based assessment; I am not even sure whether it matters in this model.
Also, competency-based models seem entirely too simplistic and potentially threaten one of the load-bearing walls of education. Anecdotally, talking with some teachers who have switched to competency-based report cards, for example, gave me the impression that the change was a disaster. One of the main complaints was that parents had no idea how to read the report card or what the collected data actually meant. Plus, it provided little nuance about how well their students accomplished anything.
Looking at one of the examples in the resources, I was left feeling that in practice it is not altogether different from traditional letter grades. In the document intended to explain the competency-based system to parents in the Rochester School District (New Hampshire), there are still five divisions of competency: Advanced Competent, Beyond Competent, Competent, Not Yet Competent, and Insufficient Work Submitted. I don’t know about anyone else, but this looks awfully similar to A, B, C, D, and F. In fact, the abbreviations track as A, B, and C, and the NYC or IWS grades switch to an F if there is no change by the course’s end.
Common Grading Practices
The only thing I can see that is potentially different is the emphasis on common assessment practices. However, that can be achieved without adopting a competency-based model. A significant value is placed on rubrics, which makes sense in trying to achieve commonality. However, not every task or assignment needs to be a common one. Down that path lies one-size-fits-all madness that is not at all about learning and all about management. What’s more, I struggle to see how competency-based assessment personalizes anything about learning. I would submit that it depersonalizes feedback, like any rubric.
While I think there are definite applications for rubrics, those applications are also limited. What educators commonly refer to as rubrics were developed to manage large quantities of norm-referenced tests. In that context, rubrics make a lot of sense, but they do not necessarily make sense in every other context. I often wonder whether the current obsession with rubrics in education will turn out to be this era’s grade curve, which also makes more sense with large quantities of data points.
Moreover, I would say that common assessment practices are solutions to administrative problems rather than teaching and learning problems. They have the appearance of greater validity, although that is not at all a given. They also seem more systematic and fair, although they could be systematically flawed. Lastly, they most likely reduce complaints and position any bitter individual complaint more precariously against the power of the institution.
My understanding of competency-based assessment remains limited; I have only scratched the surface of the concept in examining the resources. My brief brushes with it where I work have been less than convincing, and the anecdotal evidence I have gathered has only strengthened my doubts. Still, I can see where, in some limited contexts, competency-based models could be serviceable and even beneficial. I am not sure that this model serves me as an English teacher all that well. In some ways, it seems similar to badges, although I think I like the binary aspect of badges more. That is definitely not a ringing endorsement.
Experimenting with the Model
One area that requires attention to detail and basic competency from my ninth-grade English students is submitting work in the proper format. My colleagues and I are expected to prepare students to submit all printed academic work in accordance with MLA guidelines for all courses. Early in the year, it is a challenge simply to get them to follow models and directions.
This year, in an effort to avoid the kind of leniency that produced students who struggled with simple formatting issues well into the second semester, I drew a harder line and refused to accept any written work until it was properly formatted. Even though I didn’t penalize anyone, this caused great consternation for many students. Most simply did not bother to take the time to follow directions. However, some genuinely cannot distinguish where they have addressed most but not all aspects properly.
This seems like a good context in which to apply competency-based assessment. The objective is fairly simple and can be itemized, and it is the checklist aspect of the assessment that made it most appealing to try. Most students whose work did not correspond with the guidelines struggled to understand exactly why. Thus, I have devised a rubric-like checklist that itemizes all the basic elements of MLA format, without getting into the more complicated aspects of citation, which we will address much later.
Using a tool like this can help provide a tighter focus. It should reduce some of the anxiety associated with addressing errors and allow me to refine the feedback I provide students. Any unchecked areas comprise an itemized list of errors that a student must address before resubmitting the work for more substantive feedback or a grade. I am not sure whether it is more personalized or motivating, but it may prove challenging for many students.
On some level, this is almost like a pre-assessment, since it is a way to screen written assignments prior to actually reading them and providing genuinely substantive feedback. I hope that it will speed the process of students assimilating MLA format without its becoming a distraction or a more significant issue than it deserves to be.