You may not be aware of this, but every year the ACRL Instruction Section Teaching Methods Committee puts out two lists of Selected Resources, one focused on teaching methods and instructional design and the other focused on assessment. These lists feature noteworthy articles and other materials published in the previous year. It’s a resource that doesn’t get as much use or attention as it should, so I’ve decided to assign myself the homework of making my way through each item on last year’s lists and writing about it here.
Today we’re all about “A Short History of CRAAP” by Mike Caulfield.
Disclosure: I am currently a member of the ACRL Instruction Section Teaching Methods Committee, which selects and evaluates materials for the Selected Resources lists. I played a role in the selection process and reviewed several of the items that ended up on the final list as part of that process.
Let me first state that I believe Caulfield’s piece is more than deserving of its place on the Selected Resources list. However, my personal appetite for snarky takedowns of common teaching methods is somewhat limited, especially when no new option is offered as part of the takedown. We all know the CRAAP test(1) sucks, but what is the alternative? Though some of his blog posts do seem to be more practical in nature, Caulfield doesn’t suggest any ideas here. Instead, his message is much more about exploring how the CRAAP test came to be and why it doesn’t work.
In tracing the history of the CRAAP test, Caulfield shows that it was originally created in the pre-internet era as a set of criteria librarians used to decide whether to purchase a resource, not as a teaching tool. I…will admit that I did not know this, though I kind of suspected it. There is something very “librarianish” about the CRAAP test and it has always seemed to me that, as a teaching tool, it fits in better with the more old-fashioned approach to information literacy, the goal of which seemed to be to turn students into junior librarians.
But I don’t think the origins of the CRAAP test should really be a factor in disqualifying it as a teaching tool, as Caulfield seems to imply. Who cares where it came from? The question is whether the criteria it lists are important for evaluating a piece of information or not.
I would argue that they are, albeit in a simplistic way. I would never argue that they should be treated as the beginning and end of someone’s evaluation of a piece of information but they make a good starting point, especially for those who are new to the research process.
So, okay. Here’s where I admit that I use the CRAAP test in my teaching. I use it in two ways. The first is for one-shot sessions because trying to teach students anything meaningful about evaluating information in the space of a single session is an exercise in futility and I would rather give them something than nothing. It fits well on a handout, is what I’m saying.
The other way I use it is in my own courses, where I present students with a list of criteria for evaluating sources (which includes the elements on the CRAAP test and others that I’ve found in other teaching tools and research studies). I ask them to look at the list and pick which they think are the most important criteria for evaluating sources when it comes to completing a research assignment. Then I ask them to think about situations where those criteria are not important. So, for example, currency is important when you’re researching something related to science or technology or health but not so important when you’re researching history or literature.
What I don’t teach them is to use this list as a checklist in which they have to find sources that meet every criterion or even just the ones that they’ve identified as important. Because I’m not interested in teaching them to fill out a checklist. I’m interested in getting them to think about how they evaluate information and giving them some vocabulary for articulating that evaluation process. So for me the CRAAP test isn’t a checklist, it’s a vocabulary list.
But even then, as with all teaching tools, it represents more of a convenience to the instructor than the way in which the evaluation process actually happens(2). We see this in research that shows that while students claim that factors like currency are among the most important when evaluating information, their behavior suggests otherwise. In fact, ease of access is the factor that they give the most weight to when deciding whether to use a source or not.(3)
Caulfield(4) also cites research from as far back as the mid-1990s showing that librarians have known all along that the CRAAP test, when used as a checklist, doesn’t work. A study he cites by Scholz-Crane shows that students generally only use one or two criteria when evaluating a source (though what those criteria are varies from student to student) rather than taking the holistic approach intended by the CRAAP test.
Maybe I’m naïve, but I’m a little surprised that the CRAAP test was meant to be holistic this whole time. Are we really supposed to only use sources that meet all five of the criteria on that list?
Either way, I can’t disagree that students are generally very bad at evaluating sources and even worse at articulating their evaluation of those sources, even when taking an approach like the one I try to show them. I recently graded some annotated bibliographies from my freshman seminar, where the topics chosen are supposed to be about college life today. Every single student cited at least one source that was at least twenty years old. Every. Single. One. And this was after spending an entire class period on the activity I described above. An activity in which they actually all did very well. Or so I thought.
Granted, I didn’t require that the sources be new ones because I want to allow students room to cite older sources if they can properly justify the use of those older sources. But in the rare instance where students acknowledged a source’s age, the best that I could hope for was that they would say something like, “This source was published in 1995 but is still useful because it’s relevant to my topic and comes from the library.” In short, FML.
So I’ve spent more time here talking about my own thoughts and teaching than talking about Caulfield’s work, which, despite my disagreements, I think is very deserving of a place on the Selected Resources list, in no small part because there are definitely still librarians out there who, like me, did not know any of the history he lays out despite using the CRAAP test as part of our teaching. And also because there are librarians who still teach information literacy as if the goal is to help students become junior librarians, and they probably need to hear what Caulfield has to say.
But ultimately I think the real problem is not the CRAAP test itself but the models that we use to teach information literacy, including one-shots and even short one-credit courses like mine, which force us to use convenient but ineffective teaching tools because there is no room to show students that information literacy is more than a basic skill.
(1) There are a lot of checklists like the CRAAP test but with different acronyms. For the sake of space and convenience, I’m going to be using “CRAAP test” as a catch-all term for all of these checklists throughout the post.
(2) Paraphrased from Robert J. Connors, “The Rise and Fall of the Modes of Discourse,” College Composition and Communication 32 no. 4 (1981): 444-55.
(3) Kyung-Sun Kim and Sei-Ching Joanna Sin, “Selecting Quality Sources: Bridging the Gap Between the Perception and Use of Information Sources,” Journal of Information Science 37 no. 2 (2011): 178-88.
(4) Remember Alice? It’s a song about Alice.