In my last post, I talked about how, like most instruction librarians, I used to rely heavily on tools like the CRAAP test when teaching students to evaluate information. In some cases, particularly one-shot sessions where there may not be time to teach the nuances of the process, I still do. If nothing else, these tools are a handy way to help students learn that they should be thinking about the quality of the sources they use. And they fit well on a handout.
The CRAAP test and similar tools were (arguably) a good fit for Standards-based teaching, where the evaluation of information was an explicit learning outcome. But even before we traded the Standards for the Framework, a lot of librarians were dissatisfied with these tools because they oversimplified the evaluation process. The CRAAP test in particular seemed to apply mostly to internet sources, giving students the false impression that these were the only types of sources that needed to be evaluated. Nor did these tools stop students from just choosing whatever came up first in their list of search results, regardless of whether it was clearly biased or too old.
These criticisms carry particular weight when you take into account the contextual nature of research, as the Framework does. Currency, for example, is important for some research topics, like those based on technology or science, but less so for others, like history and literature. And that’s just within academic and scholarly research. Information is also likely to be evaluated differently in professional, creative, and scientific contexts.
While I don’t have much flexibility with one-shot sessions, I wanted to start thinking about how to adapt my usual lessons on evaluating information to also get students to think about the contextual nature of research. It turns out, it didn’t need much tweaking.
Here’s how the lesson works.
I start by giving students a list of criteria that can be used to evaluate sources. The criteria from the CRAAP test are in there, as are other criteria that I’d read about in studies of students’ research behavior, like one by Kim and Sin.(1) The list looks like this:
- Currency: How up to date is the source?
- Relevance: How closely does the source match your topic?
- Authority: What is the author’s background?
- Accuracy: How well-researched is the source?
- Purpose: Why was the source written?
- Ease of access: How easy is it to access the full text of the source?
- Format: What type of source is it? (e.g. book, scholarly article, website, etc.)
- Comprehensiveness: Does the source include all of the needed information?
- Cost of use: Does the source cost money to use?
- Ease of use: Is the source easy to understand/use?
- Ranking: How high or low in the search results does the source appear?
- Scholarly: Is the source scholarly in nature?
- Agreement: How well does the source support what you know/believe about the topic?
- Familiarity: How familiar is the source?
- Other: Please specify
The reason I use a list rather than have students create their own is that having a list, even if it’s not perfect or complete, gives them the language to convey something that they’ve probably never stopped to think about before. It’s also in line with Larry Michaelsen’s Team-Based Learning school of thought, where activities in class should be based on the 4S’s: significant problem, same problem, same choice, simultaneous reporting.(2)
From the list, I ask students to choose the five criteria they feel are most important for evaluating information for a research assignment. They do this first on their own, and then I ask them to get into groups and negotiate with each other to come up with a shared “Top 5” list, ranked in order of importance. Once they’ve done that, I ask the first group to share the top item on their list, the second group to share the top item on theirs, and so on.(3)
If you’ve never done an activity using the 4S structure before, this is where the magic really happens, especially when all of the groups have different ideas about what should be at the top of the list. It starts a really good conversation. Sometimes, though, the groups all say pretty much the same thing. When this happens, I ask them what the second thing on their list is and why they felt it was important but less important than the first thing. Either way, the discussion challenges them to think about what’s important for evaluating information for a research assignment.
How does this get at the contextual nature of research? That’s the next step.
After each group explains its choice for the most important criterion, I ask them to think about a situation in which that criterion is not important.
The currency one that I mentioned earlier is one that students come to relatively easily. But what about authority, which is another one that comes up relatively often? For this, students might talk about Wikipedia and how sometimes they look up information on Wikipedia for background information on a topic they’re interested in but not necessarily researching.(4) What about scholarliness? That only matters if your professor tells you that you need to find scholarly information. What about relevance? Okay, that one always matters but how you interpret relevance may be different depending on the situation.
Interestingly, when I ask students which criteria aren’t that important for evaluating sources, they usually point to “ease of access.” But Kim and Sin’s research indicates that ease of access is actually among the most important criteria students use when searching for information. When is ease of access not important? When you don’t have a fast-approaching deadline.
So I get why tools like the CRAAP test get a bad rap but I think there are ways of using them that can still get at some of the nuances we’re now allowed to talk about thanks to the Framework. Using this particular activity to talk not just about evaluating information but also the contextual nature of research with my students has opened up a lot of really interesting conversations in my own classroom.
(1) Kyung-Sun Kim and Sei-Ching Joanna Sin, “Selecting Quality Sources: Bridging the Gap Between the Perception and Use of Information Sources,” Journal of Information Science 37, no. 2 (2011): 178–88.
(2) You can read more about these on a website I created for an old LOEX presentation and forgot about until it showed up in a list of search results on Google when I was trying to look up an explanation of the 4S structure while writing this blog post—the internet is fun!: http://infolitfinetune.weebly.com/what-are-the-4ss.html
(3) This is not quite simultaneous reporting but it’s close enough that it seems to achieve the same results, in my experience, since each group will have already committed to their answer.
(4) Students’ assumption that Wikipedia is not authoritative might be up for debate. I do tell them that the case for/against Wikipedia is more complicated than what they’ve probably been told by their high school teachers but when I tell them this they usually look at me like I’m trying to lead them into a trap.