This week, the New York Times published an article called “Two States. Eight Textbooks. Two American Stories,” exploring how textbooks in Texas tell the story of United States history versus how textbooks in California do. If you know anything about the politics of either state, there are some predictable differences.
As the article mentions, this isn’t a new thing. In fact, until very recently, I used the controversy surrounding a fourth grade textbook in Virginia as a case study in my information literacy courses. In that case, the textbook in question (called Our Virginia) included a number of egregious historical errors, including a claim that slaves fought for the Confederacy in the Civil War, something that’s not supported by historical evidence. When asked about the errors, the textbook author, who was not a historian, said that she had based her writing on information she found on the internet: information authored by a group called the Sons of Confederate Veterans.
This was 2010, so the main reaction at the time was basically everyone laughing at this author for basing the research for her book on something she found on the internet. No one seemed to consider the possibility that the issue might be more complicated than that.
As an information literacy case study, I’ve gotten a lot of mileage out of this example by asking students who they feel is most to blame for what happened: the textbook author, the publisher, the Board of Education that approved the book, or the Sons of Confederate Veterans for promoting a view of history that’s not supported by evidence. Their answers are revealing. Of course, many blame the author herself for not doing proper research. Others blame the publisher for not fact-checking thoroughly enough. Still others feel that the Board of Education should have done more to vet the book before allowing it to be taught in classrooms.
Almost none blame the Sons of Confederate Veterans.
In my last post, I talked about how, like most instruction librarians, I used to rely heavily on tools like the CRAAP test when teaching students how to evaluate information. In some cases, particularly one-shot sessions where there may not be time to teach students the nuances of the process, I still do. If nothing else, these tools are a handy way to help students learn that they should be thinking about the quality of the sources they use. And they fit well on a handout.
The CRAAP test and similar tools were (arguably) a good fit for Standards-based teaching, where the evaluation of information was an explicit learning outcome. But even before we traded the Standards for the Framework, a lot of librarians were dissatisfied with these tools because they oversimplified the evaluation process. The CRAAP test in particular seemed to apply mostly to internet sources, giving students the false impression that these were the only types of sources that needed to be evaluated. Plus, such tools didn’t stop students from simply choosing whatever came up first in their search results, regardless of whether it was clearly biased or outdated.
These criticisms carry particular weight when you take into account the contextual nature of research, as the Framework does. Currency, for example, is important for some research topics, like those in technology or science, but less so for others, like history and literature. And that’s just within academic and scholarly research. Information is also likely to be evaluated differently in professional, creative, and scientific contexts.
While I don’t have much flexibility with one-shot sessions, I wanted to start thinking about how to adapt my usual lessons on evaluating information to also get students to think about the contextual nature of research. It turns out, they didn’t need much tweaking.
You may not be aware of this, but every year the ACRL Instruction Section Teaching Methods Committee puts out two lists of Selected Resources, one focused on teaching methods and instructional design and the other focused on assessment. These lists feature articles and other materials published in the previous year that are worthy of note. It’s a resource that doesn’t get as much use or attention as it should, so I’ve decided to assign myself the homework of making my way through each item on last year’s lists and writing about it here.
Disclosure: I am currently a member of the ACRL Instruction Section Teaching Methods Committee, which selects and evaluates materials for the Selected Resources lists. I played a role in the selection process and reviewed several of the items that ended up on the final list as part of that process.
When I give students a research project, the first thing I ask them to do is propose a set of three topics. For my freshman seminar students, the three topics should be related to a question they still have about college life. For my information literacy students, the focus is on their role as information creators. I ask for three partly to increase the chances that one of their ideas will be researchable, and partly to try to force some creativity. Coming up with one idea might be easy. Coming up with three? That takes a little more thinking.
Unfortunately, it doesn’t usually work. About 80–90% of the topics students propose are standard academic research topics, ones they probably think will get them a good grade rather than things they are actually interested in pursuing. Not quite on the level of topics that get banned because professors are tired of reading about them, but in the same vein.
A course instructor I once worked with had the same issue. She wanted her students to write about topics that interested them or that were fun for them. I tried to model this in the session I taught for her students by using an example research topic related to Doctor Who. But most of them were writing about things like the legalization of marijuana, video game violence, and whether college athletes should get paid. In other words, the kind of essay topics that typically show up on state tests.
Now, it could be that the students had some genuine interest in these topics, but it was also obvious that the topics were not fun for them, despite the fact that their professor and I both encouraged them to pick fun ideas.
Early in the information literacy course I teach each semester, I introduce students to a couple of common myths about research, things students tend to believe because of their experience with academic research. These include “research is about finding the right answer” and “citation sucks” (which I tell them isn’t really a myth because, well, citation does suck).
Now that I’m spending some time thinking about the role of research in creative writing, I’m finding that there’s a whole other set of myths and beliefs that keep cropping up, ones I hadn’t thought about or that don’t apply to the type of research I usually teach.