Assessment and the contextual nature of research

Image by Wokandapix from Pixabay

So last month when I did a webinar for the GLA Carterette Series on some of my ideas for incorporating the contextual nature of research into information literacy instruction, there were a lot of great questions at the end about assessment. In answering them, I realized that this was something of a hole in my discussion of this topic and I wanted to see if I could address some of it here.

First, it might help to know why assessment is such a blind spot for me. Basically, the culture around assessment in my current institution is a lot different from what I think the norm is for most libraries. I experienced something closer to that norm at my previous institution, where we were asked to constantly assess student learning and some part of the library’s value (not to mention our value as a reference and instruction department within the library) was directly tied to our program-level learning outcomes and how well our students met those outcomes. All of this was, in turn, very closely tied to questions of student retention and the role the library played in the institution’s retention efforts.

Where I am now, there is certainly interest in making sure that what we teach contributes toward student learning and student retention. And there are conversations about finding a way to assess our teaching in order to speak to our value both in the library and on campus. But because instruction responsibilities here are so fragmented, any assessment effort on this level would require buy-in across several departments in the library. As you can imagine, there would be some difficulty there. For now, everyone just kind of does their own thing. That’s been a big part of what’s allowed me to take more creative approaches to my teaching, which is an aspect of my job that I’m very grateful for.

But these more creative approaches aren’t exactly useful if students don’t learn anything as a result. Hence: it’s time to talk about assessment.

I’ve mentioned before that part of the reason the ACRL Standards focused on basic research skills is that those are the things we can assess. It’s much easier to assess whether a student can successfully identify a scholarly source in a library database than it is to assess a change in their way of thinking. How do you measure something like that?

Of course, this is a question we’ve all been struggling with to one degree or another since the advent of the ACRL Framework, which uses threshold concepts instead of learning outcomes. Threshold concepts are literally all about changing someone’s way of thinking.

Teaching about the contextual nature of research is in a large sense about changing the way students think about research. It’s asking them to recognize that the conventions and methods of research are going to be different depending on the context in which research is taking place. Not just disciplinary contexts, but contexts outside of academia as well.

No matter what context of research you’re working with, there are going to be skills involved. So one idea for assessing the contextual nature of research is to identify the skills associated with the context(s) you’re teaching and then assess students’ ability not only to perform those skills but also to recognize the appropriate context for them. For example, if a student is searching for or citing a peer-reviewed source when you’ve asked them to perform the type of research associated with a non-scholarly or non-academic context, they’re showing that they have good research skills but that they’re not applying them to the correct context.

This is something that can be captured in a number of ways. You can observe a student’s information behavior to judge whether it’s appropriate to a given task. You can have the student create a research product and judge how well they show awareness of the conventions of a particular type of research. You can create a video that explains the conventions of a particular research context and then quiz students on their understanding of what they watched.

Of course, being able to judge whether students are using skills and following conventions appropriate to a particular context requires establishing what those appropriate skills and conventions even are. Not to mention establishing what the contexts of research might be.

In my own work, I’ve suggested a few very broad categories or “genres” of research, including academic, scholarly, personal, professional, scientific, and creative research. I even outlined some of the characteristics of these genres in my article introducing these ideas. But this outline was meant to illustrate a point rather than act as a guide. Clearly, more work needs to be done here.

But that doesn’t mean you can’t teach the contextual nature of research until that work is done.

In my own classes, I have quizzes that students take after reading or listening to a lecture that I’ve written on a given topic (it’s an online class). These lectures address the contextual nature of research in mostly general terms and I test students’ understanding of this concept by including questions like the following on the associated quiz:

What type of research are scholarly, peer-reviewed articles most appropriate for?

  • Academic/scholarly research
  • Personal research
  • Professional research
  • Creative research
  • All research, no matter the context

It’s a simple question that tells me a lot about how much students understand about this concept, even without a lot of specifics about the conventions of each type of research. Students who get it right have shown me what they’ve learned. Students who get it wrong show me that there’s still a ways to go before they cross that threshold of understanding. A surprising number argue that peer-reviewed sources are appropriate for all types of research because their other professors have always told them they are the “gold standard” of credibility, even after I’ve explained all the reasons this isn’t actually the case.

I also served recently on a committee whose charge was to create and implement a library research award for undergraduate students. As part of that work, the committee had to come up with a way to evaluate the submissions, which could come from any discipline studied on campus. We wanted to make sure the award process was open not just to students who had completed standard research papers but also to those who had done research in connection with more creative projects, and we needed a rubric to reflect that.

We ended up adapting a rubric (with permission) from one that had been used by several other institutions. But where the original rubric mentioned skills appropriate to a particular discipline, we substituted the phrase “appropriate to the context.” That might seem like a small change, but not all research takes place within an academic discipline. We also wanted to make sure that students who had conducted their research in more creative contexts knew they were eligible for the award. In both cases, the wording captures the idea that an excellent research project is one in which the student applies skills and conventions appropriate to the context of the research.

So there’s not as much concrete information about assessment here as I would like. Like I said, assessment tends to fly a little under my radar for a variety of reasons, but this is something I’m going to continue to think about and share thoughts on in the future. If anyone else has thoughts, I’d be interested in hearing them as well.

The Annotated Bibliography as an Establishing Shot: Part 2

So I realize there’s a lot of chaos and confusion going on for a lot of people right now. I’m hoping to write a post later this week about how the coronavirus is affecting things for me and my library, but before we get to that, I did promise that I would talk about how the reflection piece of the “establishing shot” annotated bibliography project I wrote about last week went. So this is that.

Like I said before, the purpose of the “establishing shot” annotated bibliography was twofold. First, it helped me understand where the students were with their research skills before they’d received much, if any, instruction from me. Second, completing the annotated bibliography at the start meant that it could then be used as a tool for reflection at the end. Students could look back on it and comment on how they had grown as researchers since the beginning of the course.

Just like with the annotated bibliography, I was super apprehensive about the reflection piece, mostly because a big chunk of the students’ grades would be riding on it and I didn’t want to receive the same kinds of rote responses I had so often seen in the past when I asked students to reflect on their work. I really had no idea what I was going to get.

Friends, I was amazed.

Read More »

The annotated bibliography as an establishing shot: Part 1

A while back, I wrote a post about the article “Documenting and Discovering Learning: Reimagining the Work of the Literacy Narrative” by Julie Lindquist and Bump Halbritter. In this article, Lindquist and Halbritter discuss their use of the narrative essay as an “establishing shot” at the beginning of their composition course and how this helped them get a sense of students’ writing skills before they’d received much writing instruction. They then used the narrative essay as an artifact for students to reflect on at the end of the course.

This article inspired me to wonder what would happen if I used a similar strategy with the annotated bibliography assignment in my information literacy course. What if I put the annotated bibliography at the beginning of the course instead of at the end?

Well, I tried it out for the first time this quarter in my fully online, asynchronous course. This is the first in a two-part post on how things went. Today, I’m going to focus on the annotated bibliography piece. Next time, I’ll talk about the reflection.

Read More »

On NYT’s textbook story and the Our Virginia incident

Image by Olya Adamovich from Pixabay

This week, the New York Times published an article called “Two States. Eight Textbooks. Two American Stories” exploring how textbooks in Texas tell the story of United States history versus how textbooks in California do. If you know anything about the politics in either of those states, there are some predictable differences.

As the article mentions, this isn’t a new thing. In fact, until very recently, I used the controversy surrounding a fourth grade textbook in Virginia as a case study in my information literacy courses. In that case, the textbook in question (which was called Our Virginia) included a number of egregious historical errors, including a claim that slaves fought for the Confederacy in the Civil War, something that’s not supported by historical evidence. When asked about the errors, the textbook author, who was not a historian, said that she based her writing on information she found on the internet. Information authored by a group called the Sons of Confederate Veterans.

This was 2010, so the main reaction at the time was basically everyone laughing at this author for basing the research for her book on something she found on the internet. No one seemed to consider the possibility that the issue might be more complicated than that.

I’ve gotten a lot of mileage out of this example as an information literacy case study by asking students who they feel is most to blame for what happened: the textbook author, the publisher, the Board of Education that approved the book, or the Sons of Confederate Veterans for promoting a view of history that’s not supported by evidence. Their answers are revealing. Of course, a lot blame the author herself for not doing proper research. Others blame the publisher for not fact-checking thoroughly enough. Others feel that the Board of Education should have done more to vet the book before allowing it to be taught in classrooms.

Almost none blame the Sons of Confederate Veterans.

Actually, that’s not true. One student did.

Here’s the story.

Read More »

Off for the holidays

Image by PublicDomainPictures from Pixabay

I’m off for the holidays and won’t be posting any new content, but I thought I’d pin a thing here highlighting some favorite past posts in case you missed them.

Thanks for reading and see you in the new year!

Teaching evaluating sources from a research-as-subject perspective

In my last post, I talked about how, like most instruction librarians, I used to rely heavily on tools like the CRAAP test when teaching students how to evaluate information. In some cases, particularly one-shot sessions where there may not be time to teach students the nuances of this process, I still do. If nothing else, these tools are a handy way to help students learn that they should be thinking about the quality of the sources they are using. And they fit well on a handout.

The CRAAP test and similar tools were (arguably) a good fit for Standards-based teaching, where the evaluation of information was an explicit learning outcome. But even before we traded the Standards for the Framework, a lot of librarians were dissatisfied with these tools because they oversimplified the evaluation process. The CRAAP test in particular seemed to apply mostly to internet sources, giving students the false impression that these were the only types of sources that needed to be evaluated. Plus, these tools didn’t stop students from just choosing whatever came up first in their list of search results, regardless of whether it was clearly biased or too old.

These criticisms carry even more weight when you take into account the contextual nature of research, as the Framework does. Currency, for example, is important for some research topics, like those based on technology or science, but less so for others, like history and literature. And that’s just within academic and scholarly research. There are also likely to be differences in how information is evaluated in professional, creative, and scientific contexts.

While I don’t have much flexibility with one-shot sessions, I wanted to start thinking about how to adapt my usual lessons on evaluating information to also get students to think about the contextual nature of research. It turns out, it didn’t need much tweaking.

Here’s how the lesson works.

Read More »

Selected Resources: A Short History of CRAAP

You may not be aware of this, but every year the ACRL Instruction Section Teaching Methods Committee puts out two lists of Selected Resources, one focused on teaching methods and instructional design and the other focused on assessment. These lists feature articles and other materials published in the previous year that are worthy of note. It’s a resource that doesn’t get as much use or attention as it should, so I’ve decided to assign myself the homework of making my way through each item on last year’s lists and writing about it here.

Today we’re all about “A Short History of CRAAP” by Mike Caulfield.

Disclosure: I am currently a member of the ACRL Instruction Section Teaching Methods Committee, which selects and evaluates materials for the Selected Resources lists. I played a role in the selection process and reviewed several of the items that ended up on the final list as part of that process.

Read More »

Research begins with curiosity

Image by Ronald Plett from Pixabay

When I give students a research project, the first thing I ask them to do is propose a set of three topics. For my freshman seminar students, the three topics should be related to a question they still have about college life. For my information literacy students, the focus is on their role as information creators. One reason I ask for three is to increase the chances that one of their ideas will be researchable. The other is to try to force some creativity. Coming up with one idea might be easy. Coming up with three? That takes a little more thinking.

Unfortunately, it doesn’t usually work. About 80-90% of the topics students propose are standard academic research topics, ones that they probably think will get them a good grade rather than things they are actually interested in pursuing. Not exactly on the level of topics that get banned because professors are tired of reading about them, but in that same vein.

A course instructor I once worked with had the same issue. She wanted her students to write about topics that interested them or that were fun for them. I tried to model this in the session I taught for her students by using an example research topic related to Doctor Who. But most of them were writing about things like the legalization of marijuana, video game violence, and whether college athletes should get paid. In other words, the kinds of essay topics that typically show up on state tests.

Now, it could be that the students had some genuine interest in these topics, but it was also obvious that these topics were not fun for them. This despite the fact that their professor and I both encouraged them to pick fun ideas.

Why the disconnect?

Read More »

Myths about research

Image by Fathromi Ramdlon from Pixabay

Early on in the information literacy course I teach each semester, I introduce students to a couple of common myths about research, things students tend to believe because of their experience with academic research. This includes things like “research is about finding the right answer” and “citation sucks” (which I tell them isn’t really a myth because, well, citation does suck).

Now that I’m spending some time thinking about the role of research in creative writing, I’m finding that there’s a whole other set of myths/beliefs that keep cropping up, ones that I hadn’t thought about or that don’t apply to the type of research I usually teach.

Read More »

The Contextual Nature of “Un-Research”

Image by David Mark from Pixabay

At this point, I’ve written a few things about the contextual nature of research and offered some thoughts on how to bring that idea into information literacy classrooms. I’ve also mentioned that my opportunities for changing my own teaching in the way I’m advocating are somewhat limited at the moment.

Then I realized that some of these ideas actually have connections to something I tried in the past and wrote about in an article that was published in Communications in Information Literacy called “Teaching Information Literacy Through ‘Un-Research.’”

Read More »