How do you deal with the creeping suspicion that clickers and your classroom just don’t have any chemistry?
I am a little troubled by a recent article entitled "Why I'm Giving Up On Clickers", posted on the blog of Prof-Like Substance (PLS), a junior faculty member at a US university. PLS writes about the good, the bad and the NSF, all with wry humour and honest insight; I've followed and appreciated this glimpse into the life of an academic scientist for several years.
What I’ve enjoyed most is PLS’s development from “rookie” to experienced university teacher and his willingness (fearlessness?) to explore and adopt different methods if it will improve student learning. What bothers me about his recent post is the potential for his current negative experience to dissuade him from exploring the many benefits of clickers (or similar tools) in future courses.
On the subject of clickers*, I'll say straight away: if I were in PLS's shoes right now—teaching a 30-student, upper-level course plagued by buggy technology—I'd stop using clickers too. For that particular course. Clickers are a means to an end, a tool used to improve student engagement and facilitate on-task peer discussion (one form is Peer Instruction), all with the ultimate goal of getting students to learn while they're in the classroom. Clickers can be awesome for this [1,2,3,4], but if a tool isn't doing its job and can't be fixed easily, use another tool.
* A “clicker” is a little hand-held device students use to answer questions, usually multiple-choice questions, but the technology is changing rapidly. If you’re new to clickers, check out this awesome handbook.
But say you do want to fix the tool? Helping faculty learn, hone and troubleshoot active-learning techniques was one of the great parts of being a Science Teaching and Learning Fellow at UBC. I don't work with PLS, but if I did, I'd start by trying to identify why his clicker questions aren't working. In my experience, the issues can usually be traced back to either:
- an issue with the timing or delivery of a question in class, or
- an issue with the question itself.
I think PLS brings up four important problems in his article that can be alleviated somewhat if we break them down:
Problem The First: “…the students aren’t engaged by this method. They’ll dutifully do it, but they either get the right or wrong answer and move on.”
My Response: As an STLF, the first thing I always wanted to know was “What was your goal for this question?” (more on goals and writing questions in Problem The Second). In this case, it sounds like PLS wanted his question to stimulate discussion and it didn’t. Peer discussion is probably the most common goal for clickers and in an ideal situation can be used to get*:
- students teaching each other while they still hold, or at least still remember, their novice misconceptions and difficulties.
- students discussing concepts in their own vocabulary.
- you, the teacher, information about what students know (and don’t know) in time for you to react in class.
* The list above, as well as everything else I know about clickers, came from a colleague and peer instruction wizard, Peter Newbury.
I won't lie, it takes skill to execute an effective peer discussion episode in class, but like any skill it gets easier with practice. The way you deliver the question in class is important (e.g., when in class you ask it, how many times students vote, how much instruction you give); a rough sketch of that decision logic follows the list below. Here are some great resources specific to honing your clicker "choreography":
- Ready, Set, React! Two-page, step-by-step flowchart of what to do during class to create an effective peer discussion episode, written by Peter Newbury and Cynthia Heiner while part of the CWSEI at UBC.
- Effective Facilitation of Clickers. Workshop materials by University of Colorado Science Education Initiative (CU-SEI). There is also a video of this workshop.
- Best Practices for Running Peer Instruction with Clickers. Workshop slides by Peter Newbury of University of California, San Diego.
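If it helps to see that choreography laid out as explicit decision logic, here is a minimal sketch in Python. Everything in it is illustrative: the function name, the vote tallies and the 30-70% thresholds are my own stand-ins for the rules of thumb the resources above describe (roughly, a split vote is the sweet spot for peer discussion), not anyone's official algorithm.

```python
def next_step(votes, correct_option):
    """Suggest what to do after the first clicker vote.

    votes: dict mapping each answer option to its vote count
    correct_option: key of the correct answer
    Thresholds are rule-of-thumb values, not gospel; tune them to your class.
    """
    total = sum(votes.values())
    fraction_correct = votes.get(correct_option, 0) / total

    if fraction_correct < 0.30:
        # Most of the class is lost: peer discussion risks students
        # reinforcing each other's misconceptions. Re-teach first.
        return "revisit the concept, then ask again"
    elif fraction_correct <= 0.70:
        # A genuine split: the ideal moment for "convince your neighbour",
        # followed by a second vote.
        return "run peer discussion and re-vote"
    else:
        # Most students already have it (like PLS's 75% questions):
        # briefly address the popular wrong answer and move on.
        return "quick debrief of the common wrong answer, then move on"


# Example: a 30-student class where ~73% choose the right answer (B).
votes = {"A": 3, "B": 22, "C": 4, "D": 1}  # hypothetical tallies
print(next_step(votes, correct_option="B"))
```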
The other thing you need to improve a skill is feedback. Ask someone (e.g., a fellow faculty member, a TA, perhaps someone from your Teaching and Learning Center) to observe your class, paying attention to your behaviour as well as the students'. This protocol has a list of behaviours that may be helpful in characterizing your class.
Problem The Second: “The information I get isn’t as helpful as I had hoped. And this one might be on me, but I find that pretty much every question has about a 75% correct response rate.”
My Response: Again, here I'd wonder: what did you want to achieve with the question?
Clickers can serve many purposes, from provoking thinking before class begins, to practicing exam-style questions, to conducting a quick end-of-class survey. A good question is hard to write, so have a goal in mind before investing your precious time. My suggestion: build most of your questions around learning goals and/or key concepts covered in the class, and try trawling old tests for persistently difficult concepts, using the common incorrect answers as distractors in your question (a GREAT way to reveal student thinking).
I empathize with PLS's disappointment about a 75% correct response rate, especially if he spent a long time crafting the question and expected it to really challenge students. If 75% were correct on the first vote, I wouldn't ask students to "turn to your neighbour and convince them your answer is correct"—that would likely not lead to a useful discussion. But the question data are still informative.
The brilliance of clickers is in making student thinking visible, in hard numbers. Here you know that 1 in 4 students got the answer wrong. As a colleague at UBC put it, “Isn’t knowing that 25% of the students don’t know the answer better than thinking everyone knows the answer because 100% of the 3 people who put up their hand to answer it in a shout-out do?” This is relevant for a class of 30 or a class of 300.
So we know most of PLS’s students got this concept (yay!). Next term, I’d either use this question as a quick concept-check—i.e., change my goal and expectation of student performance—or drop it for a more challenging question meant to stimulate peer discussion.
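To make the "hard numbers" point concrete, here is a tiny, entirely hypothetical example (the same made-up tallies as in the sketch above): breaking a 75%-correct vote down by option shows you exactly where the wrong quarter of the class went, and therefore which distractor, and which misconception, deserves a minute of class time.

```python
# Hypothetical first-vote tallies for a 30-student class; "B" is correct.
votes = {"A": 3, "B": 22, "C": 4, "D": 1}
correct = "B"

total = sum(votes.values())
for option in sorted(votes):
    n = votes[option]
    tag = " <- correct" if option == correct else ""
    print(f"{option}: {n:2d} votes ({100 * n / total:4.1f}%){tag}")

# The wrong answers cluster on A and C, so those two distractors
# point at the student thinking worth addressing next class.
```

The same few lines of counting work whether the class has 30 students or 300, which is exactly why a vote beats eyeballing faces or counting raised hands.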
Here are some resources to help you identify your goals and write effective questions:
- Creating Effective Clicker Questions in Life Sciences. A workshop I put on with Warren Code for biology faculty at UBC.
- Writing Good Peer Instruction Questions. Workshop slides by Peter Newbury of University of California, San Diego.
- Writing Great Clicker Questions. Blog post and workshop by Stephanie Chasteen (a.k.a. sciencegeekgirl). There is also a video of the workshop.
Problem The Third: “On the rare occasion I find the class response to be closer to 50% correct, I already knew they were having trouble with a concept. Using the clickers to merely confirm what I can see on their faces is probably not the best use of technology.”
My Response: I sound like a broken record here, but: what was the goal? A 50% correct response rate can be exciting—the perfect time for peer discussion!—but PLS sounds disappointed. It makes me wonder what his expectations were and how he reacted at the time: did students discuss and re-vote, or just vote individually? Did PLS spend additional class time reviewing this clearly difficult concept?
I wish I could have been there to help him use clickers beyond “merely confirm[ing] what I can see on their faces” to identifying (and celebrating) the parts of his teaching that improved student understanding.
Problem The Fourth: “…They [clickers] are too damn expensive. …only a small fraction of the students have the right model that works with the new software, so probably 60-70% of the class ends up buying them for my class.”
My Response: Unfortunately, here I'm not much help. I got to UBC in January 2012, well after the Faculty of Science officially adopted clickers. For the most part—there are exceptions—the tech side of things runs smoothly and clickers are used in every core course I can think of. I can say that cultural change is slow but does eventually happen: many students in courses I worked with expected clicker questions (or other active learning techniques) and complained when a class was straight lecture. However, if the technology is problematic, frustrating and not widely adopted by your department, as it is for PLS right now, I'd drop it too. There are other tools to poll your students (Poll Everywhere, for example).
I've gone on long enough, though there is still plenty to discuss, like how awesome clicker data can be when reflecting on your teaching. As I said at the beginning, I admire PLS's dedication to teaching and I hope this post might convince him to give clickers—a potentially powerful and sophisticated teaching tool—another chance. But not until the technology gets its act together.
* Just in case you've been wondering this whole time whether Prof-Like Substance struggles with clickers because he's just not that tech-savvy—he live-tweeted his vasectomy. He's tech-savvy.
Papers cited:
1. Why Peer Discussion Improves Student Performance on In-Class Concept Questions. 2009. Michelle Smith, William Wood, Wendy Adams, Carl Wieman, Jenny Knight, Nancy Guild and Tin Tin Su. Science 323(5910): 122-124.
2. Clickers in the Large Classroom: Current Research and Best-Practice Tips. 2007. Jane Caldwell. CBE—Life Sciences Education 6(1): 9-20.
3. Audience Response Systems in higher education courses: A critical review of the literature. 2013. Karly May. International Journal of Instructional Technology and Distance Learning 10(5): 19-34.
4. Listening to student conversations during clicker questions: What you have not heard might surprise you! 2011. Mark James and Shannon Willoughby. American Journal of Physics 79(1): 123-132.