
Bring Them or Ban Them? Laptops (and Mobile Devices) in the Classroom

In the list of perennial ‘controversies’ at the intersection of teaching and technology, the lowly laptop computer has always played something of an outsized role. I’m old enough to remember a time when the laptop’s extreme portability was breathlessly heralded as something that would revolutionize how and where learning would take place. (“It only weighs eight pounds; ten if you include the charger! Now students can read, conduct research, or write papers anywhere and everywhere! The era of ubiquitous learning has arrived!”) I also remember some of the dire predictions that were lobbed back in response. (“Students will be endlessly distracted! They will use their computers as intellectual crutches instead of learning to think and do for themselves! The end of deep, focused learning has arrived! Besides, what’s wrong with going to the computer lab — or using a typewriter, for that matter?!”)

I exaggerate, of course, but some version of those opposing views, updated a bit for our smartphone- and tablet-influenced times, is probably familiar to anyone who has spent much time in higher education over the past decade. And, just as it did twenty years ago, the apparent conflict between those two views continues to bubble up in contexts that often have little to do with how people actually learn or with any of the various technologies, like laptops, that are now widely available.

Case in point: a few years ago I attended a lunchtime workshop that was advertised as offering an overview of the cognitive science and neuroscience research on multitasking and its effects on learning. Before we started, the facilitators asked all of us to introduce ourselves with a few words about our interest in the topic and what we hoped to learn during the session. As we went around the room, I was increasingly taken aback as every single person gave some version of the same statement: they wanted to know whether laptops, and technology more generally, should be banned outright in classrooms.

[Image: three rows of seated students, with the ‘cone of distraction’ formula superimposed. Caption: When laptops distract! Kyle Bowen’s ‘cone of distraction’ formula.]

That experience was my first inkling that for many people the concept of multitasking is inextricably connected to the use of computers. I had attended that workshop with the assumption that we’d be focusing on the cognitive mechanisms of attention and distraction, and the effects of these cognitive processes on learning; for me, this was entirely unrelated to the presence or absence of technology. Yet my fellow participants had looked at that same topic and arrived at the conclusion that “multitasking” was, apparently, synonymous with “using a computer in class.” In my understanding, lack of attention was a state that might be caused by any number of things (including various external distractors like laptops, to be sure, but also a range of individual and internal factors). To them, lack of attention in class seemed to have just one obvious cause: the laptop computer.

I’ve possibly tipped my hand in relating that anecdote, such that it’s now obvious on which side of the laptops-in-the-classroom debate I usually find myself. But my point is not to make the case either for or against. I’m more interested in how our expectations, assumptions, and individual experiences shape the positions we take in such debates. Technology, broadly speaking, tends to have a distorting effect on any conversation about teaching, such that those with certain predispositions and beliefs about it often see technology as either naturally facilitating and enhancing, or inevitably inhibiting and undermining, learning — regardless of the affordances of any particular technology in a given set of circumstances. For those people who are already favorably inclined toward technology (again, broadly speaking), it’s at worst neutral; at its best, it’s an indisputable positive force in the world. For those who tend to be more dubious about technology, however, it may occasionally be benign — but it is much more often threatening, a force of distraction and disruption. Neither view has much to do with how technologies are actually used, and how they affect people, in classroom settings. Consequently, one inherent danger in any discussion of laptops in the classroom is that it can quickly devolve into an evidence-free trading of personal anecdotes chosen because they appear to justify our chosen position: confirmation bias at its best (or is that worst?).

To that end, I like to revisit the topic periodically, to see what people have recently said on the subject and then attempt to weigh for myself the evidence they’ve presented for or against their preferred response to the question of allowing or banning laptops (or any other technology, for that matter). That doesn’t get around the confirmation bias problem, of course, or mean that I’m more receptive to the opposing view than I might otherwise be. But it does require me at least to acknowledge new voices weighing in on the subject and, I hope, to weigh new evidence that may cause me to reevaluate some of my own views.

So, in that spirit, I’ll share with you a few commentaries I found particularly illuminating when I last surveyed the topic at the end of last year. All are brief and well worth reading in full. The first, by Susan Dynarski, was published in the New York Times: “Laptops are Great. But Not During a Lecture or a Meeting” (Nov. 22, 2017). She writes,

[A] growing body of evidence shows that over all, college students learn less when they use computers or tablets during lectures. They also tend to earn worse grades. The research is unequivocal: Laptops distract from learning, both for users and for those around them. It’s not much of a leap to expect that electronics also undermine learning in high school classrooms or that they hurt productivity in meetings in all kinds of workplaces.

In response, Matthew Numer’s counterargument in the Chronicle of Higher Education, “Don’t Insult Your Class by Banning Laptops” (Dec. 4, 2017), focused on the goal of letting students make their own choices about technology use:

Students should be insulted. Telling them they can’t use their laptops or smartphones in class is treating adults like infants. Our students are capable of making their own choices, and if they choose to check Snapchat instead of listening to your lecture, then that’s their loss. Besides, it’s my responsibility to ensure that my lecture is compelling. If my students aren’t paying attention, if they’re distracted, that’s on me.

Based on a study he conducted, Numer goes on to conclude, “Laptops, turns out, aren’t evil… These technologies promote collaboration and make it easier. They are both research and study tools.”

Others weighed in as well. On his personal blog, Mark Sample posted a list of ten ways he and his students have used laptops in courses he teaches (“Ten Things We Did with Laptops in Class Instead of Banning Them”, Dec. 13, 2017). Chasing down some of the ideas in that post led me, in turn, to Cathy Davidson’s post for HASTAC, “10 (Even More Basic) Things We Did With Laptops In Class Instead of Banning Them” (Dec. 13, 2017). Among the highlights:

This fundamental “contribution to public knowledge” and continual knowledge sharing with our class is the baseline reason for laptops used (not necessarily owned) by students in my classrooms.

I neither require nor ban laptops. I never use any technology (including index cards and pencils — or books or articles) without having students discuss the affordances of that particular technology. When I do use laptops in my classrooms, I make sure that it is for a purpose and that everyone in the class participates for that purpose. [Original emphasis]

What strikes me in particular about Davidson’s comment quoted above is its emphasis on the pedagogical considerations underpinning any choice about technology — above all, the opportunity always embedded in technology’s use to help students consider their own learning process and evaluate for themselves whether the tools contribute to it or detract from it.

Oh, and if you’re curious about the key findings from some of that research on multitasking I mentioned earlier — well, it turns out:

  1. There is really no such thing (no one truly multitasks in any meaningful sense; we actually just switch rapidly between discrete tasks).
  2. Multitasking has a high cognitive cost and leads to greater inefficiency, and a higher rate of errors, in completing each of the individual tasks involved.

Featured image: “University of Missouri School of Journalism” by Brett Jordan, licensed under CC BY 2.0.