
Category: Assessment

Topics related to assessing teaching and learning

Revisiting Online Quizzes

The Teaching and Tools workshop series included two seminars with the tongue-in-cheek title “Beat the Cheat.” The first session was a broader exploration of the general premise of exams as an assessment tool (spoiler alert – Derek is an occasional skeptic), and the second session explored some of the Canvas features that allow for “security” measures when online quizzes are offered.

Feel free to listen to the podcast versions here:

Part One podcast

Part Two podcast

You can also access the transcripts here:

Beat the Cheat part one transcript

Beat the Cheat part two transcript

And the handouts from the in-person workshops are available as well!

Beat the Cheat part one handout

Beat the Cheat part two handout



What Words Appear Most Frequently in Our Course Learning Objectives?

Here’s a fun way to spend a few minutes, assuming you’re the kind of person who enjoys looking at things like course learning objectives. (Is there anyone who doesn’t?)

This is a word cloud representation of the current course learning objectives for most of EvCC’s courses. It was generated using Voyant Tools, an online text-analysis platform that can do all sorts of neat and sophisticated things with large quantities of text.

By default, the word cloud displays the most common words appearing in the collected course learning outcomes across all departments and divisions. You can move the Terms slider to display fewer or more words. If you’d like to look at the outcomes for a single course, click the Scale button, select the “Documents” option, and then choose the specific course you’re interested in.
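
For the curious, the word-frequency count that underlies a word cloud like this is simple to sketch. Here’s a minimal Python version using a handful of invented learning objectives (not EvCC’s actual outcomes) and a tiny stop-word list; Voyant does far more, but the core idea is the same:

```python
from collections import Counter
import re

# Hypothetical sample of course learning objectives (illustrative only,
# not EvCC's actual outcome statements).
outcomes = [
    "Demonstrate an understanding of cell structure.",
    "Describe the major phases of mitosis.",
    "Identify and describe common rhetorical strategies.",
    "Analyze primary sources to demonstrate historical reasoning.",
]

# A tiny stop-word list; a real analysis (as in Voyant) uses a much fuller one.
STOP_WORDS = {"a", "an", "and", "in", "of", "the", "to"}

def word_frequencies(texts):
    """Count words across all texts, ignoring case and stop words."""
    words = []
    for text in texts:
        words.extend(re.findall(r"[a-z]+", text.lower()))
    return Counter(w for w in words if w not in STOP_WORDS)

freqs = word_frequencies(outcomes)
print(freqs.most_common(3))
```

A word cloud is then just these counts mapped onto font sizes, which is why verbs that recur across many outcomes dominate the display.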

I find this visualization interesting to think about in relation to Bloom’s Taxonomy of Educational Objectives (a nice web version can be found here). By removing a lot of the domain- and subject-specific words that often appear in learning objectives, the word cloud view illuminates some of the broader categories of learning our courses identify as essential to a student’s progress through a given course and program of study. Looking at these categories in terms of their position along Bloom’s spectrum of lower-to-higher-order thinking strikes me as a productive and potentially revealing exercise: what should we make of the prominence of words like “demonstrate,” “describe,” and “identify” and the diminutive size of “analyze” and “create”?


A First Look at Equity in eLearning

As EvCC has continued its Guided Pathways efforts over the past year, equity has been frequently discussed as essential to helping students make informed decisions about their education and future careers. In a post on the Guided Pathways blog last spring, Samantha Reed discussed some of the ways that increased awareness of equity considerations can help programs identify gaps in outcomes, thereby creating openings for change that will help us “make sure our institution serves all our students equitably.” More recently, Director of Institutional Research Sean Gehrke has been posting on using data to identify equity gaps. Equity was also a topic of discussion at the summer meeting of our system’s eLearning Council, where we noted as a clear priority the need for more research on “equity gaps in applying technology to learning” and “structural barriers to access to technology-mediated instruction.”

Prompted by some of these ongoing conversations, I decided to do a little initial investigating of my own to see where there might be obvious equity gaps in the context of eLearning at EvCC. The real work of examining equity is difficult and potentially requires multiple types of data in order to get meaningful analytical purchase on its many dimensions. So as a somewhat easier starting point, I posed a fairly simple question: “Are there significant differences between student populations in face-to-face and online courses at EvCC?” Granted, that’s probably a diversity question rather than an equity question–but it creates necessary space for considering those more challenging equity issues in online learning. Once we have a better sense of who might be missing from online courses, we can take up the questions of why they’re missing and how their absence may be symptomatic of systemic inequities.

To answer my question, I turned to our institutional Enrollment, Headcounts, and FTE Tableau dashboard (thank you, Sean!) and started crunching some numbers.
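
For what it’s worth, the arithmetic behind that kind of comparison is straightforward: convert each mode’s headcounts into population shares, then look at the gaps between shares. Here’s a minimal sketch with made-up group names and numbers (not EvCC’s actual enrollment data):

```python
# Hypothetical headcounts by student group for each delivery mode.
headcounts = {
    "face_to_face": {"Group A": 1200, "Group B": 400, "Group C": 400},
    "online":       {"Group A": 700,  "Group B": 150, "Group C": 150},
}

def shares(counts):
    """Convert raw headcounts into each group's share of total enrollment."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

f2f = shares(headcounts["face_to_face"])
online = shares(headcounts["online"])

# A positive gap means the group is overrepresented online relative to
# face-to-face; a negative gap means it is underrepresented.
for group in f2f:
    gap = online[group] - f2f[group]
    print(f"{group}: face-to-face {f2f[group]:.1%}, "
          f"online {online[group]:.1%}, gap {gap:+.1%}")
```

Comparing shares rather than raw headcounts matters because online enrollment is smaller overall; raw counts would make every group look “missing” online.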


Incorporating Metacognition Practices Mid-Quarter


At a recent conference on departmental support of evidence-based teaching practices in Biology (PULSE), I picked up two metacognition techniques to bring into my classrooms. These seemed so powerful and, honestly, so easy to implement that I tried them out the following week.

This first idea stems from work that Ricky Dooley (a new colleague in Biology) developed with Scott Freeman and others at the University of Washington. In my majors’ Biology class, I give weekly quizzes on the past week’s material: standard in-class quizzes, mostly multiple choice (taken with iClickers), with a short answer question here and there. Student performance was mixed, and when we went over the correct answers, many students had “ah-ha” moments when ideas began to click.

Of course, these ah-ha moments were a few moments too late to help on that particular quiz. What I’ve begun doing is flipping that around. First off, I’ve moved the quiz completely onto Canvas. And rather than the usual 10 questions/10 points, there are now 20 questions, still worth 10 points. The first question is the usual question I would ask (although I’ve added more short-answer questions, reflecting the kinds of questions I will ask on the exams). This first question (and all of the odd-numbered questions) is worth zero points, so there’s no risk to the student in doing their best from memory (and no reason to cheat). The second question (and all of the even-numbered questions) is the same question, followed by how I would answer it. It then asks students whether they think they got it right, wrong, or somewhere in between. If they didn’t get it right, I ask them to 1) explain why they got it wrong, 2) state what the right answer is, and 3) explain why the right answer is correct. This question is worth 1 point, and I grade it based upon how well they’ve reflected on their work.

Sometimes, even in their summary explanations, students will still not fully understand the material. Here, it’s very easy for me to jump in (while grading) and help them individually. An additional benefit is that these quizzes, with the addition of more short-answer questions, more closely resemble the question types I have on my midterms.
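
For anyone curious about adapting this structure, the pairing scheme is easy to sketch. The following is an illustrative Python sketch (the `Question` class and wording are mine, not Canvas’s quiz API): each zero-point content question is immediately followed by a one-point reflection version of the same question.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    points: float

def build_paired_quiz(content_questions):
    """Pair each zero-point content question with a one-point reflection."""
    items = []
    for q in content_questions:
        # Odd-numbered item: the question itself, worth zero points,
        # so students answer from memory with nothing at stake.
        items.append(Question(text=q, points=0))
        # Even-numbered item: the same question plus the instructor's
        # answer, graded on the quality of the student's self-reflection.
        items.append(Question(
            text=f"{q} (Compare your answer with mine: did you get it "
                 "right? If not, explain why, state the right answer, "
                 "and explain why it is correct.)",
            points=1,
        ))
    return items

quiz = build_paired_quiz([f"Question {i}" for i in range(1, 11)])
print(len(quiz), sum(item.points for item in quiz))  # 20 items, 10 points
```

Ten content questions yield twenty quiz items, with all ten graded points riding on the reflections rather than the first attempts.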

The first time I did this (in the 5th week of this quarter), my last question asked the students their opinion on this new style of testing. With the exception of the one student who was already doing exceptionally well, feedback was very positive. They appreciated the ability to correct themselves and felt that they understood the material better. Their explanations seemed genuine to me, so I’m hopeful that they’ll perform better on our midterms.

The second idea I implemented I borrowed from another biology colleague, Hillary Kemp. I’ve done this in my non-majors Cellular Biology course, one that is typically tough for many students as they begin their path toward an allied health degree. Exam performance on my short-answer questions is always spotty (lots of higher-order Bloom’s Taxonomy questions). Usually I would go over the correct answers with the class, in the hopes that they’d do better on the final. Now, rather than go over those answers, I give them their marked-up short-answer sections back and let them correct their answers for partial credit. I stress that in their corrections I’m looking for them to explain why they got it wrong and why the correct answer is correct. This is worth just enough to eliminate the need to curve the exam (essentially, they’re working to “earn” the curved points). In my large class (n=48), results were mixed. Many students clearly explained why they got it wrong and understood why the correct answer is correct. However, others just put down correct answers or, worse, Googled the answer and put down technically correct answers well above the level of our course. Again, I awarded points based upon their explanations rather than the correctness of their answers. I think this exam reflection is helping those students who genuinely want to do well in the class, as opposed to those who are maybe not too sure about this degree path. I’m hopeful that performance on our comprehensive final will show improvement because of this reflection exercise.

This post was generously contributed by Jeff Fennell, who teaches in the Biology department at Everett Community College.


Friday Fun with Core Learning Outcomes

As our college has been gearing up for its accreditation site visit, which happens next week, I’ve been thinking quite a bit about our seven Core Learning Outcomes (CLOs). Naturally, they play a fairly large role in the self-evaluation report that we’re submitting to the accreditors, so that’s one reason they’ve been on my mind. But I’ve also been thinking about connections among those college-wide outcomes, and how those connections inform EvCC’s ongoing Guided Pathways work in various ways.

Noodling about on that topic recently, I found myself wanting to visualize how many CLO connections there actually are among the courses we offer. In particular, I wondered whether there might be certain clusters of courses that all support the same CLOs, even if the courses themselves are part of different departments, programs, or degree/certificate pathways. Here’s the visualization I came up with:

To get a sense of how courses in different divisions are connected to one another via shared CLOs, click on the name of a division you’re interested in. This will highlight all courses within that division. You can also click and drag any of the nodes representing an individual CLO; it will lock in place wherever you release it, which can make it a bit easier to see which courses are clustered around specific outcomes. Hover your cursor over an individual course to reveal the specific CLOs it introduces. (A larger view is also available.)
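
Under the hood, a network like this is just a mapping from courses to the CLOs they introduce. Here’s a minimal Python sketch with invented course names and simplified CLO labels (EvCC’s actual data comes from its curriculum records) that inverts the mapping to find the courses clustered around each outcome:

```python
# Hypothetical course records: each course belongs to a division and
# introduces a set of college-wide Core Learning Outcomes (CLOs).
courses = {
    "ENGL 101": {"division": "Humanities", "clos": {"Communicate", "Think Critically"}},
    "BIOL 160": {"division": "Science",    "clos": {"Think Critically", "Reason Quantitatively"}},
    "MATH 141": {"division": "Science",    "clos": {"Reason Quantitatively"}},
    "HIST 136": {"division": "Humanities", "clos": {"Think Critically"}},
}

def courses_by_clo(courses):
    """Invert the mapping: for each CLO, the set of courses introducing it."""
    clusters = {}
    for name, info in courses.items():
        for clo in info["clos"]:
            clusters.setdefault(clo, set()).add(name)
    return clusters

clusters = courses_by_clo(courses)
# "Think Critically" clusters courses drawn from two different divisions:
print(sorted(clusters["Think Critically"]))  # ['BIOL 160', 'ENGL 101', 'HIST 136']
```

Courses from different divisions that share a CLO land in the same cluster, which is exactly the kind of cross-divisional grouping the visualization makes visible.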

Do you see any interesting patterns in the network? Are there any groupings of courses you might not expect? How might visualizations like this help us see new connections or patterns that could help us approach our Guided Pathways efforts–particularly the process of developing pathways and program maps–with fresh ideas or insight into possible points of intersection across our college’s academic divisions?


Mid-Quarter Feedback

Student voice matters.

I think there is no way we can dispute this. If we wait until the end of the quarter and expect students to provide valuable information on their learning experiences in our class, chances are the disgruntled students (and aren’t there always a few?) will let us know what went wrong. Why don’t students tell us earlier if they want changes made in the course (and I’m talking about classroom activities, not course content)?

Because we didn’t ask them.

Mid-quarter check-ins are perfect opportunities to get feedback on how things are going. You may be familiar with my all-time favorite, PLUS/DELTA. It’s a simple grid with four spaces: students reflect not only on what the teacher is doing to help them learn (PLUS) and what the teacher could change to help their learning (DELTA), but also on their own behaviors that are helpful and those that should be improved upon.

The most important part of this process is reading and reviewing the anonymous (and I believe it should be anonymous) feedback from students, and then responding, closing the feedback loop.

Here’s the story of the first time I used PLUS/DELTA: I gave each student in the class a copy of the grid and assigned it as a reflective assignment for that evening. The following class period I asked students to team up in groups of 3-5 and to look for trends in the areas related to me, the teacher. I gave each small group another copy of the grid, and in about 10 minutes each group had 3-5 items in both columns, PLUS and DELTA.

That night I read over the small stack of papers – only 1 from each team – and formulated my response. It started something like this: “Thank you all for your thoughtful responses. Let me share with you the things that you’d like me to continue doing in the class that are helping you to learn (and I put that list on the teaching station to share). Now let me share with you the things you’ve suggested I change (and I put that list on the teaching station). As you know, I can’t stop giving exams, but I can change the day of the week.” And so on.

Interestingly, the class became much more engaged after that exercise! Before the end-of-quarter evaluations that quarter, I reminded students that their feedback helped make this a better class and me a better teacher, and I reminded them of the changes that were made because of their feedback. The next quarter when I was reviewing my IDEA results, I was pleased to read this student comment: “No one ever asked me before how I would change the class. Thank you!”

If you want to get feedback on a more regular basis, here’s one I found by a copy machine recently (I can’t give credit because there was no information on the handout!).

Directions: Please fill out one or both squares and drop in the basket up front before you leave. NO NAMES PLEASE! This is anonymous!


This week, what part of the lesson, or what point, is still a bit unclear to you? What are you struggling with? And what could I, your instructor, have done/can do to make your learning easier?


This week what part of the lesson, or what point, was finally made clear to you? What was your “ah ha!” moment? And/or what did YOU do this week that made your learning easier?


Do you have favorite anonymous feedback examples you’d like to share? Let us know!


Grading anonymously can help counteract implicit bias

The first time I ever taught a college course, I spent an enormous amount of time grading my students’ essays–many, many hours reading, re-reading, and commenting on their work. Part of the reason grading papers took me so long was obviously my inexperience. I had yet to discover the many small efficiencies that can help speed up the process of evaluating students’ written work. And, of course, being new to teaching I was especially anxious about proving I could do a good job and provide my students with the kind of detailed, constructive feedback that I had received from the teachers who had most influenced and helped me in the past.

Stacks and stacks and stacks of grading. Grading by ninniane licensed under Creative Commons.

But my slowness resulted from another factor as well. I wouldn’t have been able to articulate it at the time, but I now see I was filled with a vague sense that I might, without knowing it, be unfair in the comments I provided and the grades I assigned. How could I possibly be sure that a student’s tardiness the day before wasn’t subtly affecting how I was reading her essay now? How could I know that I wasn’t thinking about another student’s evident lack of preparation for a class presentation the previous week as I deemed his current work worthy of only a ‘C’? Could I truly guard against all of the various ways in which a student’s appearance or behavior might affect my judgment, even though I knew those things had nothing to do with the work I was evaluating now?


Formative Assessment Podcast

A new podcast on Formative Assessment is now available for your listening pleasure. If you prefer text, access the Formative Assessment transcript. This episode is a condensed version of a Teaching and Tools workshop held April 19, 2017 at Everett Community College. The EvCC eLearning webpage includes a list of upcoming events and other workshops that may be of interest to you. Check back for the latest developments and announcements!


Does feedback enhance learning?

Each month I meet with a group of faculty in the New Faculty Academy to discuss chapters in the book How Learning Works: Seven Research-Based Principles for Smart Teaching (Susan Ambrose, Michael Bridges, Michele DiPietro, Marsha Lovett, and Marie Norman). Our discussions allow us to do a deep dive into our classroom practices, and because the faculty come from different disciplines and teaching experiences, there are always rich conversations that enhance our own learning.

Recently we discussed the chapter “What kinds of practice and feedback enhance learning?” There are two kinds of feedback that we tend to give students: formative and summative. Many of us rely solely on summative feedback, such as exams, projects, or papers. I recently heard someone describe relying on exams alone as doing an “autopsy” on student work. Formative assessment, as the authors describe it, focuses on helping students “work smarter”; they believe feedback plays a critical role in “keeping learners’ practice moving toward improvement.” Formative feedback you might be familiar with includes things like the minute paper or the PLUS/DELTA mid-quarter feedback. Both address student learning and allow the instructor to make changes in classroom practice in real time. Using the minute paper at the end of class to ask students to summarize the main ideas of the day allows an instructor to reflect on whether those main ideas got across. A PLUS/DELTA mid-quarter assessment allows students to comment on what’s working in the class to promote their learning (PLUS) and what changes they’d like to see to help them learn better (DELTA).

Here is a short list of the strategies that the authors suggest for targeted feedback:

  • Look for patterns of errors in student work
  • Prioritize your feedback
  • Provide feedback at the group level
  • Design frequent opportunities to give feedback
  • Require students to specify how they used feedback in subsequent work

Of these, I am most interested in knowing whether you have ever used the last strategy: requiring students to specify how they used feedback in subsequent work. Would this be valuable to you, the instructor, and to students? Do you think it would help students connect the dots between different assignments?

If you’d like to learn more about other kinds of formative assessments, please connect with me!



Solve a Teaching Problem

I recently had the opportunity to spend some quality time with one of my teaching and learning heroes, Todd Zakrajsek. Todd is at the University of North Carolina at Chapel Hill, where he is an Associate Research Professor and Associate Director of Fellowship Programs in the Department of Family Medicine. I met him several years ago at a multiple-day professional development workshop – when I was first finding my way in the Pro-D world – and was immediately impressed by his approach to working with faculty. Just today I received an email from The Scholarly Teacher (one of the T&L blogs I read) and the latest post is by Todd: Students Who Don’t Participate in Class Discussions: They Are Not All Introverts. Take a look – it’s worth your time!

Let me get back to an idea that I have been reflecting on since my discussion with Todd: the Threshold Concept. Here’s how he explains it: many faculty are married to the idea of “I have to cover my content! I don’t have time for active learning because everything is important! I am a content expert! I can get through it only if I lecture!” Other faculty believe that lecture should be punctuated with activities: after 10 minutes of lecture about (name your topic), you should do something active to break up the lecture. It’s something we do to keep students’ attention.

Todd suggests that many people, especially our students, have a cognitive capacity that is reached in about 12 minutes. Isn’t that a perfect time to do “some” activity? Yes! But make sure that the activity is designed to solidify that content. Remember, he says, our job is not to cover the material – it’s to uncover the material for students. I know that sounds a bit like it’s coming straight out of a book by one of those educational consultants (believe me, I have plenty of them on my bookshelf!).

Pause here for a moment and ask yourself: what do students really need us for? We ask them to purchase (very) expensive textbooks, and most, if not all, of the material we cover in a course comes right from that text. Can’t a student who is motivated enough just read the text and “learn” the material?

Here are some questions for you to consider (think of this as your homework assignment):

Q1: Your expertise is important, and you were hired to teach in that discipline. If you do not have a background in education, how do you learn how to navigate a class (i.e. how to teach)?

Q2: How do you respond to the question, “What do students really need you for?”

Q3: Can anyone teach?

Q4: Where do you go to solve teaching problems? How do you recognize when you HAVE a teaching problem?

Q5: Interested in learning more about how to identify and solve a teaching problem? Check out The Eberly Center for Teaching Excellence and Educational Innovation.

Share your comments and ideas!
