
Category: Assessment

Topics related to assessing teaching and learning

More on Student-Generated Exam Questions

Who writes “better” exam questions? In an earlier blog post, I talked about student-generated exam questions. Shriram Krishnamurthi, a professor of computer science at Brown University, said the approach is an example of what he called “Contributed Student Pedagogy.” Let’s add Benjamin Wiles, the chief data officer at Clemson University, to the conversation. Are you familiar with self-determination theory? According to the website Verywell Mind, “In psychology, self-determination is an important concept that refers to each person’s ability to make choices and manage their own life. Self-determination allows people to feel that they have control over their choices and lives. It also has an impact on motivation—people feel more motivated to take action when they feel that what they do will have an effect on the outcome.” Will student-generated exam questions foster engagement? Will greater student learning take place?

Wiles has used this approach in some of his math courses. He knows that some students may be a bit hesitant at first, but he believes it is an excellent way to build community and engagement from the very first day of the term. In fact, he asks students to help write the syllabus with him.

In a study at the University of Michigan, published in 2015 in the Journal of Dental Education, the rigor of exam questions generated by students in a first-year dentistry course was compared with that of questions generated by instructors. Here’s where it gets interesting: three experts did a blind review of the exam questions, using Bloom’s Taxonomy as a scale, and discovered that almost half of the students’ questions assessed high-level cognitive skills, while only 16 percent of the instructors’ questions did. That gave me pause!

The study also interviewed students about this exercise (not the instructors, though!). More than three-quarters of the students found the exercise helpful in making connections between the dentistry course and other courses they were taking. “This exercise forced me to evaluate questions and review why they were right and wrong,” one student wrote.

Wouldn’t this be an interesting exercise to try in one of your classes this fall? Remember that students don’t automatically write great questions. Like the students in Wiles’ classes, yours may need some coaxing and coaching, and an introduction to Bloom’s Taxonomy, but the outcomes will help them make more and better connections both in your class and in other courses.


Quick Tip #4 – Rounding Grades

What’s your practice when it comes to rounding student grades? Thumbs up or thumbs down?

Back in April of 2019, Megan Von Bergen wrote in Faculty Focus, “Although some students need a ‘second lap’ to master academic skills needed for later coursework, repeating courses makes it harder for students to progress toward a degree. Time is money (literally, in higher education), and when students are asked to spend more of both on a class they already took, they may get discouraged or drop out.”

What are some of the questions you might ask when deciding whether to round a grade up or down? Here are Von Bergen’s recommendations for how to think about this:

How did students perform on important assignments?

Suppose a student in your class misses some minor assignments, submits other assignments late, and has a spotty attendance record, but has done quite well on all the major assignments and exams. Has this student demonstrated learning sufficiently? Are you willing to pass this student?

Did the student improve over the course of the semester?

Let’s say you have a student who had a slow start in your class. Maybe they were unprepared for the level of rigor or weren’t quite ready for a college-level class. For the first two weeks they struggled, stumbled on a few assignments, and their first exam grade was not so great. Then, two or three weeks into the term, they began putting the pieces together, and their performance on assignments and exams steadily improved. Compare the first assignments with the last: this student went from a failing grade to a solid B. Has this student demonstrated learning sufficiently to earn a B for the course?

Did the student meet the course objectives?

Von Bergen writes, “Course objectives are the finish line of a race: like a marathoner who leaves the course at Mile 20, a student who does not reach the objectives has not fully completed the course. If a student falls short of a significant number of the objectives, she should retake the course, so that she has the opportunity to acquire important skills.” Some of the courses you teach may have a long list of objectives. Are they all equally important? Suppose a student in your class has adequately demonstrated that she has met most of the objectives. How many of the objectives are students required to meet? All of them? Most of them? Which ones really matter, especially for later coursework? Has the student who meets “most” of the objectives demonstrated sufficient learning to earn a passing grade?

Conclusion

Von Bergen writes, “Even with assessment tools such as rubrics, grading is unavoidably contingent; any final decision always depends upon the individual student’s situation.” As instructors we tend to view student work in binary terms: complete or not, correct or not. Some will argue that the work of teaching is to cover the material, and while I agree, let’s also look at grading in a more holistic way.

Want to read more about grading?

Check out this Faculty Focus article, Grading and Chaos Theory: Frustrations and Exhilarations of Parsing Motivating Factors: “The academic freedom to evolve a philosophy of grading, thanks to tradition and the American Association of University Professors (AAUP) encouragement of which, ‘The assessment of student academic performance…is a direct corollary of the instructor’s “freedom in the classroom”…,’ provides a path of exploration regarding what, why, and how I grade.”

Here’s another recent article from The Scholarly Teacher, Grading as Instruction: Designing for Practice by Barry Sharpe. “There is much discussion about and research supporting the importance of formative assessments for student learning (Fisher & Bandy, 2019). I worry, however, that in practice, some formative assessments end up functioning more like summative assessments for students.”


Contributed Student Pedagogy

It’s the end of the quarter. We’re getting ready for final exams. Many of you have spent a great deal of time thinking about and writing questions and problems for that exam, knowing that this is an opportunity to bring together everything your class covered in the past 10 weeks.

Shriram Krishnamurthi, a professor of computer science at Brown University, asks students to contribute possible exam questions. He says that this is an example of what he calls “Contributed Student Pedagogy.” He doesn’t use this technique to make his life easier; instead, he uses the questions to develop concept inventories.

According to Exploring How Students Learn, “A concept inventory is a test to assess students’ conceptual understanding in a subject area.” Krishnamurthi says, “This is a very lightweight, cheap way of generating and evolving fairly good inventories.”

Krishnamurthi and Brown University colleagues Sam Saarinen, Kathi Fisler, and Preston Tunnell Wilson wrote more about this in their paper Harnessing the Wisdom of the Classes. They have also provided supplemental materials that we encourage you to check out.

By now we might have piqued your interest in concept inventories. Here’s a podcast from Michelle Smith, an associate professor in the School of Biology and Ecology at the University of Maine. In this podcast from Teach Better, Professor Smith gives advice for finding, creating, and administering concept inventories.

What’s another approach to a successful end-of-quarter exam? In another article from Teach Better, Doug McKee writes that he tried something new for an online class: a collaborative exam. “I strongly encouraged individuals to complete the exam on their own first and then meet to discuss their solutions—If you think this sounds an awful lot like a two-stage exam, you’re right! After the fact, many students told me they learned a lot during the exam through this collaboration process.”

How successful was this experiment? Well, pretty successful if you measure by student scores. But McKee realizes that there might have been students who simply copied other students’ work. He writes, “I’m seriously considering a hybrid approach where I first ask them to take a short (say 10 question / 30 minute) randomly generated multiple choice exam on their own. Then they take a collaborative exam like the one I gave as a midterm. It would be much less work than a full-on multiple choice set up, but it would still let me identify those students who have no idea what’s going on and free-rode on the midterm. The collaborative piece would let me ask tougher questions and keep all the learning that happens during the exam. Their score would be a weighted average of what they get on the two parts.”
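To make that proposal concrete, here’s a minimal sketch of the weighted-average scoring in Python. The 30/70 split between the individual and collaborative parts is my own hypothetical assumption; McKee doesn’t specify the weights.

```python
# A hypothetical sketch of McKee's proposed two-part scoring.
# The 30/70 weighting is an assumption; the article doesn't give weights.

def hybrid_exam_score(individual_pct, collaborative_pct, individual_weight=0.3):
    """Combine the solo multiple-choice score and the collaborative exam
    score (both on a 0-100 scale) as a weighted average."""
    return (individual_weight * individual_pct
            + (1.0 - individual_weight) * collaborative_pct)

# A student who free-rides on the collaborative part still has to show
# what they know on the individual component.
print(hybrid_exam_score(40, 95))  # 78.5
```

Even with modest weights like these, a student who scored 40 alone and 95 collaboratively ends up well below the collaborative score, which is exactly the free-rider signal McKee is after.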

These are just some thoughts on generating exams and exam questions, whether in an online class or not, and if you have other ideas you’d like to contribute please leave your comments!


Student Feedback Loops

A few years ago, Edutopia, an excellent resource for faculty teaching at all levels, shared an article by Taylor Meredith on student feedback loops. Meredith writes, “A feedback loop is a process of checking for and affirming understanding that is specific, non-evaluative, manageable, and focused on a learning target.”

What Are Student Feedback Loops?

This process aims to move learning forward through feedback. Ideally, this feedback loop would happen frequently, in all subject areas. Meredith offers these steps as a way to start the process:

1. Begin With an Aim

An aim is a learning target or essential question, unpacked from the standards as part of a learning progression, that is clearly communicated to the students at the beginning of each lesson.

2. Feedback Exchange

Feedback should be specific, non-evaluative, manageable, and focused on the aim. If the aim for the day is that writers should structure reasons to develop a compelling argument in a research-based essay, all feedback exchanged should be focused on that aim.

3. Revision and Application

In order for feedback to be effective, students must be given time to revise and apply their new understandings or ideas. Susan Brookhart and Connie Moss, authors of Advancing Formative Assessment in Every Classroom: A Guide for Instructional Leaders, speak of the Golden Second Opportunity, that moment when feedback is grasped and applied. When a student takes the feedback, makes changes to his or her work, and as a result moves a step closer to meeting the desired learning of the day’s aim, then the loop has started. It is authentic, purposeful learning. The teacher begins the process, but the student owns it.

4. Reflection

Closing the loop is time to reflect on the aim. Did students meet the desired learning of the day’s aim? Could they move to a different level of proficiency? Could they ask for more feedback? Are there any other areas to revise?

In student feedback loops, students are the ones who drive this process. The teacher supports the students by clearly defining a structure for feedback, modeling effective feedback, highlighting critical student feedback, and participating when necessary.

That’s Meredith’s approach to student feedback loops. Now let’s look at the feedback process as a way to improve your teaching. Consider formative assessments such as the Minute Paper as a way to solicit comments from students about a class lesson or a just-completed project. Spend several minutes at the end of class and have students do a quick, anonymous write to tell you whether they had difficulties, felt the directions weren’t clear, or were able to correctly summarize the topic. Using formative assessments to hear what students have to say about your class can improve your dialogue with students and help them develop a sense of belonging. If you collect student feedback, though, make sure you respond, and that your response comes quickly. Tell students what you learned (for example: “I heard you say that the directions on the project weren’t clear, and I will make sure to check in with you about directions before the next project is assigned.”)

Learn more about using formative assessments in your classes by reading this Edutopia article, 7 Smart, Fast Ways to Do Formative Assessment.

Resources:

Visible Learning for Teachers


Revisiting Online Quizzes

The Teaching and Tools workshop series included two seminars with the tongue-in-cheek title “Beat the Cheat.” The first session was a broader exploration of the general premise of exams as an assessment tool (spoiler alert: Derek is an occasional skeptic), and the second session explored some of the Canvas features that allow for “security” measures when online quizzes are offered.

Feel free to take a listen to the podcast versions here:

Part One podcast

Part Two podcast

You can also access the transcripts here:

Beat the Cheat part one transcript

Beat the Cheat part two transcript

And the handouts from the in-person workshops are available as well!

Beat the Cheat part one handout

Beat the Cheat part two handout

What Words Appear Most Frequently in Our Course Learning Objectives?

Here’s a fun way to spend a few minutes, assuming you’re the kind of person who enjoys looking at things like course learning objectives. (Is there anyone who doesn’t?)

This is a word cloud representation of the current course learning objectives for most of EvCC’s courses. It was generated using Voyant Tools, an online text analysis platform that can do all sorts of neat and sophisticated things with large quantities of text.

By default, the word cloud displays the most common words appearing in the collected course learning outcomes across all departments and divisions. You can move the Terms slider to display fewer or more words. If you’d like to look at the outcomes for a single course, click the Scale button, select the “Documents” option, and then choose the specific course you’re interested in.

I find this visualization interesting to think about in relation to Bloom’s Taxonomy of Educational Objectives (a nice web version can be found here). By removing many of the domain- and subject-specific words that often appear in learning objectives, the word cloud view illuminates some of the broader categories of learning our courses identify as essential to a student’s progress through a given course and program of study. Looking at these categories in terms of their position along Bloom’s spectrum of lower-to-higher-order thinking strikes me as a productive and potentially revealing exercise: what should we make of the prominence of words like “demonstrate,” “describe,” and “identify” and the diminutive size of “analyze” and “create”?
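If you’d like to try a similar analysis on your own collection of outcomes without Voyant Tools, here’s a minimal sketch using Python’s third-party wordcloud package. The input file name is a hypothetical placeholder for a plain-text export of your outcomes, not an actual EvCC file.

```python
# A minimal sketch of a comparable word-cloud analysis in Python.
# "course_learning_objectives.txt" is a hypothetical file holding one
# learning outcome per line; it is not an actual EvCC export.

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

with open("course_learning_objectives.txt", encoding="utf-8") as f:
    text = f.read()

# Dropping common stopwords leaves the action verbs ("demonstrate",
# "describe", "identify") that map onto levels of Bloom's Taxonomy.
cloud = WordCloud(width=800, height=400, stopwords=STOPWORDS,
                  background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```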


A First Look at Equity in eLearning

As EvCC has continued its Guided Pathways efforts over the past year, equity has been frequently discussed as essential to helping students make informed decisions about their education and future careers. In a post on the Guided Pathways blog last spring, Samantha Reed discussed some of the ways that increased awareness of equity considerations can help programs identify gaps in outcomes, thereby creating openings for change that will help us “make sure our institution serves all our students equitably.” More recently, Director of Institutional Research Sean Gehrke has been posting on using data to identify equity gaps. Equity was also a topic of discussion at the summer meeting of our system’s eLearning Council, where we noted as a clear priority the need for more research on “equity gaps in applying technology to learning” and “structural barriers to access to technology-mediated instruction.”

Prompted by some of these ongoing conversations, I decided to do a little initial investigating of my own to see where there might be obvious equity gaps in the context of eLearning at EvCC. The real work of examining equity is difficult and potentially requires multiple types of data in order to get meaningful analytical purchase on its many dimensions. So as a somewhat easier starting point, I posed a fairly simple question: “Are there significant differences between student populations in face-to-face and online courses at EvCC?” Granted, that’s probably a diversity question rather than an equity question, but it creates necessary space for considering those more challenging equity issues in online learning. Once we have a better sense of who might be missing from online courses, we can take up the questions of why they’re missing and how their absence may be symptomatic of systemic inequities.

To answer my question, I turned to our institutional Enrollment, Headcounts, and FTE Tableau dashboard (thank you, Sean!) and started crunching some numbers.
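For the curious, here’s a minimal sketch of what that comparison might look like in pandas rather than Tableau. The column names and the toy enrollment records are hypothetical placeholders, not fields from our actual dashboard.

```python
# A hypothetical sketch of comparing student populations across modalities.
# The columns and rows below are illustrative, not real EvCC data.

import pandas as pd

enrollments = pd.DataFrame({
    "modality": ["online", "face-to-face", "online", "face-to-face",
                 "face-to-face", "online"],
    "race_ethnicity": ["Hispanic", "Hispanic", "White", "White",
                       "Asian", "White"],
})

# Share of each group within each modality; a large difference between
# the rows flags a population gap worth a closer look.
shares = (enrollments
          .groupby("modality")["race_ethnicity"]
          .value_counts(normalize=True)
          .unstack(fill_value=0))
print(shares.round(2))
```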


Incorporating Metacognition Practices Mid-Quarter


At a recent conference on departmental support of evidence-based teaching practices in Biology (PULSE), I picked up two metacognition techniques to bring into my classrooms. They seemed so powerful and, honestly, so easy to implement that I tried them the following week.

The first idea stems from work that Ricky Dooley (a new colleague in Biology) developed with Scott Freeman and others at the University of Washington. In my majors’ Biology class, I have weekly quizzes over the past week’s material: standard in-class quizzes, mostly multiple choice (taken with iClickers), with a short-answer question here and there. Student performance was mixed, and when we went over the correct answers, many students had “ah-ha” moments when ideas began to click.

Of course, these ah-ha moments came a few moments too late to help on that particular quiz. What I’ve begun doing is flipping that around. First off, I’ve moved the quiz completely onto Canvas. And rather than the usual 10 questions for 10 points, quizzes are now 20 questions, still worth 10 points. The first question is the usual question I would ask (although I’ve added more short-answer questions, reflecting the questions I will ask on the exams). This first question, like all of the odd-numbered questions, is worth zero points, so there’s no risk to the student in doing their best from memory (and no reason to cheat). The second question, like all of the even-numbered questions, repeats the same question, followed by how I would answer it. It then asks the student whether they think they got it right, wrong, or somewhere in between. If they didn’t get it right, I ask them to 1) explain why they got it wrong, 2) give the right answer, and 3) explain why the right answer is correct. This question is worth 1 point, and I grade it based upon how they’ve reflected on their work. Sometimes, within their summary explanations, students will still not fully understand the material. Here, it’s very easy for me to jump in (while grading) and help them individually. An additional benefit is that these quizzes, with the addition of more short-answer questions, more closely resemble the question types I have on my midterms.
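For readers who like to see structure spelled out, here’s a minimal sketch in Python of the pairing-and-scoring scheme described above. The QuizItem structure, the build_paired_quiz helper, and the sample question are hypothetical illustrations, not the actual Canvas quiz settings.

```python
# A hypothetical sketch of the paired-question quiz structure described
# above. Question text and the QuizItem fields are illustrative only.

from dataclasses import dataclass

@dataclass
class QuizItem:
    position: int
    text: str
    points: float

def build_paired_quiz(questions):
    """Expand each (question, model_answer) pair into two quiz items:
    an ungraded attempt and a one-point self-assessment."""
    items = []
    for i, (question, model_answer) in enumerate(questions):
        # Odd positions: the question itself, worth zero points,
        # so students can answer from memory with no risk.
        items.append(QuizItem(2 * i + 1, question, 0.0))
        # Even positions: the same question plus the instructor's answer,
        # asking students to judge and explain their own attempt.
        reflection = (
            f"{question}\n\nInstructor's answer: {model_answer}\n\n"
            "Did you get it right, wrong, or somewhere in between? "
            "If you missed it, explain why, give the right answer, "
            "and explain why that answer is correct."
        )
        items.append(QuizItem(2 * i + 2, reflection, 1.0))
    return items

quiz = build_paired_quiz([
    ("What does DNA polymerase do during replication?",
     "It synthesizes a new DNA strand complementary to the template."),
])
for item in quiz:
    print(f"Q{item.position} ({item.points} pts): {item.text[:60]}...")
```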

The first time I did this (in the fifth week of this quarter), my last question asked the students their opinion on this new style of testing. With the exception of the one student who was already doing exceptionally well, feedback was very positive. Students appreciated the ability to correct themselves and felt that they better understood the material. Their explanations seemed genuine to me, so I’m hopeful that they’ll perform better on our midterms.

The second idea I implemented I borrowed from another biology colleague, Hillary Kemp. I’ve used this in my non-majors Cellular Biology course, one that is typically tough for many students as they begin their path toward an allied health degree. Exam performance on my short-answer questions is always spotty (lots of higher-order Bloom’s Taxonomy questions). Usually I would go over the correct answers with the class, in the hopes that they’d do better on the final. Now, rather than go over those answers, I give them their marked-up short-answer sections back and let them correct their answers for partial credit. I stress that in their corrections I’m looking for them to explain why they got it wrong and why the correct answer is correct. This is worth just enough to eliminate the need to curve the exam (essentially, they’re working to “earn” the curved points). In my large class (n=48), results were mixed. Many students clearly explained why they got it wrong and understood why the correct answer is correct. However, others just put down correct answers or, worse, Googled the answer and put down technically correct answers well above the level of our course. Again, I awarded points based upon their explanations rather than the correctness of their answers. I think this exam reflection is helping those students who genuinely want to do well in class, as opposed to those who are maybe not too sure about this degree path. I’m hopeful that performance on our comprehensive final will show improvement because of this reflection exercise.

This post was generously contributed by Jeff Fennell, who teaches in the Biology department at Everett Community College.


Friday Fun with Core Learning Outcomes

As our college has been gearing up for its accreditation site visit, which happens next week, I’ve been thinking quite a bit about our seven Core Learning Outcomes (CLOs). Naturally, they play a fairly large role in the self-evaluation report that we’re submitting to the accreditors, so that’s one reason they’ve been on my mind. But I’ve also been thinking about connections among those college-wide outcomes, and how those connections inform in various ways EvCC’s ongoing Guided Pathways work.

Noodling about on that topic recently, I found myself wanting to visualize how many CLO connections there actually are among the courses we offer. In particular, I wondered whether there might be certain clusters of courses that all support the same CLOs, even if the courses themselves are part of different departments, programs, or degree/certificate pathways. Here’s the visualization I came up with:

To get a sense of how courses in different divisions are connected to one another via shared CLOs, click on the name of a division you’re interested in. This will highlight all courses within that division. You can also click and drag any of the nodes representing an individual CLO; the node will lock in place wherever you release it, which can make it a bit easier to see which courses are clustered around specific outcomes. Hover your cursor over an individual course to reveal the specific CLOs it introduces. (A larger view is also available.)
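If you’d like to experiment with a similar network for your own curriculum, here’s a minimal sketch using Python’s networkx library. The course names and CLO labels are made up for illustration, and this isn’t the data behind the interactive visualization above.

```python
# A minimal sketch of a course-to-outcome network with networkx.
# The courses and CLO labels below are hypothetical examples; real data
# would come from the college's curriculum records.

import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("ENGL 101", "CLO 1: Communication"),
    ("ENGL 101", "CLO 2: Critical Thinking"),
    ("MATH 141", "CLO 2: Critical Thinking"),
    ("MATH 141", "CLO 3: Quantitative Reasoning"),
    ("BIOL 160", "CLO 2: Critical Thinking"),
    ("BIOL 160", "CLO 3: Quantitative Reasoning"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Courses that share a CLO end up clustered around the same node,
# which is the pattern the interactive visualization highlights.
pos = nx.spring_layout(G, seed=42)
clo_nodes = [n for n in G if n.startswith("CLO")]
nx.draw_networkx(
    G, pos,
    node_color=["tomato" if n in clo_nodes else "skyblue" for n in G])
plt.axis("off")
plt.show()
```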

Do you see any interesting patterns in the network? Are there any groupings of courses you might not expect? How might visualizations like this help us see new connections or patterns that could help us approach our Guided Pathways efforts–particularly the process of developing pathways and program maps–with fresh ideas or insight into possible points of intersection across our college’s academic divisions?


Mid-Quarter Feedback

Student voice matters.

I think there is no way we can dispute this. If we wait until the end of the quarter and expect students to provide valuable information on their learning experiences in our class, chances are the disgruntled students (and aren’t there always a few?) will let us know what went wrong. Why don’t students tell us earlier if they want changes made in the course (and I’m talking about classroom activities, not course content)?

Because we didn’t ask them.

Mid-quarter check-ins are perfect opportunities to get feedback on how things are going. You may be familiar with my all-time favorite, PLUS/DELTA. It’s a simple grid with four spaces: students reflect not only on what the teacher is doing to help them learn (PLUS) and what the teacher can change to help learning (DELTA), but also on their own behaviors that are helpful and those that should be improved upon.

The most important part of this process is reading and reviewing the anonymous (and I believe it should be anonymous) feedback from students, and then responding, closing the feedback loop.

Here’s the story of the first time I used PLUS/DELTA. I gave each student in the class a copy of the grid and assigned it as a reflective assignment for that evening. The following class period I asked students to team up in groups of 3-5 and look for trends in the areas related to me, the teacher. I gave each small group another copy of the grid, and in about 10 minutes each group had 3-5 items in both columns, PLUS and DELTA.

That night I read over the small stack of papers, only one from each team, and formulated my response. It started something like this: “Thank you all for your thoughtful responses. Let me share with you the things that you’d like me to continue doing in the class that are helping you to learn (and I put that list on the teaching station to share). Now let me share with you the things you’ve suggested I change (and I put that list on the teaching station). As you know, I can’t stop giving exams, but I can change the day of the week.” And so on.

Interestingly, the class became much more engaged after that exercise! Before the end-of-quarter evaluations that quarter, I reminded students that their feedback helped make this a better class and me a better teacher, and I reminded them of the changes that were made because of their feedback. The next quarter, when I was reviewing my IDEA results, I was pleased to read this student comment: “No one ever asked me before how I would change the class. Thank you!”

If you want to get feedback on a more regular basis, here’s one I found by a copy machine recently. (I can’t give credit because there was no information on the handout!)

Directions: Please fill out one or both squares and drop in the basket up front before you leave. NO NAMES PLEASE! This is anonymous!

MURKY

This week, what part of the lesson, or what point is still a bit unclear to you? What are you struggling with? And what could I, your instructor, have done/can do to make your learning easier?

CLEAR

This week what part of the lesson, or what point, was finally made clear to you? What was your “ah-ha!” moment? And/or what did YOU do this week that made your learning easier?

Do you have favorite anonymous feedback examples you’d like to share? Let us know!
