The Fall term here will start next week, and I’m once again teaching Physics of Energy and the Environment, a course for non-science majors that I’ve written about a few times before. Here, I’ll describe two experiments related to teaching this course, with the hope that the descriptions will be useful to others. One experiment was successful; one was not!
1. A writing exercise
Many discussions of teaching end up touching on “metacognition” — getting students to think about what they’re thinking. It’s easy to see why this should help students learn. It is, after all, what good students typically do, whether they realize it or not — probing their own understanding well enough to figure out that they have a question to ask, reviewing notes while critically asking themselves if they understand what’s written there, etc. How can one get more students to reflect on what they’re doing?
It has occurred to me many times that writing a paragraph after class on what the point of that class was would be a worthwhile activity, not too time-consuming, but requiring some thought to identify key concepts. I’ve implemented this twice in 100-level, general education classes. The first time was a failure. Despite instruction and examples, many students simply listed the events of the class (we took a quiz; we saw a demonstration of an engine; …) rather than their meaning. Grading these takes time, and in a large class it was impossible to adequately reward good responses and critique poor ones. The second time (last Winter) was far better. What changed? I realized that careful grading was crucial to getting this to work, and to make this feasible, we graded, somewhat randomly, only a quarter of the class each time. “We” was my teaching assistant Teddy Hay, who is also a grad student in my lab (doing great work on machine learning, bacterial competition, and such things). The wording in the syllabus was as follows:
Post-class notes. Briefly reviewing what one learned from a class session helps cement one’s understanding. Within 24 hours of the end of each class, submit a short (about 100 words) summary of what the key points of that day’s class were. You can also describe things that were unclear or that need further explanation. These will be submitted on-line, via Canvas. Grading: The notes will be graded on content (i.e. that they capture something important about the day’s lessons) and clarity. We’ll give examples of good and bad notes. Because of the size of the class, we can’t grade everyone’s notes. We will somewhat randomly select ¼ of each set of submissions to grade, the “somewhat” meaning that we’ll make sure that everyone gets evaluated a roughly equal number of times.
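The “somewhat random” selection is easy to automate. Here’s a rough sketch in Python of one way to do it — each session, grade the quarter of the class that has been graded least often so far, breaking ties randomly. This is only an illustration of the idea, not necessarily how we actually picked students.

```python
import random

def select_for_grading(counts, fraction=0.25):
    """Pick a fraction of the class to grade, favoring students graded least often so far.

    counts: dict mapping student name -> number of times already graded.
    Returns the selected names and updates counts in place.
    """
    n_select = max(1, round(fraction * len(counts)))
    students = list(counts)
    random.shuffle(students)                 # random tie-breaking among equal counts
    students.sort(key=lambda s: counts[s])   # least-graded students first (stable sort)
    selected = students[:n_select]
    for s in selected:
        counts[s] += 1
    return selected

# Example: over many class sessions, everyone ends up graded a roughly equal number of times.
counts = {name: 0 for name in ["Ada", "Ben", "Cai", "Dee", "Eli", "Fay", "Gus", "Hal"]}
for session in range(20):
    select_for_grading(counts)
print(counts)
```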
This was a success — most of the notes displayed a good level of thought and reflection. In the course evaluations (more on that later), several students pointed out that they were initially skeptical of this activity, but ended up finding it worthwhile. In my mid-course evaluation, in response to “What’s working well for you in the course?” the post-class notes got the second-highest number of responses.
Did this exercise actually help students understand the course material? I don’t know. But here’s one interesting and encouraging numerical assessment: I listed questions on this year’s midterm exam that were similar to ones on the previous year’s exam, and compared the mean scores on each pair, as plotted below. Data points on the diagonal would indicate the same average score in both years; points above the diagonal indicate that the average this year was higher than on the previous year’s similar question:
As you can see, nearly all the points are above the diagonal. The post-class notes were the only structural change in the course. Of course, this doesn’t prove that they were responsible for the improvement; there are many other possible explanations, including simple random noise. Still, it’s encouraging.
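If you’d like to make a similar plot for your own course, here’s a minimal sketch in Python with matplotlib. The question labels and mean scores are made-up placeholders for illustration, not my actual exam data.

```python
import matplotlib.pyplot as plt

# Hypothetical per-question mean scores (fraction correct) -- not the real exam data.
questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]
mean_last_year = [0.62, 0.55, 0.71, 0.48, 0.80]
mean_this_year = [0.70, 0.60, 0.74, 0.58, 0.79]

fig, ax = plt.subplots()
ax.scatter(mean_last_year, mean_this_year)
for q, x, y in zip(questions, mean_last_year, mean_this_year):
    ax.annotate(q, (x, y))

# The diagonal: points above it mean this year's average on a similar question was higher.
ax.plot([0, 1], [0, 1], linestyle="--", color="gray")
ax.set_xlabel("Mean score, previous year")
ax.set_ylabel("Mean score, this year")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()
```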
Take-home lesson: Having students write post-class notes can work!
2. A scheduling exercise
I was happy with how the Physics of Energy and the Environment course went, and students seemed to like it as well. The course evaluations have always been good, well above the average for similar-level general education courses (as noted here), but this past time they were higher than they’ve ever been: my “course” and “instructional” quality scores were both 4.6 out of 5.0. The class typically has about 60 people, which just about fills the room it’s usually in. Students like the course, I reasoned, and I think it’s an important course, so let’s try to get more people to take it by holding it in a larger room. Finding an available larger room is challenging, but with help from my colleagues who deal with scheduling I managed to find an excellent room in a different building that can hold twice as many people. The class is scheduled to be there this coming term.
The class enrollment, as of today: about 30!
Why? I don’t know. An embarrassing possibility is that I’ve messed this up myself. The “catch” with getting this room was that it was available at a non-standard time, in between the usual afternoon scheduling blocks. It might therefore cause more than the usual number of conflicts with students’ schedules, which didn’t occur to me at all when I agreed to the room. I may, therefore, have thwarted my own plan. A colleague of mine noted that it’s just like The Gift of the Magi.
Of course, there are other reasons the course might have low enrollment. The response rate for the course evaluations is about 50% (which is typical), so perhaps the non-responding half hated the class. Or perhaps there is an abundance of other gen-ed science courses to choose from this term. Or perhaps it’s just random noise. I suppose I shouldn’t really care: teaching is exhausting, and it’s better for me and for the students to have smaller classes. Still, it’s a bit discouraging.
Take-home lesson: Course scheduling, and course enrollment, are unfathomable mysteries.
Update (Sept. 28, 2017): Despite having about 30 people enrolled in the course a week before it started, there were 55 signed up on the first day. Why so many last-minute enrollments? I have no idea. But it does suggest that my second experiment wasn’t too much of a failure.
Today’s illustration…
A crayfish. Last month, the kids and I found several crouched among rocks at the wetlands near our house. The painting is based on a photo, not live observation — they move too fast!