This past term (Fall 2021) I taught a course on image analysis. It was a new course — not just new for me but completely new, though it grew out of an informal image analysis class I’ve taught “off the books” before and have posted about. The class went well, though I severely underestimated how much work it would take to transform an informal course into a real one. In this post, I’ll comment on aspects of the course. The syllabus is here.
Is this a physics class?
Technically, yes. I’m in the physics department, the course was listed as a physics course, and the students were mostly physics students. The content, however, was a mix of computer science, physics, and statistics, with a lot of biological or biophysical examples. Other universities offer image analysis courses, but all the examples I found are in computer science or engineering departments. This makes sense — the bulk of image analysis is programming and algorithms — but it also makes sense to learn how one applies these methods to scientific data, and, more importantly, how science, especially optics, constrains what one can do with image data.
Localization-based super resolution microscopy is an illustrative and important example. One can write algorithms for estimating the location of a pixelated point source of light, but understanding how the combination of diffraction and statistics sets limits on accuracy, assessing these limits, and seeing how they apply to, for example, the steps taken by a molecular motor protein is not only satisfying, but important for doing good science. Even though, as I told the students on the first day, they’ll learn less physics in this course than in any other physics course they took, I think that the physics we covered was crucial. The answer to “Is this a physics course?” is therefore “yes.”
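To make the localization idea concrete, here is a minimal sketch of my own (not from the course materials) of the simplest estimator: an intensity-weighted centroid applied to a simulated, shot-noise-limited spot. The Gaussian PSF and all parameter values are illustrative assumptions; serious analyses use Gaussian fitting or maximum-likelihood estimation, which come closer to the accuracy limits set by diffraction and photon statistics.

```python
import numpy as np

def simulate_spot(center, sigma=1.3, n_photons=2000, size=11, rng=None):
    # Pixelated image of a point source, approximating the PSF as a
    # Gaussian (a common simplification; the true PSF is an Airy
    # pattern), with Poisson (shot) noise in each pixel.
    rng = rng if rng is not None else np.random.default_rng(0)
    y, x = np.mgrid[0:size, 0:size]
    psf = np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2)
                 / (2 * sigma ** 2))
    psf /= psf.sum()
    return rng.poisson(n_photons * psf)

def centroid(img):
    # Intensity-weighted centroid: the simplest sub-pixel position
    # estimate (biased if there is uncorrected background).
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total
```

With no background, `centroid(simulate_spot((5.2, 4.8)))` recovers the position to within a small fraction of a pixel — the error scales roughly as σ/√N for N detected photons, which is why photon statistics, not pixel size, sets the precision.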
Besides “yes,” the other acceptable answer to “Is this a physics course?” is “I don’t care.” A key goal was that students would learn tools — practical algorithms and functions that help with a wide variety of analysis tasks. As such, I don’t particularly care what subject category these tools fall into, especially since they’re not taught in the standard science curriculum.
Not surprisingly, the course involved a lot of programming. It wasn’t the place to learn to code, however, as I informed students beforehand. Programming ability was self-assessed, though, and while most students were well positioned to do well, some were not. This required some struggle and some help from me, but overall it turned out well. I think many students felt their skills improve over the ten weeks. I asked that students code in MATLAB or Python. I’m very adept with MATLAB (I’ll self-assess!), but my Python skills were minimal. I therefore decided to do all the assignments in Python as well as MATLAB, which took some effort but was educational!
Regarding students, the broader problem, which I have not done anything to solve, is that we don’t do a great job of teaching students to program, of stating that we expect them to be able to program, or of assessing whether students have taught themselves to program. (The last one is the best route!) As a result, confusion reigns. My department, physics, is making some efforts to address this; it is challenging.
Interlude: book advertisement!
Not related to the course (though there are images!), my popular science book on Biophysics comes out in one month! It’s So Simple a Beginning; my post is here, and the publisher’s site is here. I just noticed yesterday that there’s an Amazon listing.
The topics covered included filtering, image formation, noise, localization algorithms, morphological operations, deconvolution, (a little bit of) machine learning, and more. There were a few things I hadn’t explored before, such as data compression algorithms. Huffman coding, by the way, is clever and elegant, and has a neat history as well, having been invented by a grad student puzzling over a homework assignment. There are interesting historical bits in the course — a recurring theme was that there was a lot of low-hanging fruit in algorithm development available in the 1960s–80s, and it was a great time to get an (in retrospect) obvious technique named after yourself! I started each class with an artwork that, in many cases, reminded me of the day’s topic. As an example, here’s Gerhard Richter’s 1965 painting, “Mutter und Tochter (B.) (Mother and Daughter (B.))”:
The course was cross-listed as an undergraduate and graduate course, but had only two undergraduates. There were 16 graduate students — the class filled — 9 of whom were master’s students. There were initially three others auditing, but two (biology postdocs) dropped around week 4 — it is sadly still the case that programming skills are under-emphasized among biologists. It was a lively and motivated bunch, and the course was fun to teach. (As mentioned, it took a large chunk of my time, though!) Based on conversations, I think the students liked the course. Sadly, despite my pleading, only three filled in end-of-term evaluations — very positive, but hardly a good sampling. It’s not just my course; evaluation response rates these days are terrible. It doesn’t help that the evaluation forms themselves are long and confusing, asking students to comment on aspects like “inclusivity” rather than actual learning, course structure, or instructor competence.
The biggest deficiency in the course is the very minimal exploration of machine learning. We briefly explored support vector machines and had an optional homework assignment about them. We looked at the general idea of neural networks, but didn’t do anything hands-on with them. I had reasons for this: ten weeks is a short amount of time; machine learning is a huge topic; machine learning, especially with neural networks, tends to be a “black box” from which it’s hard to extract insights; and the conventional (and often cutting-edge) ways of doing image analysis can serve as the inputs to and assessments of machine learning approaches. Nonetheless, given its importance and prevalence, I hope to expand this part of the course the next time I teach it. Next year? Who knows…
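For the curious, the flavor of the support-vector-machine material can be conveyed in a few lines of numpy. This is my own minimal illustration, not the course’s homework: a linear SVM trained by sub-gradient descent on the regularized hinge loss. (In practice one would use a library such as scikit-learn; the point here is only the hinge-loss mechanics.)

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    # Linear SVM via sub-gradient descent on the regularized hinge loss:
    #   L = mean(max(0, 1 - y * (X @ w + b))) + lam * ||w||^2
    # Labels y must be +1 or -1.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # points violating the margin contribute gradient
        grad_w = 2 * lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy example: two well-separated 2D clusters, labeled +1 and -1
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.3, (20, 2)),
               rng.normal(-2.0, 0.3, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)
w, b = train_linear_svm(X, y)
predictions = np.sign(X @ w + b)
```

On separable toy data like this, the learned hyperplane classifies every point correctly; the hinge loss is what pushes the decision boundary away from the nearest points of each class.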
Given the topic of this class, I had to channel Chuck Close and make a pixelated painting! (Close passed away last year, by the way, at age 81.) If you squint, you can probably tell what it is.
— Raghuveer Parthasarathy, January 7, 2022