In Spring 2015 I taught a graduate biophysics course for the first time. It was a first in several ways: the course didn’t exist before, so I developed it from scratch, and it was also the first graduate course I’ve taught in my nine years as a professor! I’ve been thinking for months that I should write a summary of how it went, especially because such classes are uncommon enough that describing what worked and what didn’t work might be useful for others.
Overall, the class was a great success. It was fun and rewarding to teach — though it took a lot of work — and the students seemed to get a lot out of it. There were ten graduate students and one undergraduate enrolled, which is large for a graduate elective at Oregon. The student evaluation scores were the highest I’ve ever received, averaging 4.7 out of 5.0 in seven categories.
Here, I’ll describe some of the topics we explored. In the next post, I’ll describe the structure of the course: in-class activities, books and other sources, student projects, and more.
Topics
There were several themes I wanted to cover:
- the major roles that statistical mechanics and, relatedly, randomness and probabilistic processes play in biophysics
- the mechanics of cellular structures
- cellular circuits: how cells construct switches, logic gates, and memory elements
- special bonus theme: amazing things everyone should be aware of
A random walk through biophysics
Much of the course explored the roles of statistical mechanics and, more generally, randomness and probabilistic processes in biophysics. This included the physics of random walks and Brownian motion, and experimental methods for measuring the diffusive properties of proteins and other molecules. We spent quite a while exploring how Brownian motion and other physical constraints shape the strategies that microorganisms use to perform various tasks. For example (a short simulation sketch follows the list):
- Why aren’t there microscopic “baleen whales” that scoop up nutrients as they swim through water?
- Why is it a good idea for a bacterium to cover just a tiny fraction of its surface with receptor molecules?
- Why are bacteria small? How can some bacteria be huge?
- How can bacteria migrate towards regions of higher nutrient density? What are the physical limits on the sensitivity of chemotaxis, and how close do bacteria come to these limits?
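To give a feel for the numbers behind these questions, here’s a minimal random-walk simulation. It’s only a sketch, and the diffusion coefficient and time step are illustrative values I’ve picked, not anything specific from the course. The point is that the mean squared displacement of a Brownian particle grows linearly in time (MSD = 4Dt in two dimensions), which is what makes diffusion fast over microns and hopeless over meters:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 0.4          # diffusion coefficient, um^2/s (made-up illustrative value)
dt = 0.01        # time step, s
n_steps = 1000   # steps per trajectory
n_walkers = 500  # independent trajectories

# Each Cartesian displacement over dt is Gaussian with variance 2*D*dt
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_walkers, n_steps, 2))
positions = np.cumsum(steps, axis=1)

# Mean squared displacement averaged over walkers; theory: MSD = 4*D*t in 2D
msd = (positions**2).sum(axis=2).mean(axis=0)
t = dt * np.arange(1, n_steps + 1)
print("estimated D:", np.polyfit(t, msd, 1)[0] / 4)  # should print a value near 0.4
```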
I’ve commented on some of these topics in past blog posts, for example this one on the non-intuitive nature of diffusion-to-capture.
More generally, we studied several examples of how understanding probabilistic processes enables insights into all sorts of systems. These ranged from recent examples, like using brightness fluctuations to quantify the number of RNA molecules in single cells and to discover “bursts” of transcription (Golding et al. 2005), to classic examples like the famous Luria-Delbrück experiment. In all these cases, a deep message is that probability distributions encapsulate a great deal of information. The variance of some quantity, for example, may be as informative as its mean.
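Because the Luria-Delbrück argument hinges on a variance, it’s easy to demonstrate with a short simulation. Below is a sketch with illustrative parameters of my own choosing (not Luria and Delbrück’s actual numbers): cultures in which resistance mutations arise randomly at division and are inherited show enormous culture-to-culture variance in mutant number (“jackpots”), while the competing hypothesis, mutations induced by the selective agent, predicts Poisson statistics with variance equal to the mean:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 1e-7         # mutation probability per daughter cell (illustrative)
generations = 21  # each culture grows from 1 cell to ~2 million
n_cultures = 500

def mutant_count():
    """Grow one culture; mutations occur at division and are inherited."""
    n, m = 1, 0  # total cells, mutant cells
    for _ in range(generations):
        new_mutants = rng.binomial(2 * (n - m), mu)  # mutations among new daughters
        n, m = 2 * n, 2 * m + new_mutants
    return m

ld = np.array([mutant_count() for _ in range(n_cultures)])
# Null model: resistance induced by the selective agent would give
# Poisson-distributed counts with the same mean
poisson = rng.poisson(ld.mean(), size=n_cultures)

print(f"inherited mutations: mean {ld.mean():.1f}, variance/mean {ld.var() / ld.mean():.0f}")
print(f"induced (Poisson):   mean {poisson.mean():.1f}, variance/mean {poisson.var() / poisson.mean():.1f}")
```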
The mechanics of cellular structures
Understanding the physical properties of biological materials and how they matter for the functioning of living things is central to biophysics, and so we of course discussed the rigidity of DNA, the electrostatics of viral assembly, phase separation in lipid membranes, and other such topics. The connections to randomness and statistical mechanics are clear, since entropic forces and thermal fluctuations are huge contributors to the mechanical properties of these microscopic objects.
As one of many examples of the interplay between energy and entropy, I’ll note DNA melting: the separation of the two strands of a DNA double helix at a particular, well-defined temperature. Before examining it, we learned about PCR (the polymerase chain reaction), the method by which fragments of DNA are duplicated over and over, enabling bits of crime scene debris or tainted food to be analyzed for their genetic fingerprints. Repeated cycles of melting and copying are the essence of PCR, so understanding DNA melting is of practical concern, as well as being very interesting in itself.

Why does DNA have a melting temperature? This is a question whose answer seems obvious, then less obvious, and then interesting as the amount of thought one puts into it increases. At first, one might find it unsurprising that DNA separates at some well-defined temperature. After all, ice melts at a particular temperature, and countless other pure materials have well-defined phase transitions. Looking further, however, one can think of DNA as a “zipper” whose links form a one-dimensional chain, each with a lower energy when closed (base-paired) than open. With a bit of statistical mechanics, it’s easy to show that this chain won’t have a sharp melting transition, but rather will gradually open as the temperature rises, a common property of one-dimensional systems [1]. The puzzle is resolved by properly considering entropy: the double-stranded DNA can also open at points in the middle, forming “bubbles” of open links (see below). Opening these links costs energy but, crucially, increases the entropy of the molecule, since the bubble halves can wobble and fluctuate. Above a critical temperature the entropic contribution to the free energy wins over the energetic benefit of staying linked: bubbles grow, and the DNA melts!

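For the curious, the zipper half of this argument is easy to check numerically. Here’s a sketch of Kittel’s model from note [1]; the chain length, temperatures, and g values are arbitrary choices of mine. For g = 1 (one open state per link) the fraction of open links grows only gradually with temperature, while for g > 1 it jumps sharply near k_B T_c = ε / ln g (about 0.43 in these units for g = 10):

```python
import numpy as np

N = 1000    # number of zipper links
eps = 1.0   # energy to open one link; temperature is in units of eps/k_B

def fraction_open(T, g=1):
    """Mean fraction of open links in Kittel's zipper (opens from one end)."""
    s = np.arange(N + 1)
    log_w = s * (np.log(g) - eps / T)  # Boltzmann weight of s open links
    w = np.exp(log_w - log_w.max())    # subtract max to avoid overflow
    return float((s * w).sum() / (w.sum() * N))

for T in (0.3, 0.4, 0.5, 1.0, 3.0):
    print(f"T = {T}: g=1 -> {fraction_open(T, 1):.3f}, g=10 -> {fraction_open(T, 10):.3f}")
```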
One of the things I especially like about the course is that we can consider “universal” materials like DNA and membranes, but also very specific materials, manifestations of the variety of life. For example, we looked at studies of Vorticella, a one-celled organism that can propel itself several body lengths (hundreds of microns) in milliseconds by harnessing the power of electrostatic forces to collapse bundles of protein fibers.
Cellular circuits: how cells construct switches, logic gates, and memory elements
Cells do more than build with their components; they also compute, making decisions, constructing memories, telling time, and so on. Our understanding of this has blossomed in recent years, driven especially by tools that allow us to create and manipulate cellular circuits. My own thinking about this, especially with respect to teaching it, was influenced heavily by Philip Nelson’s excellent recent textbook Physical Models of Living Systems, which I’ll comment on more in Part II.
We began by learning the basics of gene expression and genetic networks, and then moved on to feedback in these networks and schemes for analyzing bistable switches. The physical modeling of these circuits leads to two interesting observations: (i) particular circuit behaviors are possible only in particular regions of parameter space, which correspond to particular values of biophysical or biochemical attributes, and (ii) the analysis of these networks is exactly like that of other dynamical systems physics students are used to seeing. Neither of these is surprising, but they’re worth discussing, and they tie back to the question I asked myself before the course: whether to include cellular circuits at all. In retrospect, I’m very glad I did, not only because the topic is important, but because it highlights the power of quantitative analysis in biological systems apart from concepts of mechanics or motion. Since this sort of analysis is deeply ingrained in physics education, it provides yet another route for physicists to impact the study of living systems. Of course, it doesn’t have to be so. One could imagine a world in which mathematical analysis was as ingrained in biological education as it is in physics, but despite occasional pleas to make this happen, such a world is far removed from ours.
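As a small taste of that dynamical-systems analysis, here’s a sketch of the standard fixed-point treatment of a self-activating gene, one of the simplest bistable circuits. The model and parameter values are illustrative choices of mine, not anything specific from the course:

```python
import numpy as np
from scipy.optimize import brentq

# Self-activating gene: dx/dt = beta * x^n / (K^n + x^n) - gamma * x
# (x = protein concentration; parameter values are illustrative)
beta, K, n, gamma = 4.0, 1.0, 2, 1.0

def dxdt(x):
    return beta * x**n / (K**n + x**n) - gamma * x

# Bracket sign changes of dx/dt on a grid, then refine each root
xs = np.linspace(1e-6, 10.0, 5000)
fs = dxdt(xs)
roots = [0.0] + [brentq(dxdt, xs[i], xs[i + 1])
                 for i in range(len(xs) - 1) if fs[i] * fs[i + 1] < 0]

for r in roots:
    stable = dxdt(r + 1e-4) < dxdt(r - 1e-4)  # negative slope means stable
    print(f"x* = {r:.3f} ({'stable' if stable else 'unstable'})")
```

This prints fixed points at x* = 0 (stable) and x* ≈ 0.27 (the unstable threshold) and 3.73 (stable), so the circuit is a two-state switch; lower beta below 2 and the two nonzero fixed points vanish, which is observation (i) in miniature. The sign-of-the-slope reasoning is exactly the phase-portrait analysis physics students apply to any one-dimensional dynamical system.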
Amazing things
I decided to end the course with some very amazing, very recent developments in how we examine or understand the living world, regardless of whether or not one would classify them as biophysical. I picked three. I’ll pause for a moment while you guess what they are… (While waiting, you can look at two more owl illustrations. The one at the top is mine; these are from the kids. All are based on photos from the excellent Owls by Marianne Taylor.)
Ready?
One was CRISPR / Cas9, the new and revolutionary approach to genome editing. As readers likely know, CRISPR has generated a frenzy of excitement, more than any scientific advance I can think of from the past decade. While tools for manipulating genomes have existed for a while, CRISPR / Cas9 provides a method to target essentially any sequence simply by providing a corresponding sequence of RNA that guides a DNA-cleaving enzyme. This would be worth covering just for its scientific impact, but it also brings up broader issues of ethics and social impact. How could one go about, for example, engineering human embryos, or destroying pathogenic species? Would one want to? The story behind CRISPR provides a great illustration of the power of basic science. Its discovery in bacteria, from studies of their battles with viruses, was quite a surprise. It’s likely that surprises of similar magnitude still await us in unexplored corners of the living world. Connecting CRISPR to biophysics isn’t hard, by the way, since its mechanisms of operation are closely tied to the mechanics of bending and cutting DNA.
The second “amazing” topic is DNA sequencing. The cost of sequencing has fallen by orders of magnitude in recent decades. We’re close, for example, to being able to sequence an entire 3 billion base pair human genome for $1000! All this is driven by physically fascinating technologies — for example, detecting the ions released from a single nucleotide being added to a growing DNA strand, or the electrical current fluctuations as a single DNA molecule snakes through a nanopore.
The final amazing topic was optogenetics, the optical control of genetically encoded elements. Using light-activated ion channels, for example, researchers can selectively turn neurons on and off in live organisms, a real-life version of science fiction-y mind control. Here again, the connections between technology and basic research are clear. Channelrhodopsin, one of the first and most useful proteins to be used and modified for optogenetic ends, was discovered in studies of unicellular algae.
Overall, this excursion was great. It tied into the main substance of the course better than I expected, and the students clearly shared my excitement about these topics. It was also noted that this sort of connection to cutting-edge developments is sadly lacking in most physics courses.
Next time…
In Part II, I’ll describe some of the “active learning” approaches I implemented, which went well with one exception, and I’ll also discuss books, readings, and assignments. (For a glimpse of all this, you can see the syllabus.) I’ll note both then and now that all of my materials for the course are available to anyone thinking of teaching something similar — feel free to email me.
Notes
[1] For a simple treatment of the “zipper” problem, see C. Kittel, “Phase Transition of a Molecular Zipper.” Am. J. Phys. 37, 917–920 (1969). The paper generally considers the case of a zipper in which each link has one closed state and “g” open states. The g=1 case is quick to consider, and is a nice end-of-chapter exercise in Kittel and Kroemer’s Thermal Physics (an undergraduate statistical mechanics textbook), which is where I first encountered it. For g=1, there is no sharp phase transition. The g>1 case gives a sharper transition, but one shouldn’t spend much time thinking about it, since it’s much more realistic to think about bubble formation rather than DNA unzipping from its ends.
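For reference, the calculation in outline: with \(N\) links, an energy cost \(\varepsilon\) to open a link, and \(g\) orientations per open link, the zipper’s partition function is a geometric series,
\[
Z = \sum_{s=0}^{N} x^{s} = \frac{1 - x^{N+1}}{1 - x}, \qquad x \equiv g\, e^{-\varepsilon / k_B T},
\]
and the mean number of open links, \(\langle s \rangle = x\, \partial \ln Z / \partial x\), changes sharply only near \(x = 1\), i.e. at \(k_B T_c = \varepsilon / \ln g\). For \(g = 1\), \(x < 1\) at every finite temperature, so there is no transition and the zipper opens only gradually.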