I’ll guess that most people reading this don’t believe in homeopathy, astrology, or the existence of lizardlike extraterrestrials that walk among us. This is probably not because we have researched these topics ourselves, but rather because we are unconvinced by their proponents, and also perhaps because these ideas have not managed to break into the body of what we consider the scientific literature.
The state of this literature is the topic of lots of hand-wringing at the moment, but largely missing from the hand-wringing, I’ll note, is any discussion of how the scientific literature informs what the general public thinks of as science and not-science. Before getting to that, let’s look at the question of why we publish scientific papers in the first place:
Why do we publish?
Here’s a non-exhaustive list, numbered for reference but in no particular order.
1. To communicate our findings to other scientists and also to the broader world
2. To critique and discuss others’ work
3. To establish priority of discoveries and techniques
4. To document findings for the historical record
5. To document our activities for administrative entities like funding agencies, promotion committees, thesis committees, etc.
6. To generate proxies for research quality, based for example on what journal a paper is published in. (I’m not claiming that this is a good thing…)
7. To establish the validity of scientific findings. (Again, I’m not commenting for now on the validity of this goal!)
Problems with publishing
As mentioned, hardly a week goes by without some intense discussion of, or essay about, items 1 to 6. It’s very hard, for example, to publish corrections to or critiques of published work. Publishing takes a very long time, which gets in the way of sharing new results. Most published papers are wrong (at least in biomedical research). Everyone hates using journal impact factors or reputation as a proxy for research quality, but everyone does it anyway.
I could go on about all these, but what struck me recently is that I’ve encountered many discussions of items 1 to 6, but almost none about #7. When teaching science courses for non-science majors, or interacting with the general public, *how do we convey a notion of what “is” and “isn’t” science?* It’s an important question with real-world consequences: we’re all aware of, and perhaps know, people who don’t vaccinate their children based on vague notions of danger, or who believe in homeopathic cures, or other such things. Why does the topic of communicating with the public so rarely come up in discussions of scientific publishing? In general, such conversations occur between scientists, and are moreover disjoint from conversations about teaching or interactions with non-scientists.
What is & isn’t science
Answering the italicized question above, of how we can convey to non-scientists a notion of what “is” and “isn’t” science, is difficult. There is, I think, a good but hard-to-implement answer, and a not-very-good but easier-to-implement answer.
Peer review as a marker
The easier, less good answer is that the framework of peer-reviewed publishing provides a way for non-scientists to know what’s reliable and what’s not. In other words, the scientific community’s assessment that “X is science” is reflected in “X has been reviewed by other qualified scientists, and passed this test in order to appear in the scientific literature.” For all its flaws, this isn’t a bad correspondence in general: some random quack’s blog post on homeopathy is less likely to show signs of rigorous, logical testing than a peer-reviewed article.
One can certainly find statements in articles about communicating with beginning students and other non-experts that reflect this idea that peer-reviewed scientific publishing confers legitimacy, and that the existence of peer review allows non-scientists to assess the credibility of sources of information. For example:
We developed a two-tier set of criteria for evaluating scientific literature that can be used in traditional and nontraditional learning environments, and can be presented to science majors and nonscience majors alike. … either it is published in an authoritative source or it is not. Authority is a measure of the reputation of the publication and the authors it publishes. We have found this to be too vague and have settled on peer review as the indication of authority. These are not statements of value, but they are designed to get students thinking about the nature and types of literature …
From: Karenann Jurecki and Matthew C. F. Wander (2012), “Science Literacy, Critical Thinking, and Scientific Literature: Guidelines for Evaluating Scientific Literature in the Classroom,” Journal of Geoscience Education 60, 100–105. doi: http://dx.doi.org/10.5408/11-221.1
It’s hard, however, to read the excerpt above without one’s skin crawling a bit. Every scientist knows that a lot of peer-reviewed papers are awful, their claims poorly justified. (Conversely, a lot of blog posts or other unorthodox writings are very good.) Methodological flaws, especially those involving poor statistics and a lack of understanding of noise and randomness, are endemic in the peer-reviewed literature. High-profile journals routinely peddle splashy findings that oversell their data. Finally, there is fundamentally no good reason to think that “truth” is established by two or three random reviewers approving of a paper. (This randomness works the other way too, as anyone who has had a fine paper rejected can attest.)
Trust no one!
The better but difficult answer to the italicized question is that there’s really no way around the necessity of evaluating claims with a critical eye, no matter where they appear. Peer review is no panacea. Fine, one says, what’s so hard about that? In my experience, people who don’t routinely interact with non-scientists in an academic setting vastly overestimate the sophistication that the general consumer of media (articles, videos, etc.) brings to the information they’re presented with. Issues of noise, uncertainty, p-hacking, model fitting, etc., are high-level concepts compared to the basic features of quantitative thinking and logical inference that many people struggle with. Having spent time helping college students work out how many kilograms 100 grams is (admittedly an atypical example, but a real one), I’m not surprised that conveying deeper concepts takes a lot of time and effort. It’s doable, and it’s worthwhile, but it’s not simple. How, then, does it scale to asking the general public to critically evaluate everything they’re exposed to, essentially on their own? This, I think, is the challenge we need to address.
Scientific publishing: a lost opportunity?
Of course, one can respond that the public isn’t on their own: sources of reliability will emerge via popular consensus, elite “status,” or other magic. I am skeptical. And even if such sources do emerge, it seems tragic that, by allowing the state of scientific publishing to decline to the point where there are compelling reasons to abandon peer review altogether, the scientific community may be giving up the chance to provide a useful service to the general public. Put differently: if we really did do peer review well, it would benefit more than just scientists.
Today’s illustration
I painted this from a photograph, “Fruit of the beech tree,” in the beautiful book Trees up close (text by Nancy Ross Hugo, photos by Robert Llewellyn). Echoing the theme of a previous post, I found this randomly in our Art and Architecture library, where it was lying on the floor of an aisle.
Comments

Raghu, this is a thoughtful post, but you present a view that is too idealistic. Surely your #5/6 (essentially “scientists try to publish in Nature/Science/Cell so NIH will fund them”) is an order of magnitude or two more dominant than the other reasons!
Followed in importance by “scientists publish their research so they can become famous, at least in their field.”
Thanks. Yes, the list is not ranked! I’ll try to comment later on a depressing presentation from NIH, on how they’re quite explicitly using “publications per dollar” and “citations per dollar” as a way to assess programs. It’s much easier, of course, than actually assessing science!
Eek, Raghu! That is terrifying!
Now at the U of O, we can add “publish so that Associate Deans and Department Heads do not consider us research inactive and tell us to teach more classes”.
I think publication units should be much smaller. Right now we forgive a paper for not being completely convincing on a gene being required when the data show convincingly that it is sufficient for a phenotype, or just because the broad conclusion is exciting. If a publishable unit were a single declarative statement, then full attention could be brought to each incremental unit of discovery. These units could be easily linked by reviews. Plus, it would allow groups to work on the aspects they are good at, so they could efficiently churn out results showing transcriptional upregulation of a gene without having to get those pesky Western blots working to show that protein levels increase as well.
Good point — it does seem like papers (especially in biology) require a giant amount of “stuff,” with a preference for lots of weak data over a little strong data. (Why? I don’t know.) Your suggestion, though, would lead to *even more* papers being published than the present downpour, if people cut their papers into many smaller pieces. Of course, perhaps:
(i) the length of each paper would be shorter, so the total number of published pages would be the same, and
(ii) people would unlearn the “lessons” of modern-day publishing and put out only the robust little pieces, not the weak ones.
I think (i) might happen, but it might be outweighed by having *lots* of abstracts to wade through, and I’m skeptical of (ii) happening.