Comments on over-interpreting results, correlation and causation, and concluding remarks — “Ten common statistical mistakes…” #9-10

This week: the last of my commentaries on Makin and Orban de Xivry’s Common Statistical Mistakes! (Previous posts: #1-2, #3, #4, #5, #6, #7, #8.) I’m lumping together comments on “Mistake #9” (Over-interpreting non-significant results) and “Mistake #10” (Correlation and causation), as well as concluding remarks, writing one long post instead of two or …

Comments on “Failure to Correct for Multiple Comparisons” — “Ten common statistical mistakes…” #8

This week’s installment of comments on Makin and Orban de Xivry’s Common Statistical Mistakes deals with #8: Failure to Correct for Multiple Comparisons. (Previous posts: #1-2, #3, #4, #5, #6, #7.) Makin and Orban de Xivry’s description is rather complex, but the error is a simple one. To illustrate: suppose we have a control …
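As a rough sketch of the issue (my own toy simulation, not the example from the post or the paper): compare one control group to many treatment groups when no real differences exist, and the chance that at least one comparison comes out “significant” at p < 0.05 is far larger than 5%. A Bonferroni-style correction restores the nominal rate.

```python
# Toy simulation (illustrative only): many null comparisons against one control
# group, with and without Bonferroni correction of the significance threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000   # simulated "studies"
n_groups = 20          # treatment groups compared to the control; no true effects
n_per_group = 15
alpha = 0.05

any_hit_raw = 0
any_hit_bonf = 0
for _ in range(n_experiments):
    control = rng.normal(0, 1, n_per_group)
    pvals = np.array([stats.ttest_ind(control, rng.normal(0, 1, n_per_group)).pvalue
                      for _ in range(n_groups)])
    any_hit_raw += np.any(pvals < alpha)             # uncorrected threshold
    any_hit_bonf += np.any(pvals < alpha / n_groups) # Bonferroni-corrected threshold

print(f"P(at least one false positive), uncorrected: {any_hit_raw / n_experiments:.2f}")
print(f"P(at least one false positive), Bonferroni:  {any_hit_bonf / n_experiments:.2f}")
```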

Comments on “p-Hacking (Flexibility of Analysis)” — “Ten common statistical mistakes…” #7

This week’s commentary on Makin and Orban de Xivry’s Common Statistical Mistakes covers #7: Flexibility of Analysis: p-Hacking. (Previous posts: #1-2, #3, #4, #5, #6.) I feel like this has been discussed ad nauseam,* yet the problem still exists. The issue is that flexibility in how one analyzes data, even seemingly innocuous flexibility, can …
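One concrete form of this flexibility is optional stopping. A toy simulation of it (mine, not the post’s): two groups with no true difference, where the experimenter runs a t-test after every batch of new data and stops as soon as p < 0.05. The actual false-positive rate ends up well above the nominal 5%.

```python
# Toy simulation (illustrative only): "peeking" at the data after each batch
# and stopping as soon as the t-test reaches p < 0.05, with no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 1000
batch = 10          # samples added to each group per "peek"
max_batches = 10    # i.e., up to 100 samples per group
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    a, b = [], []
    for _ in range(max_batches):
        a.extend(rng.normal(0, 1, batch))
        b.extend(rng.normal(0, 1, batch))
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1   # stop early and declare a "finding"
            break

print(f"False-positive rate with peeking: {false_positives / n_experiments:.2f} "
      f"(nominal {alpha})")
```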

Comments on “Circular Analysis” — “Ten common statistical mistakes…” #6

Next in our series of commentaries on Makin and Orban de Xivry’s Common Statistical Mistakes, #6: Circular Analysis. (Previous posts: #1-2, #3, #4, #5.) I was thinking of skipping this one entirely. It’s less dramatic than #5 or the upcoming #7, I’m not sure I fully understand the authors’ intent, and my seashore painting …

Comments on “Small Samples” — “Ten common statistical mistakes…” #5

Continuing our series of commentaries on Makin and Orban de Xivry’s article on Common Statistical Mistakes, let’s look at #5: Small Samples. (Previous posts: #1-2, #3, #4.) This issue is simple but profound, and its prevalence is, I’ll argue, tied to more fundamental problems with how we do science. The mistake: drawing conclusions from …
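A hedged illustration of why small samples are treacherous (my own simulation, assuming the post’s point is about noisy, small-n estimates): with a modest true effect and only 8 samples per group, most experiments fail to detect the effect, and the ones that do cross p < 0.05 systematically overestimate it.

```python
# Toy simulation (illustrative only): small-n experiments with a modest true
# effect; the "significant" subset reports an inflated effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect = 0.5    # true difference in means, in units of the (unit) SD
n = 8                # samples per group
n_experiments = 5000

est_all, est_sig = [], []
for _ in range(n_experiments):
    a = rng.normal(0, 1, n)
    b = rng.normal(true_effect, 1, n)
    diff = b.mean() - a.mean()
    est_all.append(diff)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        est_sig.append(diff)

print(f"True effect: {true_effect}")
print(f"Mean estimate, all experiments: {np.mean(est_all):.2f}")
print(f"Fraction reaching p < 0.05: {len(est_sig) / n_experiments:.2f}")
print(f"Mean estimate, 'significant' experiments only: {np.mean(est_sig):.2f}")
```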

Comments on “Ten common statistical mistakes…”: #4

Continuing our series — see here for Part 1 and Part 2 — let’s look at Makin and Orban de Xivry’s Statistical Mistake #4: Spurious Correlations. This one is easy to understand, yet still common. The authors refer to situations like the one illustrated in their Figure 2, shown below, in which the correlation calculated …
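One common version of this problem, sketched in code below (my own illustration; it may not match their Figure 2 exactly): two variables with no real relationship, plus a single extreme point, can produce a large and “significant” Pearson correlation. A rank-based measure is less easily fooled.

```python
# Toy simulation (illustrative only): a single outlier creates a spuriously
# large Pearson correlation between otherwise unrelated variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 20)
y = rng.normal(0, 1, 20)          # independent of x

r_clean, p_clean = stats.pearsonr(x, y)

x_out = np.append(x, 10.0)        # one point far from the main cluster
y_out = np.append(y, 10.0)
r_out, p_out = stats.pearsonr(x_out, y_out)
rho_out, p_rho = stats.spearmanr(x_out, y_out)   # rank correlation, more robust

print(f"Without outlier: Pearson r = {r_clean:+.2f}, p = {p_clean:.2f}")
print(f"With one outlier: Pearson r = {r_out:+.2f}, p = {p_out:.3g}")
print(f"With one outlier: Spearman rho = {rho_out:+.2f}, p = {p_rho:.2f}")
```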

Comments on “Ten common statistical mistakes…”: #1 and #2

The steady stream of scientific articles with irreproducible results, shaky conclusions, and poor reasoning [1] is, thankfully, accompanied by attempts to do something about it. A few months ago, Tamar Makin and Jean-Jacques Orban de Xivry published an excellent short article called “Ten common statistical mistakes to watch out for when writing or reviewing a …

How do I hate p-values? Let me count the ways…

[Note: a long post of interest only to people who care about data analysis and bad statistics, and maybe about the distant stars influencing your life.] By now, we should all be able to list the many reasons that p-values (or null-hypothesis significance testing, NHST) are awful: that “statistical significance” has nothing to do with effect size …
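A small simulation of that first complaint (my own sketch, not from the post): with a large enough sample, an effect that is negligibly small is nonetheless “highly significant.”

```python
# Toy simulation (illustrative only): a trivially small true effect becomes
# "highly significant" once the sample size is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
tiny_effect = 0.02    # true mean difference of 0.02 standard deviations
n = 200_000           # samples per group

a = rng.normal(0, 1, n)
b = rng.normal(tiny_effect, 1, n)
t, p = stats.ttest_ind(a, b)

print(f"Observed mean difference: {b.mean() - a.mean():.3f} (in units of the SD)")
print(f"p-value: {p:.2g}")   # tiny p-value, yet the effect is practically negligible
```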