This year’s UseR! conference was held at the University of California, Los Angeles. Despite the great weather and a nearby beach, most of the conference was spent in front of projector screens in 18 °C (64 °F) rooms, because there were so many interesting presentations and tutorials going on. I was lucky enough to present my R package Bayesian First Aid, and the slides can be found here:
beepr (formerly pingr) is on CRAN. It’s easier than ever to make R go beep!
Even though I said it would never happen, my silly package with the sole purpose of playing notification sounds is now on CRAN. Big thanks to the CRAN maintainers for their patience! For instant gratification, run the following in R to install beepr and make R produce a notification sound:
install.packages("beepr")
library(beepr)
beep()
Bayesian First Aid: Test of Proportions
Does pill A or pill B save the most lives? Which web design results in the most clicks? Which in vitro fertilization technique results in the largest number of happy babies? A lot of questions out there involve estimating the proportion or relative frequency of success of two or more groups (where success could be a saved life, a click on a link, or a happy baby), and there exists a little-known R function that does just that: prop.test. Here I’ll present the Bayesian First Aid version of this procedure. A word of caution: the example data I’ll use is mostly from the Journal of Human Reproduction and as such it might be slightly NSFW :)
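As a point of reference, here is how the classical prop.test is called. The counts below are made up for illustration and are not from the post:

```r
# Hypothetical counts: 45 successes out of 100 in group A, 60 out of 100 in group B
res <- prop.test(x = c(45, 60), n = c(100, 100))
res$estimate  # the two sample proportions
res$p.value   # p-value for the null hypothesis of equal proportions
```

The Bayesian First Aid alternative is meant as a drop-in replacement for this kind of call, but reporting posterior distributions over the proportions instead of a p-value.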
The Most Comprehensive Review of Comic Books Teaching Statistics
As I’m more or less an autodidact when it comes to statistics, I have a weak spot for books that try to introduce statistics in an accessible and pedagogical way. I have therefore collected what I believe are all the books that introduce statistics using comics (at least those written in English). What follows are highly subjective reviews of those four books. If you know of any other comic book on statistics, please do tell me!
I’ll start with a tl;dr version of the reviews, but first here are the four books:
Jeffreys’ Substitution Posterior for the Median: A Nice Trick to Nonparametrically Estimate the Median
While reading up on quantile regression I found a really nice hack described in Bayesian Quantile Regression Methods (Lancaster & Jae Jun, 2010). It is called Jeffreys’ substitution posterior for the median, first described by Harold Jeffreys in his Theory of Probability, and is a nonparametric method for approximating the posterior of the median. What makes it cool is that it is really easy to understand and pretty simple to compute, while making no assumptions about the underlying distribution of the data. The method does not strictly produce a posterior distribution, but has been shown to produce a conservative approximation to a valid posterior (Lavine, 1995). In this post I will try to explain Jeffreys’ substitution posterior, give R code that implements it, and finally compare it with a classical nonparametric test, the Wilcoxon signed-rank test. But first a picture of Sir Harold Jeffreys:
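The core of the trick can be sketched in a few lines of R. This is my own sketch, not the code from the post, and the function name is mine: for a candidate median m with s data points below it, the substitution likelihood is choose(n, s) / 2^n, which is piecewise constant between consecutive order statistics. With a flat prior restricted to the range of the data, each interval between order statistics then gets posterior mass proportional to its likelihood times its width:

```r
# Sketch of Jeffreys' substitution posterior for the median (hypothetical helper).
# For m between the k-th and (k+1)-th order statistic, s(m) = k, so the
# substitution likelihood choose(n, k) / 2^n is constant on that interval.
jeffreys_median_post <- function(x, n_samples = 10000) {
  x <- sort(x)
  n <- length(x)
  widths <- diff(x)  # widths of the intervals between order statistics
  # posterior mass of each interval: likelihood * width (flat prior over the data range)
  w <- choose(n, 1:(n - 1)) * 2^(-n) * widths
  # sample an interval, then sample uniformly within it
  i <- sample(seq_along(w), n_samples, replace = TRUE, prob = w)
  runif(n_samples, min = x[i], max = x[i + 1])
}

# Demo on made-up data (not the post's data)
set.seed(42)
x <- rnorm(200, mean = 3)
post <- jeffreys_median_post(x)
c(posterior_median = median(post), sample_median = median(x))
```

Because the likelihood only depends on counts above and below m, no distributional assumption about the data is needed, which is exactly what makes the trick nonparametric.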
Bayesian First Aid: Pearson Correlation Test
Correlation does not imply causation, right? But, as Edward Tufte writes, “it sure is a hint.” The Pearson product-moment correlation coefficient is perhaps one of the most common ways of looking for such hints, and this post describes the Bayesian First Aid alternative to the classical Pearson correlation test. Except for being based on Bayesian estimation (a good thing in my book), this alternative is more robust to outliers and comes with a pretty nice default plot. :)
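For reference, the classical test that the alternative replaces is base R’s cor.test. The data below is made up for illustration, not from the post:

```r
# Made-up data with a true underlying correlation
set.seed(1)
x <- rnorm(30)
y <- 0.5 * x + rnorm(30)

res <- cor.test(x, y)  # classical Pearson product-moment correlation test
res$estimate           # sample correlation r
res$conf.int           # 95% confidence interval for the correlation
```

The Bayesian First Aid version is intended to be called the same way, but to estimate the full posterior of the correlation instead.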
A Hack to Create Matrices in R, Matlab style
The Matlab syntax for creating matrices is pretty and convenient. Here is a 2×3 matrix in Matlab syntax, where a comma (,) marks a new column and a semicolon (;) marks a new row:
A = [1, 2, 3;
     4, 5, 6]
Here is how to create the corresponding matrix in R:
A = matrix(c(1, 4, 2, 5, 3, 6), nrow = 2, ncol = 3)

     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
Functional, but not as pretty, plus the default is to specify the values column-wise. A better solution is to use rbind:
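With rbind the matrix is built row by row, so the layout of the code mirrors the layout of the matrix, much like the Matlab syntax. My reconstruction of the example:

```r
# Each c(...) call is one row, so the code looks like the matrix it creates
A <- rbind(c(1, 2, 3),
           c(4, 5, 6))
A
```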
Oldies but Goldies: Statistical Graphics Books
I just wanted to plug for three classical books on statistical graphics that I really enjoyed reading. The books are old (that is, older than me) but still relevant, and together they give a sense of the development of exploratory graphics in general and the graphics system in R specifically, as all three books were written at Bell Labs, where the S language was developed. What follows is not a review but just me highlighting some things that I liked about these books. So, without further ado, here they are:
- Exploratory Data Analysis by John W. Tukey (1977)
- Graphical Methods for Data Analysis by John M. Chambers, William S. Cleveland, Beat Kleiner and John W. Tukey (1983)
- The Elements of Graphing Data by William S. Cleveland (1985)
Bayesian First Aid: Two Sample t-test
As spring follows winter once more here down in southern Sweden, the two sample t-test follows the one sample t-test. This is a continuation of the Bayesian First Aid alternative to the one sample t-test, where I’ll introduce the two sample alternative. It will be quite a short post, as the two sample alternative is just more of the one sample alternative: more of using John K. Kruschke’s BEST model, and more of the coffee yield data from the 2002 Nature article The Value of Bees to the Coffee Harvest.
A Significantly Improved Significance Test. Not!
It is my great pleasure to share with you a breakthrough in statistical computing. There are many statistical tests: the t-test, the chi-squared test, the ANOVA, etc. Here I present a new test, a test that answers the question researchers are most anxious to figure out, a test of significance, the significance test. While a test like the two sample t-test tests the null hypothesis that the means of two populations are equal, the significance test does not tiptoe around the canoe. It jumps right in, paddle in hand, and directly tests whether a result is significant or not.
The significance test has been implemented in R as signif.test and is ready to be sourced and run. While other statistical procedures bombard you with useless information, such as parameter estimates and confidence intervals, signif.test only reports what truly matters, the one value: the p-value.
For your convenience, signif.test can be called exactly like t.test and will return the same p-value, in order to facilitate p-value comparison with already published studies. Let me show you how signif.test works through a couple of examples using a dataset from the RANDOM.ORG database: