By Wayne
There are several reasons why everyone isn’t using Bayesian methods for regression modeling. One reason is that Bayesian modeling requires more thought: you need pesky things like priors, and you can’t assume that the answers are valid just because a procedure runs without throwing an error. A second reason is that MCMC sampling, the bedrock of practical Bayesian modeling, can be slow compared to closed-form or MLE procedures. A third reason is that existing Bayesian solutions have either been highly specialized (and thus inflexible) or have required knowing how to use a generalized tool like BUGS, JAGS, or Stan. This third barrier has recently been shattered in the R world by not one but two packages: brms and rstanarm. Interestingly, both of these packages are elegant front ends to Stan, via rstan and shinystan.
This article describes brms and rstanarm, how they help you, and how they differ.
You can install both packages from CRAN, making sure to install dependencies so you get rstan, Rcpp, and shinystan as well. If you like having the latest development versions, which may have a few bug fixes that the CRAN versions don’t yet have, you can use devtools to install them, following the instructions at the brms GitHub site or the rstanarm GitHub site.
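For reference, a minimal install might look like the following sketch; the GitHub repository paths shown are the packages’ usual homes, but follow the instructions on the linked sites if they have moved:

install.packages(c("brms", "rstanarm"), dependencies = TRUE)  # also pulls in rstan, Rcpp, and shinystan

# Development versions, if you want the newest fixes:
# devtools::install_github("paul-buerkner/brms")
# devtools::install_github("stan-dev/rstanarm")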
The brms package
Let’s start with a quick multinomial logistic regression with the famous Iris dataset, using brms. You may want to skip the actual brm call below, because it’s so slow (we’ll fix that in the next step):
library(brms)
library(rstan)                               # rstan_options() comes from rstan
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())  # Run on multiple cores
set.seed(3875)
ir
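The excerpt breaks off at that final ir line. Purely as a sketch, and not the post’s actual code, the data preparation and brm call being described might look something like this (the scaling of the predictors, the formula, and the categorical() family are my assumptions):

ir <- data.frame(scale(iris[, -5]), Species = iris[, 5])   # assumed: standardized predictors plus the outcome
b1 <- brm(Species ~ Petal.Length + Petal.Width + Sepal.Length + Sepal.Width,
          data = ir, family = categorical())               # assumed: multinomial logistic regression on Species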
First, note that the brm call looks like glm or other standard regression functions.
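As a hedged illustration of that similarity, here is a made-up binary-outcome example; the data frame d and the variables y, x1, and x2 are invented, and only the family argument and the underlying machinery really differ:

f_glm <- glm(y ~ x1 + x2, data = d, family = binomial)      # classical maximum-likelihood fit
f_brm <- brm(y ~ x1 + x2, data = d, family = bernoulli())   # same formula interface, fit by MCMC via Stan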
Second, I advised you not to run the brm call because on my couple-of-years-old MacBook Pro, it takes about 12 minutes to run. Why so long?