How to merge upstream changes into a local Git repository.
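The basic workflow is to fetch the upstream commits and then merge them into the current branch. Below is a minimal, self-contained sketch: it builds a toy "upstream" repository and a local clone (all names and paths are illustrative), then pulls an upstream commit into the clone with fetch + merge. In a real setup you would typically register the upstream repository once with `git remote add upstream <url>` and its default branch might be named differently.

```shell
# Minimal end-to-end sketch: create a toy "upstream" repository, clone it,
# add a commit upstream, then bring that change into the local clone.
set -e
workdir=$(mktemp -d)

# A stand-in for the upstream repository.
git init -q -b main "$workdir/upstream"
cd "$workdir/upstream"
git config user.email demo@example.com
git config user.name Demo
echo "first line" > notes.txt
git add notes.txt
git commit -qm "initial commit"

# The local repository: a clone of upstream ("origin" points back at it).
git clone -q "$workdir/upstream" "$workdir/local"

# Upstream moves ahead by one commit.
cd "$workdir/upstream"
echo "second line" >> notes.txt
git commit -qam "upstream update"

# Merge the upstream changes into the local repository.
cd "$workdir/local"
git fetch -q origin          # download upstream's new commits
git merge -q origin/main     # merge them into the current branch
```

Since the local branch has no commits of its own here, the merge is a fast-forward; with diverging histories, `git merge` would create a merge commit (and possibly report conflicts to resolve).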
In this post, we will discuss the algorithm that produces point estimates $\hat{\beta}$ given inputs $X$ and $y$ in linear regression. After deriving the function producing $\hat{\beta}$, I’ll briefly show how to implement this procedure in yet another compiled language, Fortran, and how to call it from within R.
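As a reference point, the standard least-squares estimator (presumably the $\hat{\beta}$ derived in the post) minimizes the residual sum of squares, which leads to the normal equations and, assuming $X^\top X$ is invertible, a closed-form solution:

$$
\hat{\beta} = \arg\min_{\beta} \, (y - X\beta)^\top (y - X\beta)
\quad\Longrightarrow\quad
X^\top X \hat{\beta} = X^\top y
\quad\Longrightarrow\quad
\hat{\beta} = (X^\top X)^{-1} X^\top y .
$$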
In a previous post, I presented the main ideas of the Metropolis-Hastings algorithm and showed how one may implement a random-walk Metropolis and an independence MH sampler in R. Running them to completion, however, took some time, so in this post I’ll show how to speed things up by implementing these same algorithms in C.
Summarizing data graphically lets us take a quick glance at the data of interest. For instance, we may want to assess whether a variable’s mean differs across groups. Stata makes this easy with grouped bar plots: graph bar dep_var, over(groups). However, to perform statistical inference about group differences, we require a measure of uncertainty (e.g. confidence intervals) around the group means. Without uncertainty measures, we cannot tell whether a difference in means is indeed due to the groups or rather a result of sampling error.
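One standard uncertainty measure here (a sketch, assuming approximately normal sampling distributions of the group means) is the t-based confidence interval around each group mean:

$$
\bar{y}_g \pm t_{n_g - 1,\, 0.975} \, \frac{s_g}{\sqrt{n_g}} ,
$$

where $\bar{y}_g$, $s_g$ and $n_g$ denote the sample mean, standard deviation and size of group $g$.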
Including external and up-to-date C libraries for compilation under Windows can sometimes be a bit of a challenge, as they may be outdated or not available in binary form. For libraries under a free and/or open source license (e.g. GPL, MIT), you can at least build them on your own computer.
In contrast to OLS and MLE, where parameter estimation essentially boils down to solving something along the lines of $\hat{\beta} = (X^\top X)^{-1} X^\top y$, or respectively searching for the $\hat{\theta}$ for which the likelihood is largest, estimating Bayesian models may at first feel like resorting to a black box which mysteriously produces posterior densities $p(\theta \mid y)$. In an attempt to lift this veil of mystery, we will take a look at the Metropolis-Hastings sampler, one of those black boxes used to estimate posterior densities.
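For reference, the core of the Metropolis-Hastings sampler is a simple accept/reject rule: given the current state $\theta$, a proposal $\theta'$ drawn from $q(\theta' \mid \theta)$ is accepted with probability

$$
\alpha(\theta, \theta') = \min\!\left(1,\; \frac{p(\theta' \mid y)\, q(\theta \mid \theta')}{p(\theta \mid y)\, q(\theta' \mid \theta)}\right),
$$

and otherwise the chain stays at $\theta$. Because the posterior enters only as a ratio, it need only be known up to its normalizing constant.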