A quick Google search reveals ... I'm not sure what. From this post on the Stan Google group, the following quotes are pertinent to a question about (I thought) using HMC together with another type of sampler for one of the parameters. It seems the questioner has either a hierarchical or a discrete model (both bad for Stan), and the later quotes suggest I could alternate Stan with random-walk Metropolis. Read on...
I have what is essentially a Metropolis-within-Gibbs sampler. One or two of the parameters have unconventional steps in their sampling. (I'm talking about my extension of Challis & Schmidler (2012), which used a library of proposals to improve mixing on a couple of the parameters.) I suspect I'm seeing poor mixing in some of the other parameters, so I wondered if I could use Stan for those parameters while keeping the library-based sampler as well.
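The Metropolis-within-Gibbs idea above can be sketched in a few lines. This is a minimal illustration, not the sampler from the post: the bivariate normal target, the proposal scales, and the function names are all stand-ins I've made up. The point is only that each coordinate gets its own Metropolis update against the joint log density, so one coordinate could in principle be handed to a different sampler (a library of proposals, or Stan) without touching the others.

```python
# Hedged sketch of Metropolis-within-Gibbs; the target and scales are
# illustrative assumptions, not the model discussed in the post.
import math
import random

def logp(t1, t2):
    # Joint log density (a correlated bivariate normal, up to a constant).
    return -0.5 * (t1 * t1 + t2 * t2 - t1 * t2)

def rw_step(current, logp_cond, scale):
    # Generic random-walk Metropolis update for one coordinate, holding
    # the other fixed inside logp_cond.
    proposal = current + random.gauss(0.0, scale)
    if math.log(random.random()) < logp_cond(proposal) - logp_cond(current):
        return proposal
    return current

def gibbs_sweep(t1, t2):
    # Each coordinate could use a different update rule here; both are
    # random-walk Metropolis in this sketch, just with different scales.
    t1 = rw_step(t1, lambda x: logp(x, t2), scale=1.0)
    t2 = rw_step(t2, lambda x: logp(t1, x), scale=2.5)
    return t1, t2
```

Swapping one of the `rw_step` calls for a draw from another sampler is exactly the kind of alternation the question is about; whether Stan can play that role for a subset of parameters is the open question.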
I was looking into the use of Stan for Hamiltonian Monte Carlo. On page 23 of the Stan reference (stan-reference-2.9.0.pdf), I found this excellent and brief summary of HMC:

"HMC accelerates both convergence to the stationary distribution and subsequent parameter exploration by using the gradient of the log probability function. The unknown quantity vector θ is interpreted as the position of a fictional particle. Each iteration generates a random momentum and simulates the path of the particle with potential energy determined [by] the (negative) log probability function. Hamilton's decomposition shows that the gradient of this potential determines change in momentum and the momentum determines the change in position. These continuous changes over time are approximated using the leapfrog algorithm, which breaks the time into discrete steps which are easily simulated. A Metropolis reject step is then applied to correct for any simulation error and ensure detailed balance of the resulting Markov chain transitions (Metropolis et al., 1953; Hastings, 1970)."

Immediately after that, the tuning parameters are discussed:

"Basic Euclidean Hamiltonian Monte Carlo involves three 'tuning' parameters to which its behavior is quite sensitive. Stan's samplers allow these parameters to be set by hand or set automatically without user intervention."
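The quoted summary maps onto code quite directly. Below is a minimal sketch of a single Euclidean HMC transition, assuming a 1-D standard normal target and a unit mass; the step size and number of leapfrog steps are two of the "tuning" parameters the passage mentions (the third, the mass matrix, is trivial in 1-D). This is my own toy illustration, not Stan's implementation.

```python
# Hedged sketch of one HMC transition on an assumed 1-D standard normal
# target: random momentum, leapfrog simulation of the fictional particle,
# then a Metropolis accept step to correct the discretisation error.
import math
import random

def logp(theta):
    # Log probability of the target (standard normal, up to a constant);
    # its negative plays the role of the potential energy.
    return -0.5 * theta * theta

def grad_logp(theta):
    # Gradient of the log probability: drives the change in momentum.
    return -theta

def hmc_step(theta, step_size=0.1, n_leapfrog=20):
    r = random.gauss(0.0, 1.0)  # fresh random momentum each iteration
    theta_new, r_new = theta, r
    # Leapfrog: half momentum step, alternating full position/momentum
    # steps, closing half momentum step.
    r_new += 0.5 * step_size * grad_logp(theta_new)
    for i in range(n_leapfrog):
        theta_new += step_size * r_new
        if i < n_leapfrog - 1:
            r_new += step_size * grad_logp(theta_new)
    r_new += 0.5 * step_size * grad_logp(theta_new)
    # Metropolis accept step: compare Hamiltonians (potential + kinetic)
    # to correct for simulation error and preserve detailed balance.
    current_h = -logp(theta) + 0.5 * r * r
    proposed_h = -logp(theta_new) + 0.5 * r_new * r_new
    if math.log(random.random()) < current_h - proposed_h:
        return theta_new
    return theta
```

Because the leapfrog integrator nearly conserves the Hamiltonian at this step size, almost every proposal is accepted; making `step_size` too large is what the "quite sensitive" remark is warning about.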
About: This blog is mainly for statistics, R, or Duke-related stuff that is not directly related to research activity.