By xi’an
(This article was originally published at Xi’an’s Og » R, and syndicated at StatsBlogs.)
[Here are some comments sent to me by Aki Vehtari in the sequel of the previous posts.]
The following is mostly based on our arXived paper with Andrew Gelman and the references mentioned there.
Koopman, Shephard, and Creal (2009) proposed a sample-based estimate of the existence of the moments, using a generalized Pareto distribution fitted to the tail of the weight distribution. The number of existing moments is less than 1/k (when k>0), where k is the shape parameter of the generalized Pareto distribution.
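A minimal sketch of this diagnostic in Python, using scipy's `genpareto` (the 95% tail threshold, the seed, and the sample size are illustrative choices, not part of the cited method):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)

# Proposal Exp(1), target Exp(1/2): importance ratios r(x) = p(x)/q(x).
x = rng.exponential(1.0, size=10_000)   # draws from the Exp(1) proposal
ratios = 0.5 * np.exp(x / 2.0)          # (1/2) e^{-x/2} / e^{-x}

# Fit a generalized Pareto distribution to the tail of the ratios
# (exceedances over a high empirical quantile, location fixed at 0).
u = np.quantile(ratios, 0.95)
tail = ratios[ratios > u] - u
k_hat, _, _ = genpareto.fit(tail, floc=0.0)

# Moments of the ratios exist up to order 1/k; the true value in this
# example is k = 1/2, so the variance (2nd moment) sits on the border.
print(k_hat)
```

The fitted shape `k_hat` is then read directly as the diagnostic: `1/k_hat` estimates how many moments of the weights exist.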
When k < 1/2, the variance of the weights is finite; when k ≥ 1/2, it is infinite.
In the example with “Exp(1) proposal for an Exp(1/2) target”, k=1/2 and we are truly on the border.
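For completeness, a quick check of why this example sits exactly on the border (writing q for the Exp(1) proposal density and p for the Exp(1/2) target density):

```latex
r(x) = \frac{p(x)}{q(x)} = \frac{\tfrac12 e^{-x/2}}{e^{-x}} = \tfrac12\, e^{x/2},
\qquad
\mathbb{E}_q\!\left[r(X)^a\right]
  = \int_0^\infty \frac{1}{2^a}\, e^{ax/2}\, e^{-x}\, dx
  = \frac{1}{2^a}\cdot\frac{1}{1 - a/2} < \infty
  \iff a < 2 .
```

Moments of the ratio thus exist exactly up to order 2 = 1/k, i.e. k = 1/2, and the second moment (hence the variance) is the first one to fail.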
In our experiments in the arXived paper and in Vehtari, Gelman, and Gabry (2015), we have observed that Pareto smoothed importance sampling (PSIS) usually converges well also when k > 1/2, as long as k stays close to 1/2 (let's say k < 0.7). When k is larger (let's say k > 0.7), the convergence is much worse and both naïve importance sampling and PSIS are unreliable.
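The smoothing step itself can be sketched as follows. This is a crude illustration, not the actual PSIS algorithm: the function name `psis_smooth` and the fixed 20% tail fraction are my choices, whereas real PSIS (e.g. in the loo package) picks the tail size adaptively and regularizes the shape estimate.

```python
import numpy as np
from scipy.stats import genpareto

def psis_smooth(w, tail_frac=0.2):
    """Crude sketch of Pareto smoothing: fit a generalized Pareto
    distribution to the largest weights and replace them with ordered
    quantiles of the fitted distribution."""
    w = np.asarray(w, dtype=float)
    m = int(tail_frac * len(w))
    order = np.argsort(w)
    u = w[order[-m - 1]]                  # threshold: (m+1)-th largest weight
    tail = w[order[-m:]] - u              # exceedances over the threshold
    k, _, sigma = genpareto.fit(tail, floc=0.0)
    # Replace the m largest weights by ordered quantiles of the fit,
    # truncated at the largest raw weight.
    q = genpareto.ppf((np.arange(1, m + 1) - 0.5) / m, k, scale=sigma)
    w_s = w.copy()
    w_s[order[-m:]] = np.minimum(u + q, w.max())
    return w_s, k

# Example: heavy-tailed weights from the Exp(1) proposal / Exp(1/2) target.
rng = np.random.default_rng(1)
w = 0.5 * np.exp(rng.exponential(1.0, size=20_000) / 2.0)
w_s, k_hat = psis_smooth(w)
print(k_hat, w.max(), w_s.max())
```

The smoothed weights keep the same bulk but tame the few extreme values that dominate the naïve estimator.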
Two figures are attached, comparing IS and PSIS in the Exp(1/2) and Exp(1/10) examples. The results were computed by repeating 1000 times a simulation with 10,000 samples each. We can see the bad performance of IS in both examples, as you also illustrated. In the Exp(1/2) case, PSIS is also able to produce much more stable results. In the Exp(1/10) case, PSIS is able to reduce the variance of the estimate, but not enough to avoid a big bias.
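A smaller-scale sketch of the plain-IS side of that experiment, assuming the quantity of interest is the target mean E[X] = 2 (the post does not say which estimand the figures use, so that choice is mine, as are the replication counts):

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-normalized IS estimates of E[X] = 2 under the Exp(1/2) target,
# using an Exp(1) proposal, repeated across independent simulations
# (a scaled-down version of the 1000 x 10,000 setup).
n_rep, n = 200, 10_000
estimates = np.empty(n_rep)
for i in range(n_rep):
    x = rng.exponential(1.0, size=n)   # proposal draws
    w = 0.5 * np.exp(x / 2.0)          # importance ratios p/q
    estimates[i] = np.sum(w * x) / np.sum(w)

# With k = 1/2 the weight variance is infinite, so the spread across
# replications stays large even at n = 10,000.
print(estimates.mean(), estimates.std())
```

Plotting a histogram of `estimates` reproduces the qualitative picture: occasional replications where a single huge weight drags the estimate far from 2.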
It would be interesting to have more theoretical justification for why infinite variance is not such a big problem when k is close to 1/2 (e.g., how the convergence rate is related to the number of existing fractional moments).
I guess that max ω[t] / ∑ ω[t] in Chatterjee and Diaconis has some …