Bayesian Inference for $k$-Monotone Densities with Applications to Multiple Testing

Abstract

Shape restrictions, such as monotonicity or convexity, imposed on a function of interest, such as a regression or density function, allow for its estimation without smoothness assumptions. The concept of $k$-monotonicity encompasses a family of shape restrictions, including decreasing and convex decreasing as special cases corresponding to $k=1$ and $k=2$. We consider Bayesian approaches to estimating a $k$-monotone density. By utilizing a kernel mixture representation and putting a Dirichlet process or a finite mixture prior on the mixing distribution, we show that the posterior contraction rate in the Hellinger distance is $(n/\log n)^{-k/(2k+1)}$ for a $k$-monotone density, which is minimax optimal up to a polylogarithmic factor. When the true $k$-monotone density is a finite $J_0$-component mixture of the kernel, the contraction rate improves to the nearly parametric rate $\sqrt{(J_0 \log n)/n}$. Moreover, by putting a prior on $k$, we show that the same rates hold even when the best value of $k$ is unknown. A specific application to modeling the density of $p$-values in a large-scale multiple testing problem is considered. Simulation studies are conducted to evaluate the performance of the proposed method.
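The kernel mixture representation mentioned above can be illustrated with a short sketch. By Williamson's representation, a $k$-monotone density on $[0,\infty)$ is a mixture of scaled Beta(1, $k$) kernels $\psi_k(x;\theta) = k(\theta-x)_+^{k-1}/\theta^k$; the sketch below (an illustration under that representation, not the paper's implementation; the weights and scale parameters are arbitrary) evaluates a finite two-component mixture and checks that it is a valid decreasing density.

```python
import numpy as np

def kmon_kernel(x, theta, k):
    """Scaled Beta(1, k) kernel: k * (theta - x)_+^(k-1) / theta^k on [0, theta).
    Mixtures of these kernels over theta are exactly the k-monotone
    densities on [0, inf) (Williamson's representation)."""
    x = np.asarray(x, dtype=float)
    body = k * np.clip(theta - x, 0.0, None) ** (k - 1) / theta ** k
    return np.where((x >= 0) & (x < theta), body, 0.0)

def mixture_density(x, weights, thetas, k):
    """Finite J-component mixture f(x) = sum_j w_j * psi_k(x; theta_j)."""
    return sum(w * kmon_kernel(x, t, k) for w, t in zip(weights, thetas))

# Example: a 2-component convex decreasing (k = 2) mixture with
# hypothetical weights (0.4, 0.6) and scales (1.0, 3.0).
xs = np.linspace(0.0, 3.0, 3001)
f = mixture_density(xs, [0.4, 0.6], [1.0, 3.0], k=2)

# Riemann-sum check that the mixture integrates to ~1.
mass = np.sum(f) * (xs[1] - xs[0])
```

A finite $J_0$-component mixture of this form is exactly the setting in which the abstract's nearly parametric rate $\sqrt{(J_0 \log n)/n}$ applies; a Dirichlet process prior instead places a random discrete mixing distribution on the scales $\theta_j$.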
