The first Bayesian results for the sparse normal means problem were proven
for spike-and-slab priors. However, these priors are less convenient from a
computational point of view. In the meantime, a large number of continuous
shrinkage priors have been proposed. Many of these shrinkage priors can be
written as a scale mixture of normals, which makes them particularly easy to
implement.
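For concreteness, a minimal sketch of the scale-mixture representation, with notation assumed here rather than taken from the abstract: each mean receives a normal prior whose variance is itself random,
\[
\theta_i \mid \sigma_i^2 \sim \mathcal{N}(0, \sigma_i^2),
\qquad
\sigma_i^2 \overset{iid}{\sim} \pi,
\qquad i = 1, \ldots, n,
\]
where $\pi$ denotes the prior on the local variance.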
We propose general conditions on the prior on the local variance in
scale mixtures of normals under which posterior contraction at the minimax
rate is assured. The conditions require tails at least as heavy as Laplace,
but not too heavy, and a large amount of mass around zero relative to the
tails, more so as the sparsity increases.
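As an illustrative formalization (assumed here; not the paper's exact conditions), "tails at least as heavy as Laplace" for the marginal prior density $p_\theta$ of each mean can be expressed as
\[
p_\theta(\theta) \geq C e^{-c|\theta|} \quad \text{for all } |\theta| \geq M,
\]
for some constants $C, c, M > 0$, with a complementary upper bound ruling out tails that are too heavy.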
These conditions provide general guidelines for choosing a shrinkage prior
for estimation under a nearly black sparsity assumption. We verify these
conditions for the class of priors considered by Ghosh and Chakrabarti
(2015), which includes the horseshoe and the normal-exponential-gamma
priors, and for the horseshoe+, the inverse-Gaussian prior, the normal-gamma
prior, and the spike-and-slab Lasso, thereby extending the list of shrinkage
priors known to lead to posterior contraction at the minimax estimation
rate.
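As a usage sketch, drawing from one scale-mixture prior in this class, the horseshoe, in Python (assuming the standard parameterization with half-Cauchy local scales; the global scale value is hypothetical and chosen only for illustration):

import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000    # number of means
tau = 0.1    # global shrinkage scale (illustrative value)

# Horseshoe prior as a scale mixture of normals:
# local scales lambda_i ~ half-Cauchy(0, 1),
# then theta_i | lambda_i ~ N(0, (tau * lambda_i)^2).
lam = np.abs(rng.standard_cauchy(n))
theta = rng.normal(loc=0.0, scale=tau * lam)

# Heavy tails with much mass near zero: most draws are strongly
# shrunk toward zero, while a few remain large.
print(np.mean(np.abs(theta) < 0.01), np.max(np.abs(theta)))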