Lower Bounds on the Bayesian Risk via Information Measures
This paper focuses on parameter estimation and introduces a new method for
lower bounding the Bayesian risk. The method allows for the use of virtually
\emph{any} information measure, including R\'enyi's $\alpha$-Divergences,
$f$-Divergences, and Sibson's $\alpha$-Mutual Information. The approach
considers divergences as functionals of measures and exploits the duality
between spaces of measures and spaces of functions. In particular, we show that
one can lower bound the risk with any information measure by upper bounding its
dual via Markov's inequality. We are thus able to provide estimator-independent
impossibility results thanks to the Data-Processing Inequalities that
divergences satisfy. The results are then applied to settings of interest
involving both discrete and continuous parameters, including the
``Hide-and-Seek'' problem, and compared to the state-of-the-art techniques. An
important observation is that the behaviour of the lower bound in the number of
samples is influenced by the choice of the information measure. We leverage
this by introducing a new divergence inspired by the ``Hockey-Stick''
Divergence, which is demonstrated empirically to provide the largest
lower bound across all considered settings. If the observations are subject to
privatisation, stronger impossibility results can be obtained via Strong
Data-Processing Inequalities. The paper also discusses some generalisations and
alternative directions.
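A rough sketch of the recipe, in my own notation (the comparison function $\Psi$ below is a placeholder, not the paper's): write $R = \inf_{\phi} \mathbb{E}_{P_{\Theta X}}[\ell(\Theta, \phi(X))]$ for the Bayesian risk of estimating $\Theta$ from $X$ under a loss $\ell$. Markov's inequality gives, for any estimator $\phi$ and any $\rho > 0$,
\[ \mathbb{E}\big[\ell(\Theta, \phi(X))\big] \;\ge\; \rho \, \Big(1 - P_{\Theta X}\big(\ell(\Theta, \phi(X)) < \rho\big)\Big). \]
If the chosen information measure $D$ yields, via its Data-Processing Inequality, a comparison of the form $P_{\Theta X}(E) \le \Psi\big(D(P_{\Theta X} \Vert P_\Theta P_X),\, P_\Theta P_X(E)\big)$ for every event $E$, then taking $E = \{\ell(\Theta, \phi(X)) < \rho\}$ and bounding $P_\Theta P_X(E)$ uniformly over estimators gives an estimator-independent lower bound
\[ R \;\ge\; \sup_{\rho > 0} \rho \, \Big(1 - \Psi\Big(D(P_{\Theta X} \Vert P_\Theta P_X),\; \sup_{\phi} P_\Theta P_X\big(\ell(\Theta, \phi(X)) < \rho\big)\Big)\Big). \]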
Lower-bounds on the Bayesian Risk in Estimation Procedures via $f$-Divergences
We consider the problem of parameter estimation in a Bayesian setting and
propose a general lower-bound that includes part of the family of
$f$-Divergences. The results are then applied to specific settings of interest
and compared to other notable results in the literature. In particular, we show
that the known bounds using Mutual Information can be improved by using, for
example, Maximal Leakage, Hellinger divergence, or generalizations of the
Hockey-Stick divergence.
Comment: Submitted to ISIT 202
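For reference, the information measures named above admit the following standard definitions (as commonly stated in the literature; the paper's exact conventions may differ):
\[ E_\gamma(P \Vert Q) \;=\; \sup_{A} \big( P(A) - \gamma\, Q(A) \big), \quad \gamma \ge 1 \qquad \text{(Hockey-Stick divergence)}, \]
\[ \mathcal{L}(X \to Y) \;=\; \log \sum_{y} \max_{x :\, P_X(x) > 0} P_{Y|X}(y \mid x) \qquad \text{(Maximal Leakage, discrete case)}, \]
\[ \mathcal{H}_\alpha(P \Vert Q) \;=\; \frac{1}{\alpha - 1} \left( \int \left( \frac{dP}{dQ} \right)^{\!\alpha} dQ \;-\; 1 \right) \qquad \text{(Hellinger divergence of order } \alpha\text{)}. \]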
The Houdayer Algorithm: Overview, Extensions, and Applications
The study of spin systems with disorder and frustration is known to be a
computationally hard task. Standard heuristics developed for optimizing and
sampling from general Ising Hamiltonians tend to produce correlated solutions
due to their locality, resulting in a suboptimal exploration of the search
space. To mitigate these effects, cluster Monte-Carlo methods are often
employed as they provide ways to perform non-local transformations on the
system. In this work, we investigate the Houdayer algorithm, a cluster
Monte-Carlo method with small numerical overhead which improves the exploration
of configurations by preserving the energy of the system. We propose a
generalization capable of reaching exponentially many configurations at the
same energy, while offering a high level of adaptability to ensure that no
biased choice is made. We discuss its applicability in various contexts,
including Markov chain Monte-Carlo sampling and as part of a genetic algorithm.
The performance of our generalization in these settings is illustrated by
sampling from the Ising model across different graph connectivities and by
solving instances of well-known binary optimization problems. We expect our
results to be of theoretical and practical relevance in the study of spin
glasses but also more broadly in discrete optimization, where a multitude of
problems follow the structure of Ising spin systems.
Comment: 24 pages, 9 figures
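As a point of reference for the basic move being generalized, here is a minimal sketch of the standard two-replica Houdayer cluster move on an arbitrary interaction graph (Python; the data layout and function name are illustrative assumptions, and the paper's generalization goes beyond this single move):

import random
from collections import deque

def houdayer_move(s1, s2, neighbors):
    """One Houdayer cluster move on two Ising replicas.

    s1, s2    : mutable sequences of +/-1 spins, two replicas at the
                same temperature (illustrative layout, not the paper's).
    neighbors : adjacency list of the interaction graph;
                neighbors[i] lists the sites coupled to site i.
    """
    # Sites where the replicas disagree (local overlap q_i = s1_i * s2_i = -1).
    disagree = [i for i in range(len(s1)) if s1[i] * s2[i] == -1]
    if not disagree:
        return  # identical replicas: no cluster to flip
    seed = random.choice(disagree)
    # Breadth-first search for the connected q = -1 cluster containing seed.
    seen, queue = {seed}, deque([seed])
    while queue:
        i = queue.popleft()
        for j in neighbors[i]:
            if j not in seen and s1[j] * s2[j] == -1:
                seen.add(j)
                queue.append(j)
    # Flipping the cluster in both replicas (equivalently, swapping the
    # cluster spins between replicas) conserves the sum of the two replica
    # energies: interior bonds are unchanged within each replica, and
    # boundary-bond contributions cancel between the replicas.
    for i in seen:
        s1[i] = -s1[i]
        s2[i] = -s2[i]

In practice such moves are interleaved with local Metropolis sweeps or parallel tempering, since on their own they conserve the joint energy and therefore do not yield an ergodic chain.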