Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds
Recovery of the sparsity pattern (or support) of an unknown sparse vector
from a small number of noisy linear measurements is an important problem in
compressed sensing. In this paper, the high-dimensional setting is considered.
It is shown that if the measurement rate and per-sample signal-to-noise ratio
(SNR) are finite constants independent of the length of the vector, then the
optimal sparsity pattern estimate will have a constant fraction of errors.
Lower bounds on the measurement rate needed to attain a desired fraction of
errors are given in terms of the SNR and various key parameters of the unknown
vector. The tightness of the bounds in a scaling sense, as a function of the
SNR and the fraction of errors, is established by comparison with existing
achievable bounds. Near optimality is shown for a wide variety of practically
motivated signal models.
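To make the setting concrete, the sketch below draws noisy linear measurements of a sparse vector at a fixed measurement rate and per-sample SNR and reports the fraction of support errors made by a Lasso estimate. The Lasso is a stand-in estimator rather than anything from the paper (which proves lower bounds for any estimator), and every constant below is an illustrative assumption.

```python
# Support recovery from noisy linear measurements y = A x + w.
# The Lasso and all constants (n, k, rate, snr, alpha) are illustrative
# assumptions, not the paper's method or parameters.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k = 1000, 50                      # vector length and sparsity
rate, snr = 0.3, 5.0                 # measurement rate m/n and per-sample SNR,
m = int(rate * n)                    # both held constant as n grows

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)      # Gaussian measurement matrix
signal_power = np.mean((A @ x) ** 2)
noise_std = np.sqrt(signal_power / snr)       # calibrate noise to the SNR
y = A @ x + noise_std * rng.normal(size=m)

coef = Lasso(alpha=0.01, max_iter=50_000).fit(A, y).coef_
support_hat = set(np.argsort(np.abs(coef))[-k:].tolist())  # top-k magnitudes
errors = len(support_hat.symmetric_difference(support.tolist())) // 2
print(f"fraction of support errors: {errors / k:.2f}")
```

The paper's lower bounds say that with the rate and SNR held constant as n grows, no estimator can drive this error fraction to zero.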
An Information-Theoretic Analysis of Thompson Sampling
We provide an information-theoretic analysis of Thompson sampling that
applies across a broad range of online optimization problems in which a
decision-maker must learn from partial feedback. This analysis inherits the
simplicity and elegance of information theory and leads to regret bounds that
scale with the entropy of the optimal-action distribution. This strengthens
preexisting results and yields new insight into how information improves
performance.
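For concreteness, here is a minimal Thompson sampling loop on a Bernoulli bandit, one instance of the partial-feedback problems the analysis covers. The arm means, horizon, and uniform Beta(1, 1) priors are arbitrary assumptions; the printed quantity is the cumulative regret that the paper bounds in terms of the entropy of the optimal-action distribution.

```python
# Thompson sampling for a 3-armed Bernoulli bandit with Beta posteriors.
# The true means and the horizon are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.3, 0.5, 0.7])   # unknown success probabilities
a_post = np.ones(len(means))        # Beta(a, b) posterior per arm,
b_post = np.ones(len(means))        # initialized to a uniform prior
regret = 0.0

for t in range(5000):
    theta = rng.beta(a_post, b_post)    # one posterior sample per arm
    arm = int(np.argmax(theta))         # play the arm that looks best
    reward = rng.random() < means[arm]  # Bernoulli feedback
    a_post[arm] += reward               # conjugate posterior update
    b_post[arm] += 1 - reward
    regret += means.max() - means[arm]

print(f"cumulative regret after 5000 rounds: {regret:.1f}")
```

The Beta/Bernoulli pairing keeps each posterior update to two counters per arm; the paper's information-theoretic analysis applies far more broadly than this conjugate special case.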
On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations
In this paper we describe how MAP inference can be used to sample efficiently
from Gibbs distributions. Specifically, we provide means for drawing either
approximate or unbiased samples from Gibbs distributions by introducing
low-dimensional perturbations and solving the corresponding MAP assignments. Our
approach also leads to new ways to derive lower bounds on partition functions.
We demonstrate empirically that our method excels in the typical "high signal -
high coupling" regime. The setting results in ragged energy landscapes that are
challenging for alternative approaches to sampling and/or lower bounds.
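The exact but generally intractable version of this idea is the full-perturbation Gumbel-max identity: adding i.i.d. Gumbel noise to every configuration's potential and solving the MAP problem yields an unbiased Gibbs sample, and the expected perturbed maximum equals the log-partition function plus the Euler-Mascheroni constant. The sketch below checks both identities on a tiny state space with arbitrary potentials; the paper's contribution is low-dimensional perturbations that keep the MAP step tractable for structured models.

```python
# Gumbel-max check on a 4-state model: argmax(theta + gumbel) is distributed
# as Gibbs(theta), and E[max(theta + gumbel)] - euler_gamma equals log Z.
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([1.0, 0.2, -0.5, 2.0])           # arbitrary potentials
gibbs = np.exp(theta) / np.exp(theta).sum()       # target Gibbs distribution

gamma = rng.gumbel(size=(100_000, theta.size))    # i.i.d. Gumbel perturbations
perturbed = theta + gamma
samples = perturbed.argmax(axis=1)                # one MAP solve per sample

empirical = np.bincount(samples, minlength=theta.size) / samples.shape[0]
print(np.round(gibbs, 3), np.round(empirical, 3))        # should agree
print(np.log(np.exp(theta).sum()),                       # exact log Z ...
      perturbed.max(axis=1).mean() - np.euler_gamma)     # ... vs estimate
```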
Matrix Completion via Max-Norm Constrained Optimization
Matrix completion has been well studied under the uniform sampling model and
the trace-norm regularized methods perform well both theoretically and
numerically in such a setting. However, the uniform sampling model is
unrealistic for a range of applications and the standard trace-norm relaxation
can behave very poorly when the underlying sampling scheme is non-uniform.
In this paper we propose and analyze a max-norm constrained empirical risk
minimization method for noisy matrix completion under a general sampling model.
The optimal rate of convergence is established under the Frobenius norm loss in
the context of approximately low-rank matrix reconstruction. It is shown that
the max-norm constrained method is minimax rate-optimal and yields a unified
and robust approximate recovery guarantee, with respect to the sampling
distributions. The computational effectiveness of this method is also
discussed, based on first-order algorithms for solving convex optimization
problems involving max-norm regularization.
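To unpack the constraint: the max-norm of M is the smallest achievable value of max_i ||u_i|| * max_j ||v_j|| over factorizations M = U V^T, so keeping a factorization and clipping the row norms of U and V to sqrt(R) confines M to the max-norm ball of radius R. The sketch below runs that heuristic on noisy, partially observed entries; it illustrates the constrained ERM idea, not the paper's algorithm or analysis, and the dimensions, radius R, and step size are assumptions.

```python
# Max-norm constrained matrix completion via factored gradient descent with
# row-norm clipping. All constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
d1, d2, r, R, lr = 60, 50, 5, 4.0, 0.05

M_true = rng.normal(size=(d1, r)) @ rng.normal(size=(r, d2)) / np.sqrt(r)
mask = rng.random((d1, d2)) < 0.3      # observed entries (need not be uniform)
Y = np.where(mask, M_true + 0.1 * rng.normal(size=(d1, d2)), 0.0)

U = 0.1 * rng.normal(size=(d1, r))
V = 0.1 * rng.normal(size=(d2, r))
for _ in range(2000):
    resid = mask * (U @ V.T - Y)       # empirical risk on observed entries
    U, V = U - lr * resid @ V, V - lr * resid.T @ U
    for X in (U, V):                   # clip row norms to sqrt(R), so that
        norms = np.linalg.norm(X, axis=1, keepdims=True)   # ||M||_max <= R
        X *= np.minimum(1.0, np.sqrt(R) / np.maximum(norms, 1e-12))

err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)
print(f"relative Frobenius error: {err:.3f}")
```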
Lower Bounds for Oblivious Near-Neighbor Search
We prove an $\Omega(d \lg n / (\lg \lg n)^2)$ lower bound on the dynamic
cell-probe complexity of statistically oblivious approximate-near-neighbor
search (ANN) over the $d$-dimensional Hamming cube. For the natural setting of
$d = \Theta(\log n)$, our result implies an $\tilde{\Omega}(\lg^2 n)$ lower
bound, which is a quadratic improvement over the highest (non-oblivious)
cell-probe lower bound for ANN. This is the first super-logarithmic lower
bound for ANN against general (non black-box) data structures.
We also show that any oblivious data structure for decomposable search
problems (like ANN) can be obliviously dynamized with $O(\log n)$ overhead in
update and query time, strengthening a classic result of Bentley and Saxe
(Algorithmica, 1980).
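The dynamization result strengthens the classic Bentley-Saxe static-to-dynamic transformation, sketched below in its plain, non-oblivious form: the dynamic structure is O(log n) static sets of doubling sizes, an insert cascades like a binary-counter carry, and decomposability lets a query combine the per-set answers. The 1-D nearest-neighbor instance and the brute-force scan standing in for a static structure are assumptions for illustration.

```python
# Bentley-Saxe dynamization of a decomposable search problem (here 1-D
# nearest neighbor, whose answers combine by taking a min over sets).
from itertools import chain

class BentleySaxeNN:
    """Dynamic nearest neighbor kept as logarithmically many static sets."""

    def __init__(self):
        self.levels = []                # levels[i] is empty or holds 2^i items

    def insert(self, x):
        carry = [x]
        for i, level in enumerate(self.levels):
            if not level:               # free slot: rebuild one static set
                self.levels[i] = sorted(carry)
                return
            carry = list(chain(carry, level))  # cascade, like binary addition
            self.levels[i] = []
        self.levels.append(sorted(carry))

    def nearest(self, q):
        best = None                     # decomposability: min over levels
        for x in chain.from_iterable(self.levels):
            if best is None or abs(x - q) < abs(best - q):
                best = x                # brute-force scan stands in for a
        return best                     # real static structure per level

ds = BentleySaxeNN()
for v in [5, 1, 9, 14, 3, 7]:
    ds.insert(v)
print(ds.nearest(8))                    # -> 7 (ties with 9 at distance 1)
```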