6 research outputs found
Precise Phase Transition of Total Variation Minimization
Characterizing the phase transitions of convex optimizations in recovering
structured signals or data is of central importance in compressed sensing,
machine learning and statistics. The phase transitions of many convex
optimization signal recovery methods such as $\ell_1$ minimization and nuclear
norm minimization are well understood through research in recent years. However,
rigorously characterizing the phase transition of total variation (TV)
minimization in recovering sparse-gradient signals is still open. In this paper,
we fully characterize the phase transition curve of TV minimization. Our
proof builds on Donoho, Johnstone and Montanari's conjectured phase transition
curve for the TV approximate message passing algorithm (AMP), together with the
linkage between the minimax mean square error of a denoising problem and the
high-dimensional convex geometry for TV minimization.
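The program analyzed here is TV basis pursuit, $\min_x \|Dx\|_1$ subject to $Ax = y$, where $D$ is the finite-difference operator. Below is a minimal sketch of how one might probe its phase transition empirically; the problem sizes, the Gaussian measurement matrix, and the use of CVXPY are illustrative assumptions, not the authors' code.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 90, 8                      # illustrative sizes: ambient dim, measurements, number of jumps

# piecewise-constant ground truth with k jumps, so its gradient is k-sparse
jump_locations = rng.choice(n - 1, size=k, replace=False) + 1
steps = np.zeros(n)
steps[jump_locations] = rng.standard_normal(k)
x_true = np.cumsum(steps)

A = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian measurement ensemble
y = A @ x_true

D = np.diff(np.eye(n), axis=0)                    # (n-1) x n finite-difference operator
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(D @ x)), [A @ x == y])
problem.solve()

print("relative error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

Sweeping the number of measurements $m$ and the number of jumps $k$, and recording the empirical success probability, traces out the kind of phase transition curve the paper characterizes.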
On the Error in Phase Transition Computations for Compressed Sensing
Evaluating the statistical dimension is a common tool to determine the
asymptotic phase transition in compressed sensing problems with a Gaussian
ensemble. Unfortunately, the exact evaluation of the statistical dimension is
very difficult, and it has become standard to replace it with an upper bound. To
ensure that this technique is suitable, [1] introduced an upper bound on
the gap between the statistical dimension and its approximation. In this work,
we first show that in some low-dimensional models, such as total variation and
$\ell_1$-analysis minimization, the error bound in [1] becomes excessively large.
Next, we develop a new error bound which significantly improves the estimation
gap compared to [1]. In particular, unlike the bound in [1] that is not
applicable to settings with overcomplete dictionaries, our bound exhibits a
decaying behavior in such cases
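For context, the quantities involved can be stated compactly. The statistical dimension of a convex cone $C \subseteq \mathbb{R}^n$, and the standard upper-bound proxy used for descent cones, are (these are the usual definitions from conic integral geometry, not the specific error bound of [1]):

\[
\delta(C) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_n)}\!\left[\|\Pi_C(g)\|_2^2\right],
\qquad
\delta\bigl(\mathcal{D}(f, x_0)\bigr) \;\le\; \inf_{\tau \ge 0}\, \mathbb{E}_{g}\!\left[\operatorname{dist}^2\bigl(g,\; \tau\,\partial f(x_0)\bigr)\right],
\]

where $\Pi_C$ is the Euclidean projection onto $C$ and $\mathcal{D}(f, x_0)$ is the descent cone of the regularizer $f$ at the signal $x_0$. The gap between the two sides of this inequality is exactly the estimation error whose size is at issue here.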
Effective Condition Number Bounds for Convex Regularization
We derive bounds relating Renegar's condition number to quantities that
govern the statistical performance of convex regularization in settings that
include the $\ell_1$-analysis setting. Using results from conic integral
geometry, we show that the bounds can be made to depend only on a random
projection, or restriction, of the analysis operator to a lower dimensional
space, and can still be effective if these operators are ill-conditioned. As an
application, we get new bounds for the undersampling phase transition of
composite convex regularizers. Key tools in the analysis are Slepian's
inequality and the kinematic formula from integral geometry.
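The undersampling phase transition referred to here is governed, for a Gaussian measurement matrix $A \in \mathbb{R}^{m \times n}$, by the statistical dimension of the descent cone. A hedged statement of the approximate kinematic formula (with $c_\eta$ depending only on the target failure probability $\eta$) reads:

\[
m \;\ge\; \delta\bigl(\mathcal{D}(f, x_0)\bigr) + c_\eta \sqrt{n}
\;\Longrightarrow\; \text{recovery succeeds with probability at least } 1 - \eta,
\qquad
m \;\le\; \delta\bigl(\mathcal{D}(f, x_0)\bigr) - c_\eta \sqrt{n}
\;\Longrightarrow\; \text{recovery fails with probability at least } 1 - \eta.
\]

The bounds derived in this work relate such statistical quantities to Renegar's condition number of a projection, or restriction, of the analysis operator.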
$\ell_1$-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?
This paper investigates the problem of signal estimation from undersampled
noisy sub-Gaussian measurements under the assumption of a cosparse model. Based
on generalized notions of sparsity, we derive novel recovery guarantees for the
the $\ell_1$-analysis basis pursuit, enabling highly accurate predictions of its
sample complexity. The corresponding bounds on the number of required
measurements do explicitly depend on the Gram matrix of the analysis operator
and therefore particularly account for its mutual coherence structure. Our
findings defy conventional wisdom which promotes the sparsity of analysis
coefficients as the crucial quantity to study. In fact, this common paradigm
breaks down completely in many situations of practical interest, for instance,
when applying a redundant (multilevel) frame as analysis prior. By extensive
numerical experiments, we demonstrate that, in contrast, our theoretical
sampling-rate bounds reliably capture the recovery capability of various
examples, such as redundant Haar wavelet systems, total variation, or random
frames. The proofs of our main results build upon recent achievements in the
convex geometry of data mining problems. More precisely, we establish a
sophisticated upper bound on the conic Gaussian mean width that is associated
with the underlying $\ell_1$-analysis polytope. Due to a novel localization
argument, it turns out that the presented framework naturally extends to stable
recovery, allowing us to incorporate compressible coefficient sequences as
well.
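The estimator studied here is the $\ell_1$-analysis basis pursuit, $\min_x \|\Omega x\|_1$ subject to $\|Ax - y\|_2 \le \eta$, with a possibly redundant analysis operator $\Omega$. Below is a minimal sketch with a random frame as analysis prior; all sizes, the way the cosparse ground truth is generated, and the use of CVXPY are illustrative assumptions rather than the authors' setup.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, p = 128, 100, 256                # illustrative sizes: signal dim, measurements, frame rows (p > n: redundant)

Omega = rng.standard_normal((p, n)) / np.sqrt(p)   # redundant random frame as analysis operator

# cosparse ground truth: force n - d analysis coefficients to vanish by placing
# x_true in the null space of the corresponding rows of Omega
d = 8
cosupport = rng.choice(p, size=n - d, replace=False)
_, _, Vt = np.linalg.svd(Omega[cosupport])
x_true = Vt[n - d:].T @ rng.standard_normal(d)

A = rng.standard_normal((m, n)) / np.sqrt(m)       # sub-Gaussian (here Gaussian) measurements
noise = 0.01 * rng.standard_normal(m)
y = A @ x_true + noise
eta = 1.1 * np.linalg.norm(noise)

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(Omega @ x)),
                     [cp.norm(A @ x - y, 2) <= eta])
problem.solve()

print("relative error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

Swapping the random frame for a redundant Haar wavelet system or a finite-difference operator reproduces the other examples mentioned in the abstract.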