High-dimensional change-point estimation: Combining filtering with convex optimization
We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem, such as the filtered derivative method, are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional change-point estimation that combines the filtered derivative approach from previous work with convex optimization methods based on atomic norm regularization, which are useful for exploiting structure in high-dimensional data. Our algorithm is applicable in online settings, as it operates on small portions of the sequence of observations at a time, and it is well-suited to the high-dimensional setting in terms of both computational scalability and statistical efficiency. The main result of this paper shows that our method performs change-point estimation reliably as long as the product of the smallest-sized change (the squared Euclidean norm of the difference between signals at a change-point) and the smallest distance between change-points (number of time instances) is larger than a Gaussian width parameter that characterizes the low-dimensional complexity of the underlying signal sequence. A full version of this paper is available online [1].
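The combination the abstract describes can be sketched in a few lines: average the observations in windows on either side of each time index (the filtered derivative), denoise each windowed average with a proximal step (here soft-thresholding, a simple instance of atomic-norm regularization for sparse signals), and declare a change-point wherever the denoised difference is large. This is our own illustrative sketch, not the paper's algorithm; all function and parameter names are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * l1-norm: a simple instance of an
    atomic-norm denoising step when the atoms are signed coordinates."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def filtered_derivative_changepoints(Y, window, denoise_level, threshold):
    """Slide a window over the observation sequence Y (T x p), average the
    left and right halves, denoise each average by soft-thresholding, and
    flag a change-point wherever the denoised difference has large norm.
    Parameter names are illustrative, not from the paper."""
    T, p = Y.shape
    stats = np.zeros(T)
    for t in range(window, T - window):
        left = Y[t - window:t].mean(axis=0)    # average over left window
        right = Y[t:t + window].mean(axis=0)   # average over right window
        diff = soft_threshold(right, denoise_level) - soft_threshold(left, denoise_level)
        stats[t] = np.linalg.norm(diff)
    return stats, np.where(stats > threshold)[0]
```

Because each statistic depends only on the 2·window observations around time t, the procedure runs online on small portions of the sequence, as described above.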
The Squared-Error of Generalized LASSO: A Precise Analysis
We consider the problem of estimating an unknown signal x_0 from noisy linear observations y = Ax_0 + z ∈ R^m. In many practical instances, x_0 has a certain structure that can be captured by a structure-inducing convex function f(⋅). For example, the ℓ_1 norm can be used to encourage a sparse solution. To estimate x_0 with the aid of f(⋅), we consider the well-known LASSO method and provide a sharp characterization of its performance. We assume the entries of the measurement matrix A and the noise vector z have zero-mean normal distributions with variances 1/m and σ^2, respectively. For the LASSO estimator x∗, we attempt to calculate the Normalized Square Error (NSE), defined as ∥x∗ − x_0∥^2_2/σ^2, as a function of the noise level σ, the number of observations m and the structure of the signal. We show that the structure of the signal x_0 and the choice of the function f(⋅) enter the error formulae through the summary parameters D(cone(∂f(x_0))) and D(λ∂f(x_0)), which are defined as the Gaussian squared-distances to the subdifferential cone and to the λ-scaled subdifferential, respectively. The first LASSO estimator assumes a-priori knowledge of f(x_0) and is given by min_x {∥y − Ax∥_2 : f(x) ≤ f(x_0)}. We prove that its worst-case NSE is achieved when σ → 0 and concentrates around D(cone(∂f(x_0)))/(m − D(cone(∂f(x_0)))). Secondly, we consider min_x {∥y − Ax∥_2 + λf(x)}, for some λ ≥ 0. This time the NSE formula depends on the choice of λ and is given by D(λ∂f(x_0))/(m − D(λ∂f(x_0))). We then establish a mapping between this and the third estimator min_x {1/2 ∥y − Ax∥^2_2 + στf(x)}. Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
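For the ℓ_1 norm, the summary parameter D(λ∂f(x_0)), the Gaussian squared-distance to the λ-scaled subdifferential, can be estimated numerically: the nearest subgradient is λ·sign(x_0i) on the support and the projection of g_i onto [−λ, λ] off the support. The sketch below is our own Monte Carlo illustration (function names are hypothetical, not from the paper), paired with the NSE formula D/(m − D) from the abstract.

```python
import numpy as np

def gaussian_dist_sq_to_scaled_subdiff_l1(x0, lam, trials=2000, rng=None):
    """Monte Carlo estimate of D(lam * subdifferential of ||.||_1 at x0):
    the expected squared distance of a standard normal vector g to the
    lam-scaled subdifferential.  Coordinate-wise: on the support of x0 the
    nearest subgradient is lam*sign(x0_i); off the support it is the
    projection of g_i onto [-lam, lam], i.e. a soft-threshold residual."""
    rng = rng or np.random.default_rng(0)
    on = x0 != 0
    total = 0.0
    for _ in range(trials):
        g = rng.standard_normal(x0.size)
        d_on = (g[on] - lam * np.sign(x0[on])) ** 2
        d_off = np.maximum(np.abs(g[~on]) - lam, 0.0) ** 2
        total += d_on.sum() + d_off.sum()
    return total / trials

def predicted_nse(D, m):
    """NSE formula from the abstract, D(lam*subdiff)/(m - D), valid for m > D."""
    return D / (m - D)
```

For a 10-sparse signal in R^200 with λ = 2, the support contributes 1 + λ² = 5 per coordinate, so the estimate lands near 52; with m = 100 observations the predicted NSE is then roughly 1.1.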
Sharp MSE Bounds for Proximal Denoising
Denoising is the problem of estimating a signal x_0 from its noisy observations y = x_0 + z. In this paper, we focus on the “structured denoising problem,” where the signal x_0 possesses a certain structure and z has independent normally distributed entries with mean zero and variance σ^2. We employ a structure-inducing convex function f(⋅) and solve min_x {1/2 ∥y−x∥^2_2 + σλf(x)} to estimate x_0, for some λ > 0. Common choices for f(⋅) include the ℓ_1 norm for sparse vectors, the ℓ_1−ℓ_2 norm for block-sparse signals and the nuclear norm for low-rank matrices. The metric we use to evaluate the performance of an estimate x∗ is the normalized mean-squared error NMSE(σ) = E∥x∗ − x_0∥^2_2/σ^2. We show that the NMSE is maximized as σ → 0 and we find the exact worst-case NMSE, which has a simple geometric interpretation: the mean-squared distance of a standard normal vector to the λ-scaled subdifferential λ∂f(x_0). When λ is optimally tuned to minimize the worst-case NMSE, our results can be related to the constrained denoising problem min_{f(x)≤f(x_0)} {∥y−x∥_2}. The paper also connects these results to the generalized LASSO problem, in which one solves min_{f(x)≤f(x_0)} {∥y−Ax∥_2} to estimate x_0 from noisy linear observations y = Ax_0 + z. We show that certain properties of the LASSO problem are closely related to the denoising problem. In particular, we characterize the normalized LASSO cost and show that it exhibits a “phase transition” as a function of the number of observations. We also provide an order-optimal bound for the LASSO error in terms of the mean-squared distance. Our results are significant in two ways. First, we find a simple formula for the performance of a general convex estimator. Second, we establish a connection between the denoising and linear inverse problems.
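For f(⋅) = ∥⋅∥_1, the structured denoising estimator above has a closed form: it is exactly soft-thresholding of y at level σλ. The sketch below (our own illustration; names are hypothetical) estimates NMSE(σ) empirically, which for small σ should approach the geometric quantity described in the abstract, the mean-squared distance of a standard normal vector to λ∂f(x_0).

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1, i.e. the
    closed-form solution of min_x { 1/2 ||v - x||_2^2 + t ||x||_1 }."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def empirical_nmse(x0, lam, sigma, trials=3000, rng=None):
    """Estimate NMSE(sigma) = E||x* - x0||_2^2 / sigma^2 for the estimator
    x* = argmin_x { 1/2 ||y - x||_2^2 + sigma*lam*||x||_1 }, y = x0 + sigma*z."""
    rng = rng or np.random.default_rng(0)
    acc = 0.0
    for _ in range(trials):
        y = x0 + sigma * rng.standard_normal(x0.size)
        x_star = prox_l1(y, sigma * lam)  # the denoising estimate
        acc += np.sum((x_star - x0) ** 2)
    return acc / (trials * sigma ** 2)
```

At small σ the support coordinates contribute roughly (z_i − λ·sign(x_0i))² and the off-support coordinates (|z_i| − λ)_+², matching the distance-to-subdifferential interpretation.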
On a Relation between the Minimax Risk and the Phase Transitions of Compressed Recovery
This paper provides a sharp analysis of the optimally tuned denoising problem and establishes a relation between the estimation error (minimax risk) and the phase transition for compressed sensing recovery using convex and continuous functions. Phase transitions deal with recovering a signal x_0 from compressed linear observations Ax_0 by minimizing a certain convex function f(·). On the other hand, denoising is the problem of estimating a signal x_0 from noisy observations y = x_0 + z using the regularization min_x λf(x) + 1/2∥y−x∥_2^2. In general, these problems are more meaningful and useful when the signal x_0 has a certain structure and the function f(·) is chosen to exploit this structure. Examples include the ℓ_1 and ℓ_1−ℓ_2 norms for sparse and block-sparse vectors and the nuclear norm for low-rank matrices. In this work, we carefully analyze the minimax denoising problem and relate our results to the phase-transition performance under a considerably general setting where the entries of the measurement matrix A in compressed recovery and of the noise z in the denoising problem are i.i.d. Gaussian random variables. Our results suggest that the required number of observations to recover a compressed signal is closely related to the asymptotic variance of the optimal estimation error. This relation was first empirically noted in [9]; here we provide a rigorous foundation for it.
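The relation between optimally tuned denoising and the phase-transition location can be illustrated for ℓ_1 recovery of sparse vectors, where the expected squared distance of a standard normal vector to the λ-scaled subdifferential has a closed form. The sketch below (our own illustration, not code from the paper) minimizes that distance over λ; this minimum is known to closely approximate the number of Gaussian observations at which ℓ_1 recovery of a k-sparse signal transitions from failure to success.

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def expected_dist_sq_l1(n, k, lam):
    """Closed-form E dist(g, lam*subdiff ||.||_1)^2 for a k-sparse signal
    in R^n, g standard normal: each support coordinate contributes
    (1 + lam^2); each off-support coordinate contributes the second
    moment of the soft-threshold residual (|g| - lam)_+."""
    Q = 0.5 * erfc(lam / sqrt(2.0))              # Gaussian tail P(g > lam)
    phi = exp(-lam * lam / 2.0) / sqrt(2.0 * pi)  # Gaussian density at lam
    off = 2.0 * ((1.0 + lam * lam) * Q - lam * phi)
    return k * (1.0 + lam * lam) + (n - k) * off

def predicted_sample_threshold(n, k, grid=np.linspace(0.0, 5.0, 501)):
    """Minimizing over lam approximates the Gaussian squared-distance to the
    subdifferential cone, i.e. the observation count at the phase transition."""
    return min(expected_dist_sq_l1(n, k, lam) for lam in grid)
```

For n = 200 and k = 10 the minimizer sits near λ ≈ 1.4 and the predicted threshold is roughly 41 observations, far below the ambient dimension, which is the kind of risk-to-phase-transition correspondence the abstract describes.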