Verifiable conditions of $\ell_1$-recovery of sparse signals with sign restrictions
We propose necessary and sufficient conditions for a sensing matrix to be "s-semigood" -- to allow for exact $\ell_1$-recovery of sparse signals with at most s nonzero entries under sign restrictions on part of the entries. We express the error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. Furthermore, we demonstrate that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those s for which a given sensing matrix is s-semigood. We concentrate on the properties of the proposed verifiable sufficient conditions of s-semigoodness and describe their limits of performance.
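For concreteness, the recovery problem referenced above can be posed as a linear program. The sketch below (an illustrative setup, not the paper's verification machinery) solves sign-restricted $\ell_1$-recovery with SciPy, assuming a random sensing matrix and that the entries in the first half of the signal are known to be nonnegative.

```python
# Illustrative sketch: l1-recovery (basis pursuit) with nonnegativity
# restrictions on part of the entries, via the split x = u - v, u, v >= 0.
# The sensing matrix, sparsity level, and restricted index set are assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Ground-truth s-sparse signal; entries with index < n//2 are restricted to x_i >= 0.
x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = rng.standard_normal(s)
restricted = support[support < n // 2]
x_true[restricted] = np.abs(x_true[restricted])
b = A @ x_true

# LP: minimize sum(u) + sum(v)  s.t.  A(u - v) = b, u, v >= 0,
# with v_i fixed to 0 on the restricted indices (enforcing x_i >= 0 there).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
bounds = [(0, None)] * n + [(0, 0) if i < n // 2 else (0, None) for i in range(n)]
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```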
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
Regularization of ill-posed linear inverse problems via $\ell_1$ penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an $\ell_1$-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation to $\ell_1$-constraints, using a gradient method with projection onto $\ell_1$-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, without and with acceleration.
Comment: 24 pages, 5 figures. v2: added reference, some amendments, 27 pages
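The projection step described above (soft-thresholding with a data-dependent threshold) admits a compact implementation. The following is a minimal sketch, not the authors' code; the step size, ball radius, and test problem are illustrative assumptions.

```python
# Projected gradient for min ||Ax - b||^2 subject to ||x||_1 <= R, where the
# l1-ball projection is soft-thresholding with a variable threshold.
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection of v onto {x : ||x||_1 <= R}."""
    if np.abs(v).sum() <= R:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u - (cssv - R) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (cssv[rho] - R) / (rho + 1.0)          # the variable threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, b, R, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # safe step for A.T @ (b - A @ x)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x + step * A.T @ (b - A @ x), R)
    return x

# Small random ill-posed test problem (illustrative only).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[:4] = [3.0, -2.0, 1.5, 1.0]
b = A @ x_true
x_hat = projected_gradient(A, b, R=np.abs(x_true).sum())
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```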
Estimating point-to-point and point-to-multipoint traffic matrices: An information-theoretic approach
© 2005 IEEE. Traffic matrices are required inputs for many IP network management tasks, such as capacity planning, traffic engineering, and network reliability analysis. However, it is difficult to measure these matrices directly in large operational IP networks, so there has been recent interest in inferring traffic matrices from link measurements and other more easily measured data. Typically, this inference problem is ill-posed, as it involves significantly more unknowns than data. Experience in many scientific and engineering fields has shown that it is essential to approach such ill-posed problems via "regularization". This paper presents a new approach to traffic matrix estimation using a regularization based on "entropy penalization". Our solution chooses the traffic matrix consistent with the measured data that is information-theoretically closest to a model in which source/destination pairs are stochastically independent. It applies to both point-to-point and point-to-multipoint traffic matrix estimation. We use fast algorithms based on modern convex optimization theory to solve for our traffic matrices. We evaluate our algorithm with real backbone traffic and routing data, and demonstrate that it is fast, accurate, robust, and flexible.
Yin Zhang, Matthew Roughan, Carsten Lund, and David L. Donoho
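As a small illustration of the entropy-penalization idea, the sketch below trades off consistency with link measurements against a generalized Kullback-Leibler distance to an independence (gravity) prior. The routing matrix, link loads, and penalty weight are toy assumptions, not data or code from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_pairs, n_links = 16, 6
A = rng.integers(0, 2, size=(n_links, n_pairs)).astype(float)  # toy routing matrix
x_true = rng.gamma(2.0, 10.0, size=n_pairs)                    # true OD demands
y = A @ x_true                                                  # measured link loads

g = np.full(n_pairs, x_true.sum() / n_pairs)  # independence/gravity prior (uniform here)
lam = 0.1                                     # regularization weight (assumed)

def objective(x):
    x = np.maximum(x, 1e-12)
    fit = 0.5 * np.sum((A @ x - y) ** 2)       # consistency with link measurements
    ent = np.sum(x * np.log(x / g) - x + g)    # generalized KL divergence to the prior
    return fit + lam * ent

res = minimize(objective, x0=g.copy(), method="L-BFGS-B",
               bounds=[(1e-9, None)] * n_pairs)
print("estimated OD demands:", np.round(res.x, 1))
```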
About SparseLab
Changes and Enhancements for Release 2.0: 4 papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solutions May be Sparse"; "Why Simple Shrinkage is Still Relevant For Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; "On the Stability of Basis Pursuit in the Presence of Noise."
SparseLab is a library of Matlab routines for finding sparse solutions to underdetermined systems. The library is available free of charge over the Internet. Versions are provided for Macintosh, UNIX, and Windows machines. Downloading and installation instructions are given here. SparseLab has over 400 .m files which are documented, indexed, and cross-referenced in various ways. In this document we suggest several ways to get started using SparseLab: (a) trying out the pedagogical examples, (b) running the demonstrations, which illustrate the use of SparseLab in published papers, and (c) browsing the extensive collection of source files, which are self-documenting. SparseLab makes available, in one package, all the code to reproduce all the figures in the included published articles. The interested reader can inspect the source code to see exactly what algorithms were used and how parameters were set in producing our figures, and can then modify the source to produce variations on our results. SparseLab has been developed, in part, because of exhortations by Jon Claerbout of Stanford that computational scientists should engage in "really reproducible" research. This document helps with installation and getting started, as well as describing the philosophy, limitations, and rules of the road for this software.
Multiscale Representations for Manifold-Valued Data
We describe multiscale representations for data observed on equispaced grids and taking values in manifolds such as the sphere $S^2$, the special orthogonal group $SO(3)$, the positive definite matrices $SPD(n)$, and the Grassmann manifolds $G(n,k)$. The representations are based on the deployment of Deslauriers--Dubuc and average-interpolating pyramids "in the tangent plane" of such manifolds, using the $Exp$ and $Log$ maps of those manifolds. The representations provide "wavelet coefficients" which can be thresholded, quantized, and scaled in much the same way as traditional wavelet coefficients. Tasks such as compression, noise removal, contrast enhancement, and stochastic simulation are facilitated by this representation. The approach applies to general manifolds but is particularly suited to the manifolds we consider, i.e., Riemannian symmetric spaces such as $S^2$, $SO(3)$, $SPD(n)$, and $G(n,k)$, where the $Exp$ and $Log$ maps are effectively computable. Applications to manifold-valued data sources of a geometric nature (motion, orientation, diffusion) seem particularly immediate. A software toolbox, SymmLab, can reproduce the results discussed in this paper.
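To make the tangent-plane construction concrete, here is a minimal sketch (an assumed illustration, not the SymmLab toolbox) of one level of an interpolating pyramid for data on the sphere, using the Exp and Log maps of $S^2$.

```python
# One pyramid level for sphere-valued samples: coarse = even samples, and the
# "wavelet coefficients" are tangent-plane (Log-map) residuals of the odd samples.
import numpy as np

def sphere_log(p, q):
    """Log map on S^2: tangent vector at p pointing towards q."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    return theta * (q - c * p) / np.linalg.norm(q - c * p)

def sphere_exp(p, v):
    """Exp map on S^2: move from p along tangent vector v."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def midpoint(p, q):
    """Geodesic midpoint, computed in the tangent plane at p."""
    return sphere_exp(p, 0.5 * sphere_log(p, q))

def analyze(samples):
    """Predict each odd sample from its even neighbours; keep the residual."""
    coarse = samples[0::2]
    details = []
    for k, odd in enumerate(samples[1::2]):
        pred = midpoint(coarse[k], coarse[min(k + 1, len(coarse) - 1)])
        details.append(sphere_log(pred, odd))   # coefficient lives in the tangent plane
    return coarse, details

# Smooth test signal on the sphere: detail coefficients should be small.
t = np.linspace(0, np.pi / 3, 9)
samples = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
coarse, details = analyze(samples)
print("max detail magnitude:", max(np.linalg.norm(d) for d in details))
```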
SparseLab Architecture
Changes and Enhancements for Release 2.0: 4 papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solutions May be Sparse"; "Why Simple Shrinkage is Still Relevant For Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; "On the Stability of Basis Pursuit in the Presence of Noise."
This document describes the architecture of SparseLab version 2.0. It is designed for users who already have had day-to-day interaction with the package and now need specific details about the architecture of the package, for example to modify components for their own research.
Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization
This paper shows that the solutions to various convex $\ell_1$ minimization problems are \emph{unique} if and only if a common set of conditions are satisfied. This result applies broadly to the basis pursuit model, basis pursuit denoising model, Lasso model, as well as other $\ell_1$ models that either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\le\sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I=\supp(x^*)$ and $s=\sign(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists $y$ such that $A_I^T y=s$ and $|a_i^T y|<1$ for $i\notin I$. This condition is previously known to be sufficient for the basis pursuit model to have a unique solution supported on $I$. Indeed, it is also necessary, and applies to a variety of other $\ell_1$ models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically.
Comment: 6 pages; revised version; submitted
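The numerical verification mentioned at the end can be organized as a small linear program that searches for the certificate $y$. The sketch below is an assumed implementation, not the authors' code, and the test instance is random.

```python
# Check uniqueness of a candidate l1 solution x*: A_I must have full column
# rank and some y must satisfy A_I^T y = sign(x*_I) with |a_i^T y| < 1 off I.
# The certificate y with the smallest off-support magnitude is found by an LP.
import numpy as np
from scipy.optimize import linprog

def is_unique_solution(A, x_star, tol=1e-9):
    m, n = A.shape
    I = np.flatnonzero(np.abs(x_star) > tol)
    s = np.sign(x_star[I])
    A_I = A[:, I]
    if np.linalg.matrix_rank(A_I) < len(I):          # full column rank required
        return False
    out = np.setdiff1d(np.arange(n), I)
    # LP in (y, t): minimize t  s.t.  A_I^T y = s,  |a_i^T y| <= t  for i not in I.
    c = np.zeros(m + 1); c[-1] = 1.0
    A_eq = np.hstack([A_I.T, np.zeros((len(I), 1))])
    A_ub = np.vstack([np.hstack([A[:, out].T, -np.ones((len(out), 1))]),
                      np.hstack([-A[:, out].T, -np.ones((len(out), 1))])])
    b_ub = np.zeros(2 * len(out))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=s,
                  bounds=[(None, None)] * m + [(0, None)], method="highs")
    return res.success and res.x[-1] < 1 - tol       # strict inequality off the support

# Illustrative check on a random instance.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 50))
x_star = np.zeros(50); x_star[[3, 17, 41]] = [1.0, -2.0, 0.5]
print("unique solution:", is_unique_solution(A, x_star))
```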
The determination of shock ramp width using the noncoplanar magnetic field component
We determine a simple expression for the ramp width of a collisionless fast
shock, based upon the relationship between the noncoplanar and main magnetic
field components. By comparing this predicted width with that measured during
an observation of a shock, the shock velocity can be determined from a single
spacecraft. For a range of low-Mach, low-beta bow shock observations made by
the ISEE-1 and -2 spacecraft, ramp widths determined from two-spacecraft
comparison and from this noncoplanar component relationship agree within 30%.
When two-spacecraft measurements are not available or are inefficient, this
technique provides a reasonable estimation of scale size for low-Mach shocks.
Comment: 6 pages, LaTeX (aguplus + agutex); packages: amsmath, times, graphicx, float, psfrag, verbatim; 3 postscript figures called by the file; submitted to Geophys. Res. Lett.
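For illustration, the single-spacecraft estimate described above reduces to dividing the predicted spatial ramp width by the observed crossing duration; the numbers below are placeholders, not values from the paper.

```python
# Shock speed in the spacecraft frame from a predicted ramp width and the
# observed crossing time; both inputs are illustrative placeholders.
predicted_ramp_width_km = 120.0    # from the noncoplanar-component relation (assumed value)
observed_crossing_time_s = 15.0    # duration of the ramp in the magnetometer time series

shock_speed_km_s = predicted_ramp_width_km / observed_crossing_time_s
print(f"estimated shock speed: {shock_speed_km_s:.1f} km/s")
```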