Paved with Good Intentions: Analysis of a Randomized Block Kaczmarz Method
The block Kaczmarz method is an iterative scheme for solving overdetermined
least-squares problems. At each step, the algorithm projects the current
iterate onto the solution space of a subset of the constraints. This paper
describes a block Kaczmarz algorithm that uses a randomized control scheme to
choose the subset at each step. This algorithm is the first block Kaczmarz
method with an (expected) linear rate of convergence that can be expressed in
terms of the geometric properties of the matrix and its submatrices. The
analysis reveals that the algorithm is most effective when it is given a good
row paving of the matrix, a partition of the rows into well-conditioned blocks.
The operator theory literature provides detailed information about the
existence and construction of good row pavings. Together, these results yield
an efficient block Kaczmarz scheme that applies to many overdetermined
least-squares problems.
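The projection step described above can be sketched in a few lines of NumPy; the matrix sizes, the naive contiguous row paving, and the iteration count below are illustrative assumptions, not choices from the paper:

```python
import numpy as np

def block_kaczmarz(A, b, paving, iters=5000, seed=0):
    """Randomized block Kaczmarz sketch: at each step, draw one block of the
    row paving uniformly at random and project the current iterate onto the
    solution space of that block's equations (via the block pseudoinverse)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        rows = paving[rng.integers(len(paving))]    # randomized block choice
        Ab, bb = A[rows], b[rows]
        x = x + np.linalg.pinv(Ab) @ (bb - Ab @ x)  # project onto {z : Ab z = bb}
    return x

# demo on a consistent 60 x 20 system with a naive paving: 12 blocks of 5 rows
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
paving = [np.arange(i, i + 5) for i in range(0, 60, 5)]
x_hat = block_kaczmarz(A, b, paving)
```

For a consistent system, the iterate converges to the least-squares solution at an expected linear rate; a well-conditioned paving makes both the pseudoinverse application cheap and the contraction factor favorable.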
Mixed Operators in Compressed Sensing
Applications of compressed sensing motivate the possibility of using
different operators to encode and decode a signal of interest. Since it is
clear that the operators cannot be too different, we can view the discrepancy
between the two matrices as a perturbation. The stability of L1-minimization
and greedy algorithms to recover the signal in the presence of additive noise
is by now well-known. Recently however, work has been done to analyze these
methods with noise in the measurement matrix, which generates a multiplicative
noise term. This new framework of generalized perturbations (i.e., both
additive and multiplicative noise) extends the prior work on stable signal
recovery from incomplete and inaccurate measurements of Candes, Romberg and Tao
using Basis Pursuit (BP), and of Needell and Tropp using Compressive Sampling
Matching Pursuit (CoSaMP). We show, under reasonable assumptions, that the
stability of the reconstructed signal by both BP and CoSaMP is limited by the
noise level in the observation. Our analysis extends easily to arbitrary greedy
methods.
Comment: CISS 2010 (44th Annual Conference on Information Sciences and Systems)
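A small NumPy sketch of the reframing above: encoding with A but decoding with a slightly different matrix B = A + E makes the discrepancy E act as an additive noise term of size at most ||E|| ||x||. The dimensions, sparsity level, and perturbation scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)          # encoding matrix
E = 0.01 * rng.standard_normal((m, n)) / np.sqrt(m)   # discrepancy (perturbation)
B = A + E                                             # decoding matrix

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)  # s-sparse signal
y = A @ x                                             # measured with A

# a decoder that believes the matrix is B sees y = B x + e,
# where e = -E x is the multiplicative-noise term
e = y - B @ x
print(np.linalg.norm(e) <= np.linalg.norm(E, 2) * np.linalg.norm(x))  # → True
```

This is why the reconstruction error of BP or CoSaMP under matrix perturbation obeys additive-noise-style bounds, with the operator norm of E entering through the effective noise level.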
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
Compressive sampling offers a new paradigm for acquiring signals that are
compressible with respect to an orthonormal basis. The major algorithmic
challenge in compressive sampling is to approximate a compressible signal from
noisy samples. This paper describes a new iterative recovery algorithm called
CoSaMP that delivers the same guarantees as the best optimization-based
approaches. Moreover, this algorithm offers rigorous bounds on computational
cost and storage. It is likely to be extremely efficient for practical problems
because it requires only matrix-vector multiplies with the sampling matrix. For
many cases of interest, the running time is just O(N log^2 N), where N is the
length of the signal.
Comment: 30 pages. Revised. Presented at Information Theory and Applications, 31 January 2008, San Diego
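A compact NumPy sketch of the CoSaMP iteration (identify 2s candidates from the residual proxy, merge with the current support, estimate by least squares, prune to s); the problem sizes and halting tolerance below are illustrative assumptions, not the paper's:

```python
import numpy as np

def cosamp(Phi, y, s, iters=30):
    """CoSaMP sketch: each pass forms a signal proxy from the residual,
    merges the 2s largest proxy entries with the current support, solves
    least squares on the merged support, and prunes back to s terms."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(iters):
        r = y - Phi @ x
        if np.linalg.norm(r) < 1e-10:
            break                                    # residual negligible; halt
        proxy = Phi.T @ r                            # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest components
        T = np.union1d(omega, np.flatnonzero(x))     # merge supports
        b_T, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)  # LS estimate on T
        idx = np.argsort(np.abs(b_T))[-s:]           # keep s largest entries
        x = np.zeros(n)
        x[T[idx]] = b_T[idx]
    return x

# demo: recover a 5-sparse signal from 80 Gaussian samples
rng = np.random.default_rng(2)
m, n, s = 80, 200, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true
x_hat = cosamp(Phi, y, s)
```

Note that each pass needs only matrix-vector products with Phi and its transpose plus a small least-squares solve on at most 3s columns, which is the source of the algorithm's low cost.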
Signal Space CoSaMP for Sparse Recovery with Redundant Dictionaries
Compressive sensing (CS) has recently emerged as a powerful framework for
acquiring sparse signals. The bulk of the CS literature has focused on the case
where the acquired signal has a sparse or compressible representation in an
orthonormal basis. In practice, however, there are many signals that cannot be
sparsely represented or approximated using an orthonormal basis, but that do
have sparse representations in a redundant dictionary. Standard results in CS
can sometimes be extended to handle this case provided that the dictionary is
sufficiently incoherent or well-conditioned, but these approaches fail to
address the case of a truly redundant or overcomplete dictionary. In this paper
we describe a variant of the iterative recovery algorithm CoSaMP for this more
challenging setting. We utilize the D-RIP, a condition on the sensing matrix
analogous to the well-known restricted isometry property. In contrast to prior
work, the method and analysis are "signal-focused"; that is, they are oriented
around recovering the signal rather than its dictionary coefficients. Under the
assumption that we have a near-optimal scheme for projecting vectors in signal
space onto the model family of candidate sparse signals, we provide provable
recovery guarantees. Developing a practical algorithm that can provably compute
the required near-optimal projections remains a significant open problem, but
we include simulation results using various heuristics that empirically exhibit
superior performance to traditional recovery algorithms.
California's Most Vulnerable Parents: When Maltreated Children Have Children
This report takes an in-depth look at the intersection between teen births, child maltreatment, and involvement with the child protection system. Putnam-Hornstein, along with other researchers at USC and the University of California, Berkeley, linked and then analyzed roughly 1.5 million California birth records and 1 million CPS records, with a second phase of research focusing on the maltreatment risk of children born to adolescent mothers. In 2012, California became one of the first states in the nation to extend foster youth status until age 21. Different programs and services will likely be required to adequately respond to the needs and circumstances of non-minor youth who remain in the foster care system, particularly in the area of parenting supports. This report finds that as many as one in three female youth in California may be parenting by the time they exit the foster care system on their 21st birthday.
Greedy Signal Recovery Review
The two major approaches to sparse recovery are L1-minimization and greedy
methods. Recently, Needell and Vershynin developed Regularized Orthogonal
Matching Pursuit (ROMP) that has bridged the gap between these two approaches.
ROMP is the first stable greedy algorithm providing uniform guarantees.
Even more recently, Needell and Tropp developed the stable greedy algorithm
Compressive Sampling Matching Pursuit (CoSaMP). CoSaMP provides uniform
guarantees and improves upon the stability bounds and RIC requirements of ROMP.
CoSaMP offers rigorous bounds on computational cost and storage. In many cases,
the running time is just O(N log N), where N is the ambient dimension of the
signal. This review summarizes these major advances.
A Sampling Kaczmarz-Motzkin Algorithm for Linear Feasibility
We combine two iterative algorithms for solving large-scale systems of linear inequalities, the relaxation method of Agmon, Motzkin et al. and the randomized Kaczmarz method. We obtain a family of algorithms that generalize and extend both projection-based techniques. We prove several convergence results, and our computational experiments show our algorithms often outperform the original methods.
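A minimal NumPy sketch of the combined scheme: sample a few rows uniformly at random (the Kaczmarz ingredient), then project onto the most violated constraint within the sample (the Motzkin ingredient). The sample size, iteration cap, and test system below are illustrative assumptions:

```python
import numpy as np

def skm(A, b, beta=10, iters=20000, seed=0):
    """Sampling Kaczmarz-Motzkin sketch for A x <= b: sample beta rows
    uniformly at random, then take a Kaczmarz projection step onto the
    most violated constraint among the sampled rows."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        if (A @ x <= b + 1e-12).all():
            break                                    # feasible point found
        S = rng.choice(m, size=beta, replace=False)  # random row sample
        viol = A[S] @ x - b[S]                       # positive entries = violated
        j = int(np.argmax(viol))
        if viol[j] <= 0:
            continue                                 # sampled rows all satisfied
        i = S[j]
        x = x - (A[i] @ x - b[i]) / (A[i] @ A[i]) * A[i]  # project onto A_i x <= b_i
    return x

# demo: a random feasible system of 200 inequalities in 20 variables
rng = np.random.default_rng(3)
m, n = 200, 20
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)
b = A @ x0 + 0.5            # x0 is strictly feasible, so solutions exist
x_hat = skm(A, b)
```

Setting beta = 1 recovers randomized Kaczmarz, while beta = m recovers Motzkin's maximal-violation relaxation method, so this sampled variant interpolates between the two.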