Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations
Applying the theory of compressive sensing in practice always requires taking
different kinds of perturbations into consideration. In this paper, the recovery
performance of greedy pursuits with replacement for sparse recovery is analyzed
when both the measurement vector and the sensing matrix are contaminated with
additive perturbations. Specifically, greedy pursuits with replacement include
three algorithms, compressive sampling matching pursuit (CoSaMP), subspace
pursuit (SP), and iterative hard thresholding (IHT), where the support
estimation is evaluated and updated in each iteration. Based on the restricted
isometry property, a unified form of the error bounds of these recovery
algorithms is derived under general perturbations for compressible signals. The
results reveal that the recovery performance is stable against both
perturbations. In addition, these bounds are compared with that of oracle
recovery -- the least squares solution with the locations of some of the
largest entries in magnitude known a priori. The comparison shows that the
error bounds of these algorithms differ only in coefficients from the lower
bound of oracle recovery for certain signals and perturbations, which reveals
that oracle-order recovery performance of greedy pursuits with replacement is
guaranteed. Numerical simulations are performed to verify the conclusions.

Comment: 27 pages, 4 figures, 5 tables
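The two objects the abstract compares can be illustrated with a minimal NumPy sketch: IHT, which re-evaluates the support estimate in every iteration, and the oracle least-squares baseline that knows the support a priori. The matrix sizes, step size, and iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def iht(y, A, k, iters=500, step=0.2):
    """Iterative hard thresholding: after each gradient step on
    ||y - Ax||^2 / 2, keep only the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)     # gradient step
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # hard threshold to k-sparse
    return x

def oracle_ls(y, A, support):
    """Oracle recovery: least squares with the support known a priori."""
    x = np.zeros(A.shape[1])
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```

CoSaMP and SP differ from IHT in how the candidate support is proposed and pruned, but all three re-evaluate and update the support estimate in each iteration, which is the structural property the unified error bound exploits.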
An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem
We obtain a 1.5-approximation algorithm for the metric uncapacitated facility
location problem (UFL), which improves on the previously best known
1.52-approximation algorithm by Mahdian, Ye and Zhang. Note, that the
approximability lower bound by Guha and Khuller is 1.463.
An algorithm is a {\em $(\lambda_f,\lambda_c)$-approximation algorithm} if
the solution it produces has total cost at most $\lambda_f \cdot F^* +
\lambda_c \cdot C^*$, where $F^*$ and $C^*$ are the facility and the connection
cost of an optimal solution. Our new algorithm, which is a modification of the
$(1+2/e)$-approximation algorithm of Chudak and Shmoys, is a
(1.6774,1.3738)-approximation algorithm for the UFL problem and is the first
one that touches the approximability limit curve $(\lambda_f, 1+2e^{-\lambda_f})$
established by Jain, Mahdian and Saberi. As a consequence, we obtain the first
optimal approximation algorithm for instances dominated by connection costs.
When combined with a (1.11,1.7764)-approximation algorithm proposed by Jain et
al., and later analyzed by Mahdian et al., we obtain the overall approximation
guarantee of 1.5 for the metric UFL problem. We also describe how to use our
algorithm to improve the approximation ratio for the 3-level version of UFL.

Comment: A journal version
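The bifactor notion above splits the cost of a UFL solution into a facility part F and a connection part C, and a bifactor guarantee bounds them separately against the optimal pair (F*, C*). The hypothetical brute-force evaluator below (not any of the approximation algorithms discussed in the paper) just computes these two components and the optimum on a tiny instance, to make the definition concrete.

```python
import itertools

def ufl_cost(open_set, f, c):
    """Facility cost and connection cost when the facilities in open_set
    are opened and every client connects to its cheapest open facility."""
    F = sum(f[i] for i in open_set)
    C = sum(min(c[i][j] for i in open_set) for j in range(len(c[0])))
    return F, C

def ufl_optimum(f, c):
    """Brute-force optimum (F, C) over all nonempty facility subsets;
    only feasible for tiny instances."""
    best = None
    for r in range(1, len(f) + 1):
        for S in itertools.combinations(range(len(f)), r):
            F, C = ufl_cost(S, f, c)
            if best is None or F + C < best[0] + best[1]:
                best = (F, C)
    return best
```

Against the pair returned by `ufl_optimum`, a bifactor (1.6774,1.3738)-approximation may pay up to 1.6774 times the facility part plus 1.3738 times the connection part, which is why such algorithms excel on instances dominated by one of the two cost types.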
Stochastic subspace correction in Hilbert space
We consider an incremental approximation method for solving variational
problems in infinite-dimensional Hilbert spaces, where in each step a randomly
and independently selected subproblem from an infinite collection of
subproblems is solved. We show that convergence rates for the expectation of
the squared error can be guaranteed under weaker conditions than previously
established in [Constr. Approx. 44:1 (2016), 121-139]. A connection to the
theory of learning algorithms in reproducing kernel Hilbert spaces is revealed.

Comment: 15 pages
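The simplest finite-dimensional instance of this incremental scheme is randomized Gauss-Seidel for a symmetric positive definite system: each step exactly solves the variational subproblem restricted to one randomly and independently selected coordinate subspace. The sketch below is only an illustration of that idea in R^n, not the paper's infinite-dimensional Hilbert-space setting.

```python
import numpy as np

def stochastic_subspace_correction(A, b, iters=5000, seed=0):
    """Randomized Gauss-Seidel for SPD A: each step exactly minimizes the
    energy 0.5*x@A@x - b@x over one randomly chosen coordinate subspace."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(iters):
        i = rng.integers(len(b))   # random subproblem selection
        r = b[i] - A[i] @ x        # residual of the i-th equation
        x[i] += r / A[i, i]        # exact solve on span(e_i)
    return x
```

In this toy setting the expected energy error contracts by a fixed factor per step, a finite-dimensional shadow of the convergence rates for the expectation of the squared error studied in the paper.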