Data-Driven Time-Frequency Analysis
In this paper, we introduce a new adaptive data analysis method to study
trend and instantaneous frequency of nonlinear and non-stationary data. This
method is inspired by the Empirical Mode Decomposition method (EMD) and the
recently developed compressed (compressive) sensing theory. The main idea is to
look for the sparsest representation of multiscale data within the largest
possible dictionary consisting of intrinsic mode functions of the form
a(t)cos(θ(t)), where the envelope a belongs to V(θ), the space of
functions smoother than cos(θ(t)), and θ' ≥ 0. This problem can
be formulated as a nonlinear optimization problem. In order to solve this
optimization problem, we propose a nonlinear matching pursuit method by
generalizing the classical matching pursuit for the optimization problem.
One important advantage of this nonlinear matching pursuit method is that it
can be implemented very efficiently and is stable to noise. Further, we provide a
convergence analysis of our nonlinear matching pursuit method under certain
scale separation assumptions. Extensive numerical examples will be given to
demonstrate the robustness of our method and comparison will be made with the
EMD/EEMD method. We also apply our method to study data without scale
separation, data with intra-wave frequency modulation, and incomplete or
under-sampled data.
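The nonlinear matching pursuit described above generalizes the classical matching pursuit iteration. As background, a minimal NumPy sketch of classical matching pursuit over a generic unit-norm dictionary (not the authors' dictionary of intrinsic mode functions, which would require the nonlinear optimization they describe) looks like this:

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Classical matching pursuit over a dictionary D with unit-norm columns.

    Each iteration subtracts the best single-atom fit from the residual;
    unlike OMP, previously chosen coefficients are never re-fit.
    """
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual                  # correlations with each atom
        j = int(np.argmax(np.abs(corr)))       # most correlated atom
        coeffs[j] += corr[j]                   # greedy coefficient update
        residual = residual - corr[j] * D[:, j]
    return coeffs, residual
```

The nonlinear version in the paper replaces the fixed atoms with adaptively chosen intrinsic mode functions, but the greedy subtract-the-best-fit structure is the same.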
Approximating the ground state of fermion system by multiple determinant states: matching pursuit approach
We present a simple and stable numerical method to approximate the ground
state of a quantum many-body system by multiple determinant states. This method
searches these determinant states one by one according to the matching pursuit
algorithm. The first determinant state is identical to that of the Hartree-Fock
theory. Calculations for two-dimensional Hubbard model serve as a
demonstration.
Comment: 5 pages, 1 figure
Greed is good: algorithmic results for sparse approximation
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
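The OMP algorithm analyzed above can be sketched in a few lines of NumPy. This is a generic illustration, not the article's analysis: it greedily selects the atom most correlated with the residual, then re-fits all selected atoms by least squares (the step that distinguishes OMP from plain matching pursuit):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: select k atoms of D to approximate y.

    D: (m, n) dictionary with unit-norm columns; y: (m,) signal; k: sparsity.
    Returns the selected support and the least-squares coefficients on it.
    """
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Greedy selection: atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Orthogonal step: re-fit all selected atoms by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs
```

On an exactly sparse signal over a sufficiently incoherent dictionary, the article's condition guarantees this loop recovers the true support.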
Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations
Applying the theory of compressive sensing in practice always takes different
kinds of perturbations into consideration. In this paper, the recovery
performance of greedy pursuits with replacement for sparse recovery is analyzed
when both the measurement vector and the sensing matrix are contaminated with
additive perturbations. Specifically, greedy pursuits with replacement include
three algorithms, compressive sampling matching pursuit (CoSaMP), subspace
pursuit (SP), and iterative hard thresholding (IHT), where the support
estimation is evaluated and updated in each iteration. Based on restricted
isometry property, a unified form of the error bounds of these recovery
algorithms is derived under general perturbations for compressible signals. The
results reveal that the recovery performance is stable against both
perturbations. In addition, these bounds are compared with that of oracle
recovery --- the least-squares solution with the locations of some of the
largest entries in magnitude known a priori. The comparison shows that the
error bounds of these algorithms differ from the lower bound of oracle
recovery only in coefficients for certain signals and perturbations, which
reveals that oracle-order recovery performance of greedy pursuits with
replacement is guaranteed. Numerical simulations are performed to verify the
conclusions.
Comment: 27 pages, 4 figures, 5 tables
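Of the three greedy pursuits with replacement named above, iterative hard thresholding (IHT) is the simplest to sketch: each iteration takes a gradient step on the data-fit term and then re-estimates the support by keeping only the k largest-magnitude entries. The sketch below is generic (unit step size, which is adequate for well-conditioned A but not the general RIP-based setting of the paper):

```python
import numpy as np

def iht(A, y, k, iters=100, step=1.0):
    """Iterative hard thresholding for k-sparse recovery from y ≈ A x.

    Each iteration: gradient step on ||y - A x||^2 / 2, then keep only
    the k largest-magnitude entries (support re-estimated every pass).
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))     # gradient step
        keep = np.argsort(np.abs(x))[-k:]      # indices of k largest magnitudes
        mask = np.zeros_like(x, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0                         # hard threshold: zero the rest
    return x
```

CoSaMP and SP follow the same replace-the-support pattern but add a least-squares re-fit on an enlarged candidate support each iteration.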