Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related
This paper considers a recently emerged hyperspectral unmixing formulation
based on sparse regression of a self-dictionary multiple measurement vector
(SD-MMV) model, wherein the measured hyperspectral pixels are used as the
dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is
special in that it allows simultaneous identification of the endmember spectral
signatures and the number of endmembers. Previous SD-MMV studies mainly focus
on convex relaxations. In this study, we explore the alternative of greedy
pursuit, which generally provides efficient and simple algorithms. In
particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal
matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be
closely related to some existing pure pixel search algorithms, especially the
successive projection algorithm (SPA). Thus, a link between SD-MMV and pure
pixel search is revealed. We then perform exact recovery analyses, and prove
that the proposed greedy algorithm is robust to noise---including its
identification of the (unknown) number of endmembers---under a sufficiently low
noise level. The identification performance of the proposed greedy algorithm is
demonstrated through both synthetic and real-data experiments.
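For concreteness, a minimal simultaneous orthogonal matching pursuit (SOMP) sketch is given below; in the self-dictionary setting the dictionary `D` would be the measured pixel matrix `Y` itself. This is a generic textbook SOMP, not the paper's exact SD-MMV algorithm, and all names are illustrative.

```python
import numpy as np

def somp(Y, D, k):
    """Simultaneous OMP: greedily select k atoms of D that jointly
    explain all columns of Y (self-dictionary case: pass D = Y)."""
    R = Y.copy()
    support = []
    for _ in range(k):
        # score each atom by its total correlation with all residual columns
        scores = np.linalg.norm(D.T @ R, axis=1)
        scores[support] = -np.inf            # never pick the same atom twice
        support.append(int(np.argmax(scores)))
        # project Y onto the span of the selected atoms, update the residual
        A = D[:, support]
        R = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]
    return support
```

A usage sketch: with measurements that are exact combinations of two dictionary atoms, the two-step selection returns those atoms' indices.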
A Computationally Efficient Discrete Bit-Loading Algorithm for OFDM Systems Subject to Spectral-Compatibility Limits
This paper considers bit-loading algorithms to maximize throughput under total power and spectral mask constraints in interference-free OFDM systems. The contribution is twofold. First, we propose a simple criterion to switch between two well-known algorithms from the literature: the conventional Greedy algorithm and the Greedy-based bit-removing algorithm (with maximum allowable bit loading initialization). Second, we present a new low-complexity loading algorithm that uses the bit vector obtained by rounding the water-filling solution of the associated continuous-input rate maximization problem as an efficient initial bit vector for the Greedy algorithm. We theoretically prove that this bit vector has two interesting properties. The first states that it is an efficient bit vector, i.e., there is no movement of a bit from one subcarrier to another that reduces the total used power. The second states that the optimized throughput, starting from this initial bit vector, is achieved by adding or removing bits on each subcarrier at most once. Simulation results show the efficiency of the proposed algorithm, i.e., the achievable throughput is maximized with a significant reduction in computation cost as compared to many algorithms in the literature.
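The Greedy baseline the paper builds on can be sketched as follows: repeatedly add one bit on the subcarrier whose incremental power cost is smallest, subject to the total power budget and the per-subcarrier mask. This is a generic textbook greedy bit-loading sketch under the SNR-gap approximation, not the paper's accelerated algorithm; `gamma` (the SNR gap) and the other names are illustrative.

```python
import heapq

def greedy_bitload(gains, p_total, p_mask, gamma=1.0, b_max=10):
    """Greedy discrete bit loading: under the gap approximation, carrying b
    bits on subcarrier i costs (2**b - 1) * gamma / gains[i] watts, so the
    increment from b to b+1 bits costs 2**b * gamma / gains[i]."""
    n = len(gains)
    bits = [0] * n
    power = [0.0] * n
    used = 0.0
    # min-heap of (incremental power, subcarrier) for the next bit on each carrier
    heap = [(gamma / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    while heap:
        dp, i = heapq.heappop(heap)
        if used + dp > p_total or power[i] + dp > p_mask[i]:
            continue  # this increment violates the budget or the mask; drop it
        bits[i] += 1
        power[i] += dp
        used += dp
        if bits[i] < b_max:
            heapq.heappush(heap, (2 ** bits[i] * gamma / gains[i], i))
    return bits
```

For example, with gains `[4.0, 1.0]` and a total budget of 3.0, the cheap subcarrier absorbs three bits (cost 0.25 + 0.5 + 1.0) and the weak one gets a single bit (cost 1.0) before the budget blocks further increments.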
Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations
Applying compressive sensing in practice requires taking various
kinds of perturbations into consideration. In this paper, the recovery
performance of greedy pursuits with replacement for sparse recovery is analyzed
when both the measurement vector and the sensing matrix are contaminated with
additive perturbations. Specifically, greedy pursuits with replacement include
three algorithms, compressive sampling matching pursuit (CoSaMP), subspace
pursuit (SP), and iterative hard thresholding (IHT), where the support
estimation is evaluated and updated in each iteration. Based on the restricted
isometry property, a unified form of the error bounds of these recovery
algorithms is derived under general perturbations for compressible signals. The
results reveal that the recovery performance is stable against both
perturbations. In addition, these bounds are compared with that of oracle
recovery---the least-squares solution with the locations of the largest entries
in magnitude known a priori. The comparison shows that the error bounds of
these algorithms differ only in coefficients from the lower bound of oracle
recovery for certain signals and perturbations, which reveals that
oracle-order recovery performance of greedy pursuits with replacement is
guaranteed. Numerical simulations are performed to verify the conclusions.
Comment: 27 pages, 4 figures, 5 tables
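Of the three algorithms, IHT is the simplest to sketch. A minimal noiseless version follows; the unit gradient step assumes the sensing matrix has spectral norm at most 1 (the usage below rescales it accordingly). This is a generic textbook IHT, not the perturbed setting analyzed in the paper.

```python
import numpy as np

def iht(A, y, k, iters=300):
    """Iterative hard thresholding: a gradient step on ||y - Ax||^2
    followed by keeping only the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x)           # unit-step gradient descent
        idx = np.argsort(np.abs(x))[:-k]    # all but the k largest entries
        x[idx] = 0.0                        # hard threshold to sparsity k
    return x
```

On an easy random instance (50 equations, 100 unknowns, 3 nonzeros, matrix rescaled to unit spectral norm), the iteration recovers both the support and the coefficient values of the true sparse vector.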
Optimal Sparse Decision Trees
Decision tree algorithms have been among the most popular algorithms for
interpretable (transparent) machine learning since the early 1980s. The
problem that has plagued decision tree algorithms since their inception is
their lack of optimality, or lack of guarantees of closeness to optimality:
decision tree algorithms are often greedy or myopic, and sometimes produce
unquestionably suboptimal models. Hardness of decision tree optimization is
both a theoretical and practical obstacle, and even careful mathematical
programming approaches have not been able to solve these problems efficiently.
This work introduces the first practical algorithm for optimal decision trees
for binary variables. The algorithm is a co-design of analytical bounds that
reduce the search space and modern systems techniques, including data
structures and a custom bit-vector library. Our experiments highlight
advantages in scalability, speed, and proof of optimality.
Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada
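The gap between greedy and optimal trees shows up already on XOR-type data, where no single split reduces error, so greedy splitting stalls while an exhaustive regularized search finds the depth-2 tree. The toy branch-and-bound below (objective: misclassification rate plus `lam` per leaf, with the trivial bound that any split costs at least `2 * lam` used as the prune) only illustrates the search idea; the paper's algorithm relies on much tighter analytical bounds and specialized data structures, and all names here are illustrative.

```python
def search(X, y, feats, lam, depth, n_total):
    """Return (cost, tree) minimizing misclassified/n_total + lam * leaves
    on the subset (X, y); X holds binary feature tuples, y holds 0/1 labels."""
    ones = sum(y)
    # option 1: stop here with a majority-vote leaf (cost: error + one leaf)
    best_cost = min(ones, len(y) - ones) / n_total + lam
    best_tree = ('leaf', int(2 * ones >= len(y)))
    if depth == 0:
        return best_cost, best_tree
    for i, f in enumerate(feats):
        # lower bound: a split creates at least 2 leaves, so it costs at
        # least 2*lam even with zero error -- prune if that cannot win
        if 2 * lam >= best_cost:
            break
        left = [(x, t) for x, t in zip(X, y) if x[f] == 0]
        right = [(x, t) for x, t in zip(X, y) if x[f] == 1]
        if not left or not right:
            continue  # degenerate split, skip
        rest = feats[:i] + feats[i + 1:]
        cl, tl = search([x for x, _ in left], [t for _, t in left],
                        rest, lam, depth - 1, n_total)
        cr, tr = search([x for x, _ in right], [t for _, t in right],
                        rest, lam, depth - 1, n_total)
        if cl + cr < best_cost:
            best_cost, best_tree = cl + cr, ('split', f, tl, tr)
    return best_cost, best_tree
```

On the four XOR points with `lam = 0.01`, the optimal tree splits on both features for zero error and four leaves (cost 0.04), beating the single majority leaf (cost 0.51) that a myopic criterion would settle for.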
Performance Comparisons of Greedy Algorithms in Compressed Sensing
Compressed sensing has motivated the development of numerous sparse approximation algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, greedy sparse approximation algorithms have been observed both to recover the sparsest solution for similar problem sizes as other algorithms and to be computationally efficient; however, little theory is known for their average-case behavior. We conduct a large-scale empirical investigation into the behavior of three state-of-the-art greedy algorithms: NIHT, HTP, and CSMPSP. The investigation considers a variety of random classes of linear systems. The regions of the problem size in which each algorithm is able to reliably recover the sparsest solution are accurately determined, and throughout these regions additional performance characteristics are presented. Contrasting the recovery regions and average computational time for each algorithm, we present algorithm selection maps which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Though no one algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of large recovery region, absolute recovery time, and robustness of these properties to additive noise and across a variety of problem classes. The algorithm selection maps presented here are the first of their kind for compressed sensing.
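As a reference point for the algorithms compared, a minimal hard thresholding pursuit (HTP) sketch is shown: a gradient step proposes a size-k support, then a least-squares fit on that support debiases the estimate. This is a bare textbook version with a conservative fixed step size, not the tuned implementations benchmarked in the investigation.

```python
import numpy as np

def htp(A, y, k, iters=50):
    """Hard thresholding pursuit: alternate a gradient step (to choose a
    support of size k) with an exact least-squares fit on that support."""
    n = A.shape[1]
    x = np.zeros(n)
    mu = 1.0 / np.linalg.norm(A, ord=2) ** 2   # conservative step size
    for _ in range(iters):
        g = x + mu * A.T @ (y - A @ x)         # gradient step on ||y - Ax||^2
        S = np.argsort(np.abs(g))[-k:]         # k largest-magnitude entries
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]  # debias on support
    return x
```

Because the least-squares step is exact once the correct support is found, HTP returns the true sparse vector to machine precision on a consistent noiseless instance.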