CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
Compressive sampling offers a new paradigm for acquiring signals that are
compressible with respect to an orthonormal basis. The major algorithmic
challenge in compressive sampling is to approximate a compressible signal from
noisy samples. This paper describes a new iterative recovery algorithm called
CoSaMP that delivers the same guarantees as the best optimization-based
approaches. Moreover, this algorithm offers rigorous bounds on computational
cost and storage. It is likely to be extremely efficient for practical problems
because it requires only matrix-vector multiplies with the sampling matrix. For
many cases of interest, the running time is just O(N*log^2(N)), where N is the
length of the signal.
Comment: 30 pages. Revised. Presented at Information Theory and Applications, 31 January 2008, San Diego.
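The greedy iteration the abstract alludes to can be sketched in a few lines. The following is a hypothetical NumPy rendering (the function name `cosamp` and its defaults are my own, not the authors' reference implementation); note that it uses only matrix-vector products with the sampling matrix and its transpose, which is the source of the claimed efficiency.

```python
import numpy as np

def cosamp(Phi, u, s, iters=30, tol=1e-10):
    """Hypothetical sketch of the CoSaMP iteration: recover an s-sparse x
    from samples u ~ Phi @ x using only products with Phi and Phi.T."""
    m, n = Phi.shape
    a = np.zeros(n)                       # current s-sparse approximation
    r = u.copy()                          # residual of the samples
    for _ in range(iters):
        y = Phi.T @ r                     # proxy for the residual signal
        omega = np.argsort(np.abs(y))[-2 * s:]    # 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(a))  # merge with old support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(Phi[:, T], u, rcond=None)[0]  # LS on support
        a = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]         # prune to s largest terms
        a[keep] = b[keep]
        r = u - Phi @ a                   # update the sample residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(u):
            break
    return a
```

With exact (noise-free) samples and a well-conditioned Gaussian sampling matrix, the iteration typically locks onto the true support and the least-squares step then recovers the signal exactly.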
Eigenvector Synchronization, Graph Rigidity and the Molecule Problem
The graph realization problem has received a great deal of attention in
recent years, due to its importance in applications such as wireless sensor
networks and structural biology. In this paper, we extend previous work and
propose the 3D-ASAP algorithm for the graph realization problem in R^3,
given a sparse and noisy set of distance measurements. 3D-ASAP
is a divide and conquer, non-incremental and non-iterative algorithm, which
integrates local distance information into a global structure determination.
Our approach starts with identifying, for every node, a subgraph of its 1-hop
neighborhood graph, which can be accurately embedded in its own coordinate
system. In the noise-free case, the computed coordinates of the sensors in each
patch must agree with their global positioning up to some unknown rigid motion,
that is, up to translation, rotation and possibly reflection. In other words,
to every patch there corresponds an element of the Euclidean group Euc(3) of
rigid transformations in R^3, and the goal is to estimate the group
elements that will properly align all the patches in a globally consistent way.
Furthermore, 3D-ASAP successfully incorporates information specific to the
molecule problem in structural biology, in particular information on known
substructures and their orientation. In addition, we also propose 3D-SP-ASAP, a
faster version of 3D-ASAP, which uses a spectral partitioning algorithm as a
preprocessing step for dividing the initial graph into smaller subgraphs. Our
extensive numerical simulations show that 3D-ASAP and 3D-SP-ASAP are very
robust to high levels of noise in the measured distances and to sparse
connectivity in the measurement graph, and compare favorably to similar
state-of-the-art localization algorithms.
Comment: 49 pages, 8 figures.
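The eigenvector-synchronization idea can be illustrated in its simplest one-dimensional form: recovering global reflections (signs in {-1, +1}) from noisy pairwise products. This toy sketch is my own construction; 3D-ASAP synchronizes over the full group Euc(3) of rigid motions, not just signs.

```python
import numpy as np

def sync_signs(H):
    """Toy 1-D synchronization: recover global signs z in {-1, +1}^n from a
    matrix of noisy pairwise products H[i, j] ~ z_i * z_j by taking the top
    eigenvector of H. Recovery is only possible up to a global sign flip."""
    vals, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    v = vecs[:, -1]                  # eigenvector of the largest eigenvalue
    return np.where(v >= 0, 1.0, -1.0)
```

Even when a fraction of the pairwise measurements are flipped, the top eigenvector remains strongly correlated with the true sign vector, which is the robustness property the abstract reports for the full 3D algorithm.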
FALKON: An Optimal Large Scale Kernel Method
Kernel methods provide a principled way to perform nonlinear, nonparametric
learning. They rely on solid functional analytic foundations and enjoy optimal
statistical properties. However, at least in their basic form, they have
limited applicability in large scale scenarios because of stringent
computational requirements in terms of time and especially memory. In this
paper, we take a substantial step in scaling up kernel methods, proposing
FALKON, a novel algorithm that can efficiently process millions of
points. FALKON is derived by combining several algorithmic principles, namely
stochastic subsampling, iterative solvers and preconditioning. Our theoretical
analysis shows that optimal statistical accuracy is achieved requiring
essentially O(n) memory and O(n sqrt(n)) time. An extensive experimental
analysis on large scale datasets shows that, even with a single machine, FALKON
outperforms previous state-of-the-art solutions, which exploit
parallel/distributed architectures.
Comment: NIPS 2017.
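The stochastic-subsampling ingredient can be sketched as plain Nystrom kernel ridge regression with randomly chosen centers. This is a deliberate simplification (the actual method additionally uses preconditioning and conjugate-gradient iterations, omitted here), and all names below are hypothetical.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_krr(X, y, m, lam, sigma, seed=0):
    """Nystrom kernel ridge regression with m randomly subsampled centers:
    only the subsampling ingredient of FALKON, fitting m coefficients
    instead of n, so memory scales with m rather than n^2."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=m, replace=False)]
    Knm = gaussian_kernel(X, centers, sigma)       # n x m cross-kernel
    Kmm = gaussian_kernel(centers, centers, sigma) # m x m center kernel
    # normal equations of the Nystrom-restricted ridge problem
    A = Knm.T @ Knm + lam * n * Kmm + 1e-8 * np.eye(m)  # jitter for stability
    alpha = np.linalg.solve(A, Knm.T @ y)
    return centers, alpha

def nystrom_predict(Xte, centers, alpha, sigma):
    return gaussian_kernel(Xte, centers, sigma) @ alpha
```

The m x m system above is what a method in this family solves iteratively with a preconditioner when m is large; the direct `solve` here is just for clarity of the sketch.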
A direct solver with O(N) complexity for variable coefficient elliptic PDEs discretized via a high-order composite spectral collocation method
A numerical method for solving elliptic PDEs with variable coefficients on
two-dimensional domains is presented. The method is based on high-order
composite spectral approximations and is designed for problems with smooth
solutions. The resulting system of linear equations is solved using a direct
(as opposed to iterative) solver that has optimal O(N) complexity for all
stages of the computation when applied to problems with non-oscillatory
solutions such as the Laplace and the Stokes equations. Numerical examples
demonstrate that the scheme is capable of computing solutions to high
relative accuracy, even for challenging problems such as highly
oscillatory Helmholtz problems and convection-dominated convection diffusion
equations. In terms of speed, it is demonstrated that a large problem with a
non-oscillatory solution was solved in 115 minutes on a personal
workstation with two quad-core 3.3 GHz CPUs. Since the solver is direct,
and the "solution operator" fits in RAM, any solves beyond the first are
very fast: in the same example, solves require only 30 seconds.
Comment: arXiv admin note: text overlap with arXiv:1302.599
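The "factor once, then every later solve is cheap" property of direct solvers can be illustrated with a toy dense Cholesky factorization of a 1-D Poisson matrix. This is only a stand-in for the idea, assuming nothing about the paper's actual O(N) hierarchical scheme.

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U."""
    x = np.zeros_like(y)
    for i in range(len(y) - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

n = 200
# 1-D Poisson matrix: a toy stand-in for the discretized elliptic operator
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = np.linalg.cholesky(A)            # factor once: the expensive step
for k in range(3):                   # every later solve reuses the factor
    b = np.sin((k + 1) * np.pi * np.linspace(0.0, 1.0, n))
    x = back_sub(L.T, forward_sub(L, b))   # two cheap triangular solves
```

An iterative solver would have to start from scratch for each new right-hand side; here the two triangular solves cost O(n^2) for a dense factor (and far less for the structured factors a fast direct solver builds).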
Dynamic Robust Transmission Expansion Planning
Recent breakthroughs in Transmission Network Expansion Planning (TNEP) have
demonstrated that the use of robust optimization, as opposed to stochastic
programming methods, renders the expansion planning problem considering
uncertainties computationally tractable for real systems. However, there is
still a yet unresolved and challenging problem as regards the resolution of the
dynamic TNEP problem (DTNEP), which considers the year-by-year representation
of uncertainties and investment decisions in an integrated way. This problem
has been considered highly complex and computationally intractable; as a
result, most research on the topic either focuses on very small case studies
or relies on heuristic methods, and studies of TNEP in the technical
literature typically adopt a wide range of simplifying assumptions. In this
paper, an adaptive robust transmission network expansion planning
formulation is proposed that retains the full dynamic complexity of the
problem.
The method overcomes the problem size limitations and computational
intractability associated with dynamic TNEP for realistic cases. Numerical
results from an illustrative example and the IEEE 118-bus system are presented
and discussed, demonstrating the benefits of this dynamic TNEP approach with
respect to classical methods.
Comment: 10 pages, 2 figures. This article has been accepted for publication
in a future issue of this journal, but has not been fully edited. Content may
change prior to final publication. Citation information: DOI
10.1109/TPWRS.2016.2629266, IEEE Transactions on Power Systems, 2016.
Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Block coordinate descent (BCD) methods are widely-used for large-scale
numerical optimization because of their cheap iteration costs, low memory
requirements, amenability to parallelization, and ability to exploit problem
structure. Three main algorithmic choices influence the performance of BCD
methods: the block partitioning strategy, the block selection rule, and the
block update rule. In this paper we explore all three of these building blocks
and propose variations for each that can lead to significantly faster BCD
methods. We (i) propose new greedy block-selection strategies that guarantee
more progress per iteration than the Gauss-Southwell rule; (ii) explore
practical issues like how to implement the new rules when using "variable"
blocks; (iii) explore the use of message-passing to compute matrix or Newton
updates efficiently on huge blocks for problems with a sparse dependency
between variables; and (iv) consider optimal active manifold identification,
which leads to bounds on the "active set complexity" of BCD methods and leads
to superlinear convergence for certain problems with sparse solutions (and in
some cases finite termination at an optimal solution). We support all of our
findings with numerical results for the classic machine learning problems of
least squares, logistic regression, multi-class logistic regression, label
propagation, and L1-regularization.
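The greedy (Gauss-Southwell) selection rule discussed above can be sketched in its simplest single-coordinate form on least squares. This is a toy version of my own; the paper works with blocks of coordinates and proposes selection rules that make strictly more progress than this one.

```python
import numpy as np

def greedy_cd_least_squares(A, b, iters=2000):
    """Single-coordinate toy version of greedy (Gauss-Southwell) coordinate
    descent for min_x 0.5 * ||A x - b||^2: each step exactly minimizes over
    the coordinate whose gradient entry has the largest magnitude."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)    # per-coordinate curvature (Lipschitz)
    r = A @ x - b                    # residual, kept in sync with x
    for _ in range(iters):
        g = A.T @ r                  # gradient of the objective
        j = int(np.argmax(np.abs(g)))      # Gauss-Southwell selection rule
        step = g[j] / col_sq[j]      # exact minimization along coordinate j
        x[j] -= step
        r -= step * A[:, j]          # cheap residual update, no full A @ x
    return x
```

The cheap per-iteration cost (one column update of the residual) is the property that makes BCD methods attractive at scale; the randomized alternative would replace the `argmax` with a uniformly sampled coordinate.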