Faster Geometric Algorithms via Dynamic Determinant Computation
The computation of determinants or their signs is the core procedure in many
important geometric algorithms, such as convex hull, volume and point location.
As the dimension of the computation space grows, a higher percentage of the
total computation time is consumed by these computations. In this paper we
study the sequences of determinants that appear in geometric algorithms. The
computation of a single determinant is accelerated by using the information
from the previous computations in that sequence.
We propose two dynamic determinant algorithms with quadratic arithmetic
complexity when employed in convex hull and volume computations, and with
linear arithmetic complexity when used in point location problems. We implement
the proposed algorithms and perform an extensive experimental analysis. On one
hand, our analysis serves as a performance study of state-of-the-art
determinant algorithms and implementations. On the other hand, we demonstrate
the superiority of our methods over state-of-the-art implementations of
determinant and geometric algorithms. Our experimental results include speed-ups of 20 and
78 times in volume and point location computations in dimensions 6 and 11, respectively.
Comment: 29 pages, 8 figures, 3 tables
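As a rough illustration of the kind of one-column update that underlies such dynamic determinant computations, the sketch below combines the matrix determinant lemma with the Sherman-Morrison formula: replacing one column of a matrix whose inverse and determinant are already known costs O(d^2) arithmetic operations instead of O(d^3). This is a generic textbook-style sketch, not the paper's implementation; the function name and the usage setup are assumptions.

```python
import numpy as np

def replace_column(A_inv, det_A, new_col, j):
    """Dynamic determinant update: replace column j of a matrix known only
    through its inverse A_inv and determinant det_A by new_col.

    The matrix determinant lemma gives the new determinant and the
    Sherman-Morrison formula updates the inverse, both in O(d^2) arithmetic
    operations instead of the O(d^3) cost of a fresh decomposition."""
    d = A_inv.shape[0]
    w = A_inv @ new_col              # w = A^{-1} c
    pivot = w[j]                     # = 1 + e_j^T A^{-1} (c - a_j)
    if pivot == 0:                   # the updated matrix would be singular
        return 0.0, None
    det_new = det_A * pivot          # matrix determinant lemma
    e_j = np.zeros(d); e_j[j] = 1.0
    A_inv_new = A_inv - np.outer(w - e_j, A_inv[j]) / pivot  # Sherman-Morrison
    return det_new, A_inv_new

# Usage: successive matrices in a sequence differ in a single column.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
det_A, A_inv = np.linalg.det(A), np.linalg.inv(A)
c = rng.standard_normal(6)
det_fast, _ = replace_column(A_inv, det_A, c, 2)
B = A.copy(); B[:, 2] = c
assert np.isclose(det_fast, np.linalg.det(B))
```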
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.Comment: 43 pages, 5 figure
Engineering Art Galleries
The Art Gallery Problem is one of the most well-known problems in
Computational Geometry, with a rich history in the study of algorithms,
complexity, and variants. Recently there has been a surge in experimental work
on the problem. In this survey, we describe this work, show the chronology of
developments, and compare current algorithms, including two unpublished
versions, in an exhaustive experiment. Furthermore, we show what core
algorithmic ingredients have led to recent successes.
Computing Real Roots of Real Polynomials ... and now For Real!
Very recent work introduces an asymptotically fast subdivision algorithm,
denoted ANewDsc, for isolating the real roots of a univariate real polynomial.
The method combines Descartes' Rule of Signs to test intervals for the
existence of roots, Newton iteration to speed up convergence against clusters
of roots, and approximate computation to decrease the required precision. It
achieves record bounds on the worst-case complexity for the considered problem,
matching the complexity of Pan's method for computing all complex roots and
improving upon the complexity of other subdivision methods by several
orders of magnitude.
In the article at hand, we report on an implementation of ANewDsc on top of
the RS root isolator. RS is a highly efficient realization of the classical
Descartes method and currently serves as the default real root solver in Maple.
We describe crucial design changes within ANewDsc and RS that led to a
high-performance implementation without harming the theoretical complexity of
the underlying algorithm.
With an excerpt of our extensive collection of benchmarks, available online
at http://anewdsc.mpi-inf.mpg.de/, we illustrate that the theoretical gain in
performance of ANewDsc over other subdivision methods also transfers into
practice. These experiments also show that our new implementation outperforms
both RS and mature competitors by orders of magnitude for notoriously hard instances
with clustered roots. For all other instances, we avoid almost any overhead by
integrating additional optimizations and heuristics.
Comment: Accepted for presentation at the 41st International Symposium on
Symbolic and Algebraic Computation (ISSAC), July 19--22, 2016, Waterloo,
Ontario, Canada
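To make the interval test concrete, the sketch below applies Descartes' Rule of Signs to an interval via the usual Moebius transformation onto (0, oo). It is a textbook-style illustration of the subdivision primitive, not code from ANewDsc or RS, and the function names and example polynomial are assumptions.

```python
def sign_variations(coeffs):
    """Number of sign changes in a coefficient sequence (zeros skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def compose_linear(p, a, b):
    """Coefficients of p(a + b*t), lowest degree first, via Horner's scheme."""
    q = [p[-1]]
    for c in reversed(p[:-1]):
        new = [0.0] * (len(q) + 1)
        for i, qi in enumerate(q):          # multiply q by (a + b*t)
            new[i] += a * qi
            new[i + 1] += b * qi
        new[0] += c                         # ... then add the next coefficient
        q = new
    return q

def descartes_interval_test(p, a, b):
    """Descartes bound on the number of real roots of p in the open interval (a, b).

    The map t -> (a + b*s)/(1 + s) sends (0, oo) onto (a, b); the sign variations
    of (1+s)^n * p((a + b*s)/(1 + s)) bound the root count: 0 excludes roots,
    1 certifies exactly one, larger values trigger a subdivision step.
    p: coefficient list, lowest degree first."""
    q = compose_linear(p, a, b - a)         # q(t) = p(a + (b-a)t): roots moved to (0, 1)
    q_rev = list(reversed(q))               # t^n q(1/t): roots moved to (1, oo)
    r = compose_linear(q_rev, 1.0, 1.0)     # substitute t -> s + 1: roots moved to (0, oo)
    return sign_variations(r)

# p(x) = (x - 0.3)(x - 0.4)(x + 2) = x^3 + 1.3 x^2 - 1.28 x + 0.24
p = [0.24, -1.28, 1.3, 1.0]
print(descartes_interval_test(p, 0.0, 1.0))   # 2: matches the two roots 0.3 and 0.4
print(descartes_interval_test(p, 0.5, 1.0))   # 0: no root in (0.5, 1)
```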
A framework for deflated and augmented Krylov subspace methods
We consider deflation and augmentation techniques for accelerating the
convergence of Krylov subspace methods for the solution of nonsingular linear
algebraic systems. Despite some formal similarity, the two techniques are
conceptually different from preconditioning. Deflation (in the sense the term
is used here) "removes" certain parts from the operator making it singular,
while augmentation adds a subspace to the Krylov subspace (often the one that
is generated by the singular operator); in contrast, preconditioning changes
the spectrum of the operator without making it singular. Deflation and
augmentation have been used in a variety of methods and settings. Typically,
deflation is combined with augmentation to compensate for the singularity of
the operator, but both techniques can be applied separately.
We introduce a framework of Krylov subspace methods that satisfy a Galerkin
condition. It includes the families of orthogonal residual (OR) and minimal
residual (MR) methods. We show that in this framework augmentation can be
achieved either explicitly or, equivalently, implicitly by projecting the
residuals appropriately and correcting the approximate solutions in a final
step. We study conditions for a breakdown of the deflated methods, and we show
several possibilities to avoid such breakdowns for the deflated MINRES method.
Numerical experiments illustrate properties of different variants of deflated
MINRES analyzed in this paper.
Comment: 24 pages, 3 figures
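A minimal sketch of the deflate-then-correct pattern described above is given below, using a Nicolaides/Saad-style deflated CG on a symmetric positive definite system. The paper itself treats MINRES and a general Galerkin framework; this simpler CG variant, the deflation space chosen here, and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def deflated_cg(A, b, Z, tol=1e-10, maxiter=200):
    """Deflated CG sketch: project out span(Z), run CG on the singular but
    consistent system P A x_hat = P b, then restore the deflated component.

    Z (n x k) spans the deflation subspace, typically approximate eigenvectors
    of A belonging to its smallest eigenvalues.  Note P A Z = 0 by construction,
    so the deflated operator is singular, as in the abstract's description."""
    AZ = A @ Z
    E = Z.T @ AZ                              # small k x k Galerkin matrix
    solveE = lambda y: np.linalg.solve(E, y)
    P = lambda v: v - AZ @ solveE(Z.T @ v)    # projector P = I - A Z E^{-1} Z^T

    x_hat = np.zeros_like(b)
    r = P(b)                                  # initial residual of the deflated system
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = P(A @ p)
        alpha = rs / (p @ Ap)
        x_hat += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new

    # Final correction step: x = Z E^{-1} Z^T b + P^T x_hat
    return Z @ solveE(Z.T @ b) + x_hat - Z @ solveE(AZ.T @ x_hat)

# Usage on a small SPD system, deflating the two smallest eigenvectors.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
w, V = np.linalg.eigh(A)
x = deflated_cg(A, b, V[:, :2])
print(np.linalg.norm(A @ x - b))              # residual norm, near the tolerance
```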