Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case or average-case analysis gives us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates for some of their drawbacks. Despite its success for the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models.
Comment: to be presented at MFCS 201
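The definition underlying the abstract above, that smoothed complexity takes the worst case over inputs of the expected cost under small random perturbations, can be illustrated with a toy sketch. Everything here (the cost function, the perturbation scale sigma, the sample count) is an invented example, not from the paper:

```python
import numpy as np

# Toy illustration of the smoothed-analysis viewpoint (invented example):
# cost(x) is expensive only when two entries of x coincide exactly, a
# measure-zero worst case that a continuous perturbation destroys almost surely.
def cost(x):
    n = len(x)
    return n * n if len(set(x)) < n else n  # worst case: duplicate entries

rng = np.random.default_rng(0)
n, sigma = 50, 0.01
adversarial = np.zeros(n)                   # adversary picks an all-duplicates input

worst_case = cost(tuple(adversarial))       # classical worst-case cost: n^2
# Smoothed cost: expected cost after a sigma-Gaussian perturbation of the
# adversary's input, estimated here by Monte Carlo over perturbations.
smoothed = np.mean([cost(tuple(adversarial + sigma * rng.standard_normal(n)))
                    for _ in range(200)])
```

The worst case costs n^2 = 2500, while the smoothed cost collapses to n = 50, since perturbed real entries are almost surely distinct. This gap between worst-case and smoothed behavior is what the proposed complexity classes are meant to capture.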
Application of Operator Splitting Methods in Finance
Financial derivatives pricing aims to find the fair value of a financial
contract on an underlying asset. Here we consider option pricing in the partial
differential equations framework. The contemporary models lead to
one-dimensional or multidimensional parabolic problems of the
convection-diffusion type and generalizations thereof. An overview of various
operator splitting methods is presented for the efficient numerical solution of
these problems.
Splitting schemes of the Alternating Direction Implicit (ADI) type are
discussed for multidimensional problems, e.g. given by stochastic volatility
(SV) models. For jump models Implicit-Explicit (IMEX) methods are considered
which efficiently treat the nonlocal jump operator. For American options an
easy-to-implement operator splitting method is described for the resulting
linear complementarity problems.
Numerical experiments are presented to illustrate the actual stability and
convergence of the splitting schemes. Here European and American put options
are considered under four asset price models: the classical Black-Scholes
model, the Merton jump-diffusion model, the Heston SV model, and the Bates SV
model with jumps.
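The IMEX idea mentioned above, treating the stiff (here, diffusion) part implicitly and the remainder explicitly, can be sketched on a 1D convection-diffusion model problem. The coefficients, grid, and kinked payoff-like initial profile are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Minimal IMEX-Euler sketch for u_t = a*u_xx + b*u_x on (0, 1) with
# homogeneous Dirichlet boundaries (a toy stand-in for the pricing PDEs).
a, b = 0.05, 1.0
m = 100                                   # number of interior grid points
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)

# Second difference (diffusion) and centered first difference (convection).
off = np.ones(m - 1)
D = a / h**2 * (np.diag(off, 1) - 2.0 * np.eye(m) + np.diag(off, -1))
C = b / (2.0 * h) * (np.diag(off, 1) - np.diag(off, -1))

dt, steps = 1e-3, 200
u = np.maximum(x - 0.5, 0.0)              # kinked, payoff-like initial data

# IMEX Euler: stiff diffusion implicit, nonstiff convection explicit,
# so only the (here dense, in practice tridiagonal) diffusion system is solved.
A = np.eye(m) - dt * D
for _ in range(steps):
    u = np.linalg.solve(A, u + dt * (C @ u))
```

In practice one would exploit the tridiagonal structure of `A` (e.g. a Thomas solve) instead of a dense `solve`; the point of the sketch is only the implicit/explicit splitting of the two operators.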
Least Squares Ranking on Graphs
Given a set of alternatives to be ranked, and some pairwise comparison data,
ranking is a least squares computation on a graph. The vertices are the
alternatives, and the edge values comprise the comparison data. The basic idea
is very simple and old: come up with values on vertices such that their
differences match the given edge data. Since an exact match will usually be
impossible, one settles for matching in a least squares sense. This formulation
was first described by Leake in 1976 for ranking football teams and appears as
an example in Professor Gilbert Strang's classic linear algebra textbook. If
one is willing to look into the residual a little further, then the problem
really comes alive, as shown effectively by the remarkable recent paper of
Jiang et al. With or without this twist, the humble least squares problem on
graphs has far-reaching connections with many current areas of research. These
connections are to theoretical computer science (spectral graph theory, and
multilevel methods for graph Laplacian systems); numerical analysis (algebraic
multigrid, and finite element exterior calculus); other mathematics (Hodge
decomposition, and random clique complexes); and applications (arbitrage, and
ranking of sports teams). Not all of these connections are explored in this
paper, but many are. The underlying ideas are easy to explain, requiring only
the four fundamental subspaces from elementary linear algebra. One of our aims
is to explain these basic ideas and connections, to get researchers in many
fields interested in this topic. Another aim is to use our numerical
experiments for guidance on selecting methods and exposing the need for further
development.
Comment: Added missing references, comparison of linear solvers overhauled,
conclusion section added, some new figures added
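The basic formulation described in the abstract above, finding vertex values whose differences best match the edge comparison data in a least-squares sense, fits in a few lines. The graph, margins, and sign convention below are an invented example, not data from the paper:

```python
import numpy as np

# Hypothetical example: 4 alternatives and 5 pairwise comparisons; y[k] is the
# observed margin on edge (i, j), ideally equal to the rating difference x_j - x_i.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
y = np.array([1.0, 2.5, 1.2, 3.0, 1.8])

n = 4
B = np.zeros((len(edges), n))             # edge-vertex incidence matrix
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = -1.0, 1.0          # row k encodes the difference x_j - x_i

# An exact match B x = y is usually impossible (cycles are inconsistent), so we
# minimize ||B x - y||^2. Ratings are determined only up to an additive constant
# (the constant vector spans B's nullspace on a connected graph).
x, *_ = np.linalg.lstsq(B, y, rcond=None)
x -= x.mean()                             # fix the gauge: ratings sum to zero
ranking = np.argsort(-x)                  # highest-rated alternative first
```

The residual `B @ x - y` is exactly the object that "comes alive" in the Hodge-theoretic view mentioned above: its structure separates locally inconsistent comparison data from globally cyclic inconsistency.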
Eccentricity evolution of giant planet orbits due to circumstellar disk torques
The extrasolar planets discovered to date possess unexpected orbital
elements. Most orbit their host stars with larger eccentricities and smaller
semi-major axes than similarly sized planets in our own solar system do. It is
generally agreed that the interaction between giant planets and circumstellar
disks (Type II migration) drives these planets inward to small radii, but the
effect of these same disks on orbital eccentricity, e, is controversial.
Several recent analytic calculations suggest that disk-planet interactions can
excite eccentricity, while numerical studies generally produce eccentricity
damping. This paper addresses this controversy using a quasi-analytic approach,
drawing on several preceding analytic studies. This work refines the current
treatment of eccentricity evolution by removing several approximations from the
calculation of disk torques. We encounter neither uniform damping nor uniform
excitation of orbital eccentricity, but rather a function de/dt that varies in
both sign and magnitude depending on eccentricity and other solar system
properties. Most significantly, we find that for every combination of disk and
planet properties investigated herein, corotation torques produce negative
values of de/dt for some range in e within the interval [0.1, 0.5]. If
corotation torques are saturated, this region of eccentricity damping
disappears, and excitation occurs on a short timescale of less than 0.08 Myr.
Thus, our study does not produce eccentricity excitation on a timescale of a
few Myr -- we obtain either eccentricity excitation on a short time scale, or
eccentricity damping on a longer time scale. Finally, we discuss the
implications of this result for producing the observed range in extrasolar
planet eccentricity.
Comment: 24 pages including 13 figures; accepted to Icarus
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication and the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3 33 pages; typos corrected and references added