102,908 research outputs found
Efficient Rank Reduction of Correlation Matrices
Geometric optimisation algorithms are developed that efficiently find the
nearest low-rank correlation matrix. We show, in numerical tests, that our
methods compare favourably to the existing methods in the literature. The
connection with the Lagrange multiplier method is established, along with an
identification of whether a local minimum is a global minimum. An additional
benefit of the geometric approach is that any weighted norm can be applied. The
problem of finding the nearest low-rank correlation matrix occurs as part of
the calibration of multi-factor interest rate market models to correlation.
Comment: First version: 20 pages, 4 figures. Second version [changed content]: 21 pages, 6 figures
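The rank-reduction problem in this abstract can be illustrated with a simple baseline: alternate a rank-d eigenvalue truncation with a rescaling back to unit diagonal. This is only a sketch of the classical alternating heuristic, not the paper's geometric optimisation; the function name and the toy matrix are mine.

```python
import numpy as np

def nearest_low_rank_correlation(C, d, iters=200):
    """Alternating heuristic for a nearby rank-d correlation matrix:
    project onto rank-d PSD matrices, then rescale rows to unit diagonal."""
    X = C.copy()
    for _ in range(iters):
        w, V = np.linalg.eigh(X)                  # eigendecomposition (symmetric)
        idx = np.argsort(w)[::-1][:d]             # keep the d largest eigenvalues
        B = V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))  # n x d factor
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        B = B / np.where(norms > 0, norms, 1.0)   # unit-length rows -> unit diagonal
        X = B @ B.T                               # rank <= d, diagonal exactly 1
    return X

# toy 4x4 correlation matrix, reduced to rank 2
C = np.array([[1.0, 0.9, 0.7, 0.5],
              [0.9, 1.0, 0.8, 0.6],
              [0.7, 0.8, 1.0, 0.7],
              [0.5, 0.6, 0.7, 1.0]])
C2 = nearest_low_rank_correlation(C, d=2)
print(np.diag(C2), np.linalg.matrix_rank(C2, tol=1e-8))
```

The geometric methods of the abstract instead optimise directly over the manifold of factor matrices B and support arbitrary weighted norms; the alternation above is only a point of comparison.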
Normal Factor Graphs and Holographic Transformations
This paper stands at the intersection of two distinct lines of research. One
line is "holographic algorithms," a powerful approach introduced by Valiant for
solving various counting problems in computer science; the other is "normal
factor graphs," an elegant framework proposed by Forney for representing codes
defined on graphs. We introduce the notion of holographic transformations for
normal factor graphs, and establish a very general theorem, called the
generalized Holant theorem, which relates a normal factor graph to its
holographic transformation. We show that the generalized Holant theorem on the
one hand underlies the principle of holographic algorithms, and on the other
hand reduces to a general duality theorem for normal factor graphs, a special
case of which was first proved by Forney. In the course of our development, we
formalize a new semantics for normal factor graphs, which highlights various
linear algebraic properties that potentially enable the use of normal factor
graphs as a linear algebraic tool.
Comment: To appear in IEEE Trans. Inform. Theory
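The "exterior function" semantics of a normal factor graph, and the Holant-style invariance under a holographic transformation, can be illustrated on a two-factor toy example. All names here are mine, and pairing T with inv(T).T across an edge is just one concrete instance of the paper's far more general theorem.

```python
import numpy as np
from itertools import product

def exterior(factors, domain=2):
    """Sum-of-products ("exterior function") of a normal factor graph:
    each edge variable is shared by exactly two local functions.
    factors: list of (edge_names, numpy table indexed by those edges)."""
    edges = sorted({e for names, _ in factors for e in names})
    total = 0.0
    for vals in product(range(domain), repeat=len(edges)):
        env = dict(zip(edges, vals))
        term = 1.0
        for names, table in factors:
            term *= table[tuple(env[e] for e in names)]
        total += term
    return total

eq = np.eye(2)                       # binary EQUALITY function
g = np.array([[1., 2.], [3., 4.]])   # arbitrary second factor
Z = exterior([(("e1", "e2"), eq), (("e1", "e2"), g)])   # g[0,0] + g[1,1] = 5

# Holant-style invariance: put T on one end of each edge and the dual
# transform inv(T).T on the other end; the exterior function is unchanged.
T = np.array([[1., 1.], [1., -1.]])
eq_t = np.einsum('ab,bj->aj', T, eq)
g_t = np.einsum('ab,bj->aj', np.linalg.inv(T).T, g)
Z_t = exterior([(("e1", "e2"), eq_t), (("e1", "e2"), g_t)])
print(Z, Z_t)   # the two sums agree
```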
Approximate Hypergraph Coloring under Low-discrepancy and Related Promises
A hypergraph is said to be 2-colorable if its vertices can be colored with 2
colors so that no hyperedge is monochromatic. 2-colorability is a fundamental
property (called Property B) of hypergraphs and is extensively studied in
combinatorics. Algorithmically, however, given a 2-colorable k-uniform
hypergraph, it is NP-hard to find a 2-coloring miscoloring fewer than a 2^{1-k}
fraction of hyperedges (which is achieved by a random 2-coloring), and the best
algorithms to color the hypergraph properly require n^{Omega(1)} colors,
approaching the trivial bound of n as k increases.
In this work, we study the complexity of approximate hypergraph coloring, for
both the maximization (finding a 2-coloring with the fewest miscolored edges)
and minimization (finding a proper coloring using the fewest colors) versions,
when the input hypergraph is promised to have the following properties stronger
than 2-colorability:
(A) Low-discrepancy: If the hypergraph has sufficiently low discrepancy, we
give an algorithm to color it with few colors. However, for the maximization
version, we prove NP-hardness of finding a 2-coloring that miscolors fewer
than a certain small fraction of the hyperedges, with the fraction depending
on the discrepancy regime. Assuming the UGC, we improve the latter hardness
factor for hypergraphs of almost minimal discrepancy.
(B) Rainbow colorability: If the hypergraph has a coloring under which each
hyperedge is polychromatic (contains every color), we give a 2-coloring
algorithm that miscolors only a small fraction of the hyperedges when the
number of colors is large enough, and complement this with a matching UG
hardness result showing that below that threshold it is hard to even beat the
bound achieved by a random coloring.
Comment: Approx 201
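The random-coloring baseline mentioned in this abstract is easy to check empirically: under a uniformly random 2-coloring, each k-uniform hyperedge is monochromatic with probability 2^{1-k}. A small simulation (all names and parameters are mine):

```python
import random

def miscolored_fraction(edges, coloring):
    """Fraction of hyperedges that are monochromatic under the 2-coloring."""
    mono = sum(1 for e in edges if len({coloring[v] for v in e}) == 1)
    return mono / len(edges)

random.seed(0)
n, k, m = 1000, 4, 20000
# random k-uniform hypergraph on n vertices with m hyperedges
edges = [tuple(random.sample(range(n), k)) for _ in range(m)]
# uniformly random 2-coloring of the vertices
coloring = {v: random.randrange(2) for v in range(n)}
frac = miscolored_fraction(edges, coloring)
print(frac)   # close to 2**(1 - k) = 0.125
```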
A framework for adaptive Monte-Carlo procedures
Adaptive Monte Carlo methods are recent variance reduction techniques. In
this work, we propose a mathematical setting which greatly relaxes the
assumptions needed by the adaptive importance sampling techniques presented
by Vazquez-Abad and Dufresne, Fu and Su, and Arouna. We establish the
convergence and asymptotic normality of the adaptive Monte Carlo estimator
under local assumptions which are easily verifiable in practice. We present one
way of approximating the optimal importance sampling parameter using a randomly
truncated stochastic algorithm. Finally, we apply this technique to some
examples of the valuation of financial derivatives.
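One way to make the adaptive importance sampling idea concrete: learn a Gaussian drift by stochastic approximation, then estimate with the learned drift. This is a minimal sketch with a hard projection standing in for the randomly truncated algorithm of the paper; the payoff, constants, and step sizes are my own choices. For f(x) = e^x the variance-optimal drift is theta = 1, where the weighted estimator is exactly constant.

```python
import math
import random

random.seed(42)
f = math.exp   # toy payoff f(X) = e^X, X ~ N(0,1); E[f(X)] = e^{1/2}

# Phase 1: Robbins-Monro search for a drift theta reducing the variance
# of the importance-sampled estimator.
theta = 0.0
for n in range(1, 5001):
    z = random.gauss(0.0, 1.0)
    # sample gradient (in theta) of the estimator's second moment
    grad = f(z) ** 2 * math.exp(-theta * z + theta ** 2 / 2) * (theta - z)
    grad = max(-50.0, min(50.0, grad))   # clip heavy-tailed samples
    theta -= grad / (100.0 + n)          # decreasing step size
    theta = max(-3.0, min(3.0, theta))   # crude projection onto a compact set

# Phase 2: importance sampling with the learned drift; the estimator
# f(Z + theta) * exp(-theta*Z - theta^2/2) is unbiased for any fixed theta.
m = 100_000
total = 0.0
for _ in range(m):
    z = random.gauss(0.0, 1.0)
    total += f(z + theta) * math.exp(-theta * z - theta ** 2 / 2)
est = total / m
print(round(theta, 2), round(est, 4))   # theta near 1, est near e^0.5
```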
ASMs and Operational Algorithmic Completeness of Lambda Calculus
We show that lambda calculus is a computation model which can step by step
simulate any sequential deterministic algorithm for any computable function
over integers, words, or any datatype. More formally, given an algorithm over
a family of computable functions (taken as primitive tools, i.e., a kind of
oracle functions for the algorithm), for every constant K big enough, each
computation step of the algorithm can be simulated by exactly K successive
reductions in a natural extension of lambda calculus with constants for
functions in the above considered family. The proof is based on a fixed-point
technique in lambda calculus and on Gurevich's Sequential Thesis, which allows
one to identify sequential deterministic algorithms with Abstract State
Machines. This extends to algorithms for partial computable functions in such
a way that finite computations ending with exceptions are associated with
finite reductions leading to terms with a particular, very simple feature.
Comment: 37 pages
Normalisation Control in Deep Inference via Atomic Flows
We introduce `atomic flows': they are graphs obtained from derivations by
tracing atom occurrences and forgetting the logical structure. We study simple
manipulations of atomic flows that correspond to complex reductions on
derivations. This allows us to prove, for propositional logic, a new and very
general normalisation theorem, which contains cut elimination as a special
case. We operate in deep inference, which is more general than other syntactic
paradigms, and where normalisation is more difficult to control. We argue that
atomic flows are a significant technical advance for normalisation theory,
because 1) the technique they support is largely independent of syntax; 2)
indeed, it is largely independent of logical inference rules; 3) they
constitute a powerful geometric formalism, which is more intuitive than syntax.
Polynomial fixed-parameter algorithms: a case study for longest path on interval graphs
We study the design of fixed-parameter algorithms for problems already known to be solvable in polynomial time.
The main motivation is to get more efficient algorithms for problems with unattractive polynomial running times. Here, we focus on a fundamental graph problem: Longest Path; it is NP-hard in general but known to be solvable in O(n^4) time on n-vertex interval graphs. We show how to solve Longest Path on interval graphs, parameterized by the vertex deletion number k to proper interval graphs, in O(k^9 n) time. Notably, Longest Path is trivially solvable in linear time on proper interval graphs, and the parameter value k can be approximated up to a factor of 4 in linear time. From a more general perspective, we believe that using parameterized complexity analysis for polynomial-time solvable problems offers very fertile ground for future studies of all sorts of algorithmic problems. It may enable a refined understanding of efficiency aspects for polynomial-time solvable problems, similar to what classical parameterized complexity analysis does for NP-hard problems.
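The remark that Longest Path is trivially linear on proper interval graphs can be sketched via the classical fact that every connected proper interval graph has a Hamiltonian path, so the longest path is just the largest connected component. A hedged sketch assuming closed intervals and a proper interval representation (sorting makes this O(n log n); it is linear when endpoints come pre-sorted); the function name is mine.

```python
def longest_path_proper_interval(intervals):
    """Longest path (vertex count) in a proper interval graph given its
    interval representation: the size of the largest connected component,
    since every connected proper interval graph has a Hamiltonian path."""
    ivs = sorted(intervals)          # sweep by left endpoint
    best = cur = 0
    reach = float('-inf')            # rightmost endpoint seen in this component
    for l, r in ivs:
        if l > reach:                # gap: a new connected component starts
            cur = 0
        cur += 1
        reach = max(reach, r)
        best = max(best, cur)
    return best

# a component of three pairwise-linked intervals plus an isolated vertex
print(longest_path_proper_interval([(0, 2), (1, 3), (2, 4), (10, 11)]))  # 3
```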
Algorithms for group isomorphism via group extensions and cohomology
The isomorphism problem for finite groups of order n (GpI) has long been
known to be solvable in n^{O(log n)} time, but only recently were
polynomial-time algorithms designed for several interesting group classes.
Inspired by recent progress, we revisit the strategy for GpI via the extension
theory of groups.
The extension theory describes how a normal subgroup N is related to G/N via
G, and this naturally leads to a divide-and-conquer strategy that splits GpI
into two subproblems: one regarding group actions on other groups, and one
regarding group cohomology. When the normal subgroup N is abelian, this
strategy is well-known. Our first contribution is to extend this strategy to
handle the case when N is not necessarily abelian. This allows us to provide a
unified explanation of all recent polynomial-time algorithms for special group
classes.
Guided by this strategy, to make further progress on GpI, we consider
central-radical groups, proposed in Babai et al. (SODA 2011): the class of
groups such that G mod its center has no abelian normal subgroups. This class
is a natural extension of the group class considered by Babai et al. (ICALP
2012), namely those groups with no abelian normal subgroups. Following the
above strategy, we solve GpI in n^{O(log log n)} time for central-radical
groups, and in polynomial time for several prominent subclasses of
central-radical groups. We also solve GpI in n^{O(log log n)} time for
groups whose solvable normal subgroups are elementary abelian but not
necessarily central. As far as we are aware, this is the first time there have
been worst-case guarantees on an n^{o(log n)}-time algorithm that tackles
both aspects of GpI, actions and cohomology, simultaneously.
Comment: 54 pages + 14-page appendix. Significantly improved presentation, with some new results