    Efficient Rank Reduction of Correlation Matrices

    Geometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established, along with an identification of whether a local minimum is a global minimum. An additional benefit of the geometric approach is that any weighted norm can be applied. The problem of finding the nearest low-rank correlation matrix occurs as part of the calibration of multi-factor interest rate market models to correlation. Comment: First version: 20 pages, 4 figures. Second version [changed content]: 21 pages, 6 figures.
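
    A classical baseline for this problem is eigenvalue truncation followed by rescaling the diagonal back to one. Below is a minimal sketch of that baseline in Python/NumPy; the function name and the 4x4 example matrix are purely illustrative, and this is not the paper's geometric optimisation algorithm:

        import numpy as np

        def truncate_to_low_rank_correlation(C, rank):
            """Approximate a correlation matrix C by a rank-`rank` correlation
            matrix: keep the largest eigenvalues, then rescale rows so the
            diagonal returns to 1.  Classical baseline, not the paper's method."""
            vals, vecs = np.linalg.eigh(C)                  # ascending eigenvalues
            idx = np.argsort(vals)[::-1][:rank]             # indices of the largest
            L = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))
            L /= np.linalg.norm(L, axis=1, keepdims=True)   # force unit diagonal
            return L @ L.T                                  # PSD, rank <= `rank`

        C = np.array([[1.0, 0.9, 0.7, 0.5],
                      [0.9, 1.0, 0.8, 0.6],
                      [0.7, 0.8, 1.0, 0.9],
                      [0.5, 0.6, 0.9, 1.0]])
        print(truncate_to_low_rank_correlation(C, rank=2))

    Because the rescaling step moves the truncated matrix away from the optimum in any fixed norm, the result is generally not the nearest low-rank correlation matrix, which is what motivates an optimisation approach.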

    Normal Factor Graphs and Holographic Transformations

    This paper stands at the intersection of two distinct lines of research. One line is "holographic algorithms," a powerful approach introduced by Valiant for solving various counting problems in computer science; the other is "normal factor graphs," an elegant framework proposed by Forney for representing codes defined on graphs. We introduce the notion of holographic transformations for normal factor graphs, and establish a very general theorem, called the generalized Holant theorem, which relates a normal factor graph to its holographic transformation. We show that the generalized Holant theorem on the one hand underlies the principle of holographic algorithms, and on the other hand reduces to a general duality theorem for normal factor graphs, a special case of which was first proved by Forney. In the course of our development, we formalize a new semantics for normal factor graphs, which highlights various linear algebraic properties that potentially enable the use of normal factor graphs as a linear algebraic tool. Comment: To appear in IEEE Trans. Inform. Theory.
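
    For orientation, in a normal factor graph every variable sits on an edge and appears in at most two local functions, and the graph represents the exterior function obtained by summing out the internal edge variables; this is the standard definition, sketched here in notation that may differ from the paper's:

        \[
          Z(\mathcal{G}) \;=\; \sum_{x_e,\; e \in E_{\mathrm{int}}} \;\; \prod_{v \in V} f_v\bigl(x_{\partial v}\bigr).
        \]

    A holographic transformation inserts an invertible matrix on one side of each internal edge and its inverse on the other, changing the local functions but leaving Z(\mathcal{G}) unchanged; roughly, this is the invariance that the generalized Holant theorem formalizes in much greater generality.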

    Approximate Hypergraph Coloring under Low-discrepancy and Related Promises

    A hypergraph is said to be $\chi$-colorable if its vertices can be colored with $\chi$ colors so that no hyperedge is monochromatic. $2$-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a $2$-colorable $k$-uniform hypergraph, it is NP-hard to find a $2$-coloring miscoloring fewer than a $2^{-k+1}$ fraction of hyperedges (which is achieved by a random $2$-coloring), and the best algorithms to color the hypergraph properly require $\approx n^{1-1/k}$ colors, approaching the trivial bound of $n$ as $k$ increases. In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a $2$-coloring with the fewest miscolored edges) and minimization (finding a proper coloring using the fewest colors) versions, when the input hypergraph is promised to have the following properties, stronger than $2$-colorability: (A) Low discrepancy: if the hypergraph has discrepancy $\ell \ll \sqrt{k}$, we give an algorithm to color it with $\approx n^{O(\ell^2/k)}$ colors. However, for the maximization version, we prove NP-hardness of finding a $2$-coloring miscoloring a smaller than $2^{-O(k)}$ (resp. $k^{-O(k)}$) fraction of the hyperedges when $\ell = O(\log k)$ (resp. $\ell = 2$). Assuming the UGC, we improve the latter hardness factor to $2^{-O(k)}$ for almost discrepancy-$1$ hypergraphs. (B) Rainbow colorability: if the hypergraph has a $(k-\ell)$-coloring such that each hyperedge is polychromatic with all these colors, we give a $2$-coloring algorithm that miscolors at most a $k^{-\Omega(k)}$ fraction of the hyperedges when $\ell \ll \sqrt{k}$, and complement this with a matching UG hardness result showing that when $\ell = \sqrt{k}$, it is hard to even beat the $2^{-k+1}$ bound achieved by a random coloring. Comment: Approx 201
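
    For reference, the discrepancy promise in (A) is the standard combinatorial discrepancy of the hypergraph $H = (V, E)$, a quantitative strengthening of $2$-colorability: discrepancy $\ell$ means some $\pm 1$ coloring leaves every hyperedge within $\ell$ of perfectly balanced, whereas $2$-colorability only forbids fully monochromatic hyperedges. In standard notation (not taken from the paper):

        \[
          \operatorname{disc}(H) \;=\; \min_{\chi : V \to \{-1,+1\}} \; \max_{e \in E} \; \Bigl| \sum_{v \in e} \chi(v) \Bigr|.
        \]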

    A framework for adaptive Monte-Carlo procedures

    Adaptive Monte Carlo methods are recent variance reduction techniques. In this work, we propose a mathematical setting which greatly relaxes the assumptions needed for the adaptive importance sampling techniques presented by Vazquez-Abad and Dufresne, Fu and Su, and Arouna. We establish the convergence and asymptotic normality of the adaptive Monte Carlo estimator under local assumptions which are easily verifiable in practice. We present one way of approximating the optimal importance sampling parameter using a randomly truncated stochastic algorithm. Finally, we apply this technique to some examples of valuation of financial derivatives.
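
    As a concrete illustration of the kind of scheme being analysed, the sketch below prices a Black-Scholes call by importance sampling with a mean shift theta on the driving normal, adapting theta by a projected stochastic-gradient step that descends an unbiased estimate of the estimator's second moment. This is a simplified sketch only: the fixed projection onto [0, 3] stands in for the randomly truncated procedure of the paper, and all names and parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        S0, K, r, sigma, T = 100.0, 120.0, 0.05, 0.2, 1.0

        def weighted_payoff(z, theta):
            """Discounted call payoff for Z ~ N(theta, 1), times the likelihood
            ratio dN(0,1)/dN(theta,1), so the estimator stays unbiased."""
            ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
            lr = np.exp(-theta * z + 0.5 * theta**2)
            return np.exp(-r * T) * np.maximum(ST - K, 0.0) * lr

        # Adaptive phase: descend an unbiased estimate of d/dtheta E[(payoff * LR)^2].
        theta = 0.0
        for n in range(1, 201):
            z = rng.normal(theta, 1.0, size=100)
            h = weighted_payoff(z, theta)
            grad = np.mean(h * h * (theta - z))        # score-function gradient estimate
            theta = float(np.clip(theta - (0.01 / n) * grad, 0.0, 3.0))

        # Estimation phase with the adapted parameter.
        z = rng.normal(theta, 1.0, size=100_000)
        samples = weighted_payoff(z, theta)
        half_width = 1.96 * samples.std() / np.sqrt(samples.size)
        print(f"theta = {theta:.3f}  price = {samples.mean():.4f} +/- {half_width:.4f}")

    Whatever value theta ends up at, the likelihood-ratio weight keeps the final estimator unbiased; the adaptation only affects its variance.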

    ASMs and Operational Algorithmic Completeness of Lambda Calculus

    We show that lambda calculus is a computation model which can simulate, step by step, any sequential deterministic algorithm for any computable function over integers or words or any datatype. More formally, given an algorithm above a family of computable functions (taken as primitive tools, i.e., as oracle-like functions for the algorithm), for every constant K big enough, each computation step of the algorithm can be simulated by exactly K successive reductions in a natural extension of lambda calculus with constants for functions in the above considered family. The proof is based on a fixed-point technique in lambda calculus and on Gurevich's sequential thesis, which allows one to identify sequential deterministic algorithms with Abstract State Machines. This extends to algorithms for partial computable functions in such a way that finite computations ending with exceptions are associated with finite reductions leading to terms with a particular, very simple feature. Comment: 37 pages.
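
    The flavour of the fixed-point technique can be seen in Turing's fixed-point combinator, where each unfolding of a recursive definition costs a fixed number of beta-reductions (two here); this is the classical combinator, included for orientation, not necessarily the exact encoding used in the paper:

        \[
          \Theta \;=\; \bigl(\lambda x.\lambda y.\; y\,(x\,x\,y)\bigr)\,\bigl(\lambda x.\lambda y.\; y\,(x\,x\,y)\bigr),
          \qquad
          \Theta\,F \;\to_\beta\; (\lambda y.\; y\,(\Theta\,y))\,F \;\to_\beta\; F\,(\Theta\,F).
        \]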

    Normalisation Control in Deep Inference via Atomic Flows

    We introduce 'atomic flows': graphs obtained from derivations by tracing atom occurrences and forgetting the logical structure. We study simple manipulations of atomic flows that correspond to complex reductions on derivations. This allows us to prove, for propositional logic, a new and very general normalisation theorem, which contains cut elimination as a special case. We operate in deep inference, which is more general than other syntactic paradigms, and where normalisation is more difficult to control. We argue that atomic flows are a significant technical advance for normalisation theory, because 1) the technique they support is largely independent of syntax; 2) indeed, it is largely independent of logical inference rules; 3) they constitute a powerful geometric formalism, which is more intuitive than syntax.

    Polynomial fixed-parameter algorithms: a case study for longest path on interval graphs

    We study the design of fixed-parameter algorithms for problems already known to be solvable in polynomial time. The main motivation is to get more efficient algorithms for problems with unattractive polynomial running times. Here, we focus on a fundamental graph problem: Longest Path; it is NP-hard in general but known to be solvable in O(n^4) time on n-vertex interval graphs. We show how to solve Longest Path on interval graphs, parameterized by the vertex deletion number k to proper interval graphs, in O(k^9 n) time. Notably, Longest Path is trivially solvable in linear time on proper interval graphs, and the parameter value k can be approximated up to a factor of 4 in linear time. From a more general perspective, we believe that parameterized complexity analysis of polynomial-time solvable problems offers fertile ground for future study across all sorts of algorithmic problems. It may enable a refined understanding of efficiency aspects of polynomial-time solvable problems, similar to what classical parameterized complexity analysis does for NP-hard problems.
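
    The "trivially solvable in linear time on proper interval graphs" remark rests on a classical fact: in a connected proper (unit) interval graph, ordering the vertices by left endpoints gives a Hamiltonian path, so a longest path is simply a largest connected component. A minimal Python sketch of that background fact, assuming a unit-interval representation is given (this is not the paper's O(k^9 n) algorithm):

        def longest_path_unit_intervals(left_endpoints):
            """Number of vertices on a longest path of the unit interval graph
            with intervals [x, x + 1] for x in left_endpoints.  Each connected
            component has a Hamiltonian path along the left-endpoint order, so
            the answer is the size of a largest component."""
            if not left_endpoints:
                return 0
            xs = sorted(left_endpoints)
            best = cur = 1
            for prev, nxt in zip(xs, xs[1:]):
                # Consecutive intervals overlap iff nxt <= prev + 1; a larger gap
                # means no edge crosses it, i.e. a new component starts.
                cur = cur + 1 if nxt <= prev + 1 else 1
                best = max(best, cur)
            return best

        print(longest_path_unit_intervals([0.0, 0.8, 1.5, 4.0, 4.6]))  # -> 3

    Apart from the sort, this runs in linear time; with the representation already sorted, it matches the linear-time remark in the abstract.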

    Algorithms for group isomorphism via group extensions and cohomology

    The isomorphism problem for finite groups of order n (GpI) has long been known to be solvable in $n^{\log n + O(1)}$ time, but only recently were polynomial-time algorithms designed for several interesting group classes. Inspired by recent progress, we revisit the strategy for GpI via the extension theory of groups. The extension theory describes how a normal subgroup N is related to G/N via G, and this naturally leads to a divide-and-conquer strategy that splits GpI into two subproblems: one regarding group actions on other groups, and one regarding group cohomology. When the normal subgroup N is abelian, this strategy is well-known. Our first contribution is to extend this strategy to handle the case when N is not necessarily abelian. This allows us to provide a unified explanation of all recent polynomial-time algorithms for special group classes. Guided by this strategy, to make further progress on GpI, we consider central-radical groups, proposed in Babai et al. (SODA 2011): the class of groups such that G mod its center has no abelian normal subgroups. This class is a natural extension of the group class considered by Babai et al. (ICALP 2012), namely those groups with no abelian normal subgroups. Following the above strategy, we solve GpI in $n^{O(\log\log n)}$ time for central-radical groups, and in polynomial time for several prominent subclasses of central-radical groups. We also solve GpI in $n^{O(\log\log n)}$ time for groups whose solvable normal subgroups are elementary abelian but not necessarily central. As far as we are aware, this is the first time there have been worst-case guarantees on an $n^{o(\log n)}$-time algorithm that tackles both aspects of GpI, actions and cohomology, simultaneously. Comment: 54 pages + 14-page appendix. Significantly improved presentation, with some new results.
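
    For orientation, the divide-and-conquer step alluded to above can be phrased through the short exact sequence

        \[
          1 \longrightarrow N \longrightarrow G \longrightarrow G/N \longrightarrow 1 .
        \]

    In the well-known abelian case, an extension of G/N by an abelian N with a fixed action of G/N on N is classified up to equivalence by a class in the second cohomology group $H^2(G/N, N)$, so testing isomorphism splits into comparing actions and comparing cohomology classes. This is a standard textbook fact, stated here for background rather than taken from the paper.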