    Interleaving schemes for multidimensional cluster errors

    We present two-dimensional and three-dimensional interleaving techniques for correcting two- and three-dimensional bursts (or clusters) of errors, where a cluster of errors is characterized by its area or volume. Correction of multidimensional error clusters is required in holographic storage, an emerging application of considerable importance. Our main contribution is the construction of efficient two-dimensional and three-dimensional interleaving schemes. The proposed schemes are based on t-interleaved arrays of integers, defined by the property that every connected component of area or volume t consists of distinct integers. In the two-dimensional case, our constructions are optimal: they have the lowest possible interleaving degree. That is, the resulting t-interleaved arrays contain the smallest possible number of distinct integers, hence minimizing the number of codewords required in an interleaving scheme. In general, we observe that the interleaving problem can be interpreted as a graph-coloring problem, and introduce the useful special class of lattice interleavers. We employ a result of Minkowski, dating back to 1904, to establish both upper and lower bounds on the interleaving degree of lattice interleavers in three dimensions. For the case t ≡ 0 (mod 6), the upper and lower bounds coincide, and the Minkowski lattice directly yields an optimal lattice interleaver. For t ≢ 0 (mod 6), we construct efficient lattice interleavers using approximations of the Minkowski lattice.
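
    To make the defining property concrete, here is a small brute-force sketch (our own illustration, not the paper's construction): it labels a grid with a simple linear "lattice" rule, whose parameters are assumed for the example, and checks that every connected component of area t consists of distinct labels.

```python
# Illustrative check of the t-interleaving property; the labeling rule and
# its parameters below are assumptions, not the paper's optimal construction.
from itertools import product

def lattice_labeling(width, height, a, b, degree):
    """Label cell (x, y) with (a*x + b*y) mod degree."""
    return {(x, y): (a * x + b * y) % degree
            for x, y in product(range(width), range(height))}

def connected_subsets(cells, t):
    """Enumerate all 4-connected subsets of exactly t cells."""
    cells = set(cells)
    found, seen = set(), set()

    def grow(current):
        key = frozenset(current)
        if key in seen:
            return
        seen.add(key)
        if len(current) == t:
            found.add(key)
            return
        for (x, y) in current:
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in cells and nb not in current:
                    grow(current | {nb})

    for c in cells:
        grow({c})
    return found

def is_t_interleaved(labeling, t):
    """True iff every connected component of area t has t distinct labels."""
    return all(len({labeling[c] for c in s}) == t
               for s in connected_subsets(labeling.keys(), t))

grid = lattice_labeling(8, 8, a=1, b=2, degree=5)  # assumed parameters
print(is_t_interleaved(grid, t=3))                 # True for this choice
```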

    Value constraints in the CLP scheme

    This paper addresses the question of how to incorporate constraint propagation into logic programming. A likely candidate is the CLP scheme, which allows one to exploit algorithmic opportunities while staying within logic programming semantics. CLP(ℛ) is an example: it combines logic programming with the algorithms for solving linear equalities and inequalities. In this paper we describe two contrasting constraint store management strategies within the CLP scheme. One leads to CLP(ℛ), while the other, which we call value constraints, supports consistency methods such as arc consistency and interval constraints. In value constraints, the infer step of the CLP scheme is the application of a consistency operator acting on the active constraints. We show how the continued application of the infer step can be optimized and that such optimization is equivalent to the Waltz algorithm for constraint propagation. Using the Lassez-Maher fixpoint theory of chaotic iterations, we show that the Waltz algorithm does not necessarily converge to a fixpoint, but that it does so in the finitary case.
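
    As a rough illustration of the infer step as a consistency operator, the sketch below runs Waltz-style propagation as a chaotic iteration of interval-narrowing operators to a fixpoint; the single constraint x + y = z, the variable names, and the initial bounds are assumptions made for the example, not details from the paper.

```python
# Minimal sketch of Waltz-style propagation by chaotic iteration of
# interval-narrowing operators (illustrative constraint and bounds only).

def narrow_sum(domains, x, y, z):
    """Narrow interval domains under the constraint x + y = z."""
    (xl, xu), (yl, yu), (zl, zu) = domains[x], domains[y], domains[z]
    new = {
        z: (max(zl, xl + yl), min(zu, xu + yu)),
        x: (max(xl, zl - yu), min(xu, zu - yl)),
        y: (max(yl, zl - xu), min(yu, zu - xl)),
    }
    changed = False
    for var, bounds in new.items():
        if bounds != domains[var]:
            domains[var] = bounds
            changed = True
    return changed

def waltz(domains, operators):
    """Apply narrowing operators until no domain changes (a fixpoint)."""
    pending = list(operators)
    while pending:
        op = pending.pop()
        if op(domains):
            pending = list(operators)  # something shrank: re-examine all
    return domains

doms = {"x": (0, 10), "y": (3, 7), "z": (8, 9)}
waltz(doms, [lambda d: narrow_sum(d, "x", "y", "z")])
print(doms)  # {'x': (1, 6), 'y': (3, 7), 'z': (8, 9)}
```

    The coarse requeue-everything loop reflects the chaotic-iteration view: over these finite integer bounds the narrowing operators can only shrink domains, so the iteration terminates, which matches the abstract's point that convergence to a fixpoint is guaranteed in the finitary case.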

    On the limiting distribution of the metric dimension for random forests

    The metric dimension of a graph G is the minimum size of a subset S of vertices of G such that all other vertices are uniquely determined by their distances to the vertices in S. In this paper we investigate the metric dimension for two different models of random forests, in each case obtaining normal limit distributions for this parameter. Comment: 22 pages, 5 figures.
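
    The definition can be made concrete with a small brute-force computation (illustrative only; the paper is about the limiting distribution of this parameter on random forests, not about computing it):

```python
# Brute-force illustration of the metric dimension definition on a tiny graph.
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Distances from source to every vertex of a connected graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k such that some k-set S of vertices resolves the graph."""
    vertices = list(adj)
    dist = {v: bfs_distances(adj, v) for v in vertices}
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            # Each vertex gets a vector of distances to the landmarks in S.
            signatures = {tuple(dist[s][v] for s in S) for v in vertices}
            if len(signatures) == len(vertices):  # all vectors distinct
                return k
    return len(vertices)

# A path on 4 vertices: a single endpoint landmark already resolves it.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(path))  # 1
```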

    Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

    It is well-known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure and that therein lies the computational bottleneck for SOS programs, as the SDPs generated by this procedure are large and costly to solve when the polynomials involved in the SOS programs have a large number of variables and degree. In this paper, we review SOS optimization techniques and present two new methods for improving their computational efficiency. The first method leverages the sparsity of the underlying SDP to obtain computational speed-ups. Further improvements can be obtained if the coefficients of the polynomials that describe the problem have a particular sparsity pattern, called chordal sparsity. The second method bypasses semidefinite programming altogether and relies instead on solving a sequence of more tractable convex programs, namely linear and second-order cone programs. This opens up the question of how well one can approximate the cone of SOS polynomials by second-order representable cones. In the last part of the paper, we present some recent negative results related to this question. Comment: Tutorial for CDC 201
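
    A tiny numerical sketch of the SOS-to-SDP connection, and of a cheaper LP-style sufficient condition of the kind used by such inner approximations, is given below; the polynomial, monomial basis, and Gram matrix are assumptions chosen for illustration, not taken from the paper.

```python
# p(x, y) = x^4 + 2*x^2*y^2 + y^4 is SOS because p = z^T Q z for the monomial
# vector z = (x^2, xy, y^2) and a positive semidefinite Gram matrix Q.
# (Illustrative polynomial and Gram matrix, not an example from the paper.)
import numpy as np

Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# SDP-style certificate: Q is positive semidefinite.
print(np.all(np.linalg.eigvalsh(Q) >= -1e-9))      # True

# Cheaper LP-checkable sufficient condition for PSD-ness, the idea behind
# diagonally-dominant inner approximations of the SOS cone:
diag = np.diag(Q)
off = np.sum(np.abs(Q), axis=1) - np.abs(diag)
print(np.all(diag >= 0) and np.all(diag >= off))   # True
```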

    A Novel SAT-Based Approach to the Task Graph Cost-Optimal Scheduling Problem

    The Task Graph Cost-Optimal Scheduling Problem consists in scheduling a certain number of interdependent tasks onto a set of heterogeneous processors (characterized by idle and running rates per time unit), minimizing the cost of the entire process. This paper provides a novel formulation for this scheduling puzzle, in which an optimal solution is computed through a sequence of Binate Covering Problems, hinged within a Bounded Model Checking paradigm. In this approach, each covering instance, providing a min-cost trace for a given schedule depth, can be solved with several strategies, resorting to Minimum-Cost Satisfiability solvers or Pseudo-Boolean Optimization tools. Unfortunately, all direct resolution methods show very low efficiency and scalability. As a consequence, we introduce a specialized method to solve the same sequence of problems, based on a traditional all-solution SAT solver. This approach follows the "circuit cofactoring" strategy, as it exploits a powerful technique to capture a large set of solutions for any new SAT counter-example. The overall method is completed with a branch-and-bound heuristic which evaluates lower and upper bounds of the schedule length, to reduce the state space that has to be visited. Our results show that the proposed strategy significantly improves on the blind binate covering schema, and it outperforms general-purpose state-of-the-art tools.
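
    As a toy illustration of what a binate covering instance looks like, the sketch below brute-forces a minimum-cost 0/1 assignment over clauses whose literals may be positive or negative; the costs and clauses are invented for the example and are unrelated to the paper's formulation or benchmarks.

```python
# Toy binate covering instance: minimize a linear cost over 0/1 variables
# subject to clauses whose literals may appear in either polarity.
from itertools import product

cost = {"a": 3, "b": 2, "c": 4}
clauses = [                          # each clause: list of (variable, polarity)
    [("a", True), ("b", True)],      # a or b
    [("b", False), ("c", True)],     # not b or c
]

best = None
for bits in product([0, 1], repeat=len(cost)):
    assign = dict(zip(cost, bits))
    ok = all(any(assign[v] == int(pol) for v, pol in cl) for cl in clauses)
    total = sum(cost[v] for v in cost if assign[v])
    if ok and (best is None or total < best[0]):
        best = (total, assign)

print(best)  # (3, {'a': 1, 'b': 0, 'c': 0}): minimum-cost satisfying assignment
```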

    The achievable performance of convex demixing

    Demixing is the problem of identifying multiple structured signals from a superimposed, undersampled, and noisy observation. This work analyzes a general framework, based on convex optimization, for solving demixing problems. When the constituent signals follow a generic incoherence model, this analysis leads to precise recovery guarantees. These results admit an attractive interpretation: each signal possesses an intrinsic degrees-of-freedom parameter, and demixing can succeed if and only if the dimension of the observation exceeds the total degrees of freedom present in the observation.
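
    The dimension-counting interpretation can be caricatured in plain linear algebra (an assumed, highly simplified setting, not the paper's convex-optimization framework): two generic subspace-structured signals can be separated from their sum whenever the ambient dimension is at least the total number of degrees of freedom.

```python
# Linear-algebra caricature of the degrees-of-freedom count (assumed setting).
import numpy as np

rng = np.random.default_rng(0)
d, k1, k2 = 20, 8, 9                      # ambient dimension d >= k1 + k2

A = rng.standard_normal((d, k1))          # basis for the first structure
B = rng.standard_normal((d, k2))          # basis for the second structure
x1 = A @ rng.standard_normal(k1)
x2 = B @ rng.standard_normal(k2)
z = x1 + x2                               # superimposed observation

# Recover both coefficient vectors jointly by least squares on the stacked basis.
coef, *_ = np.linalg.lstsq(np.hstack([A, B]), z, rcond=None)
x1_hat = A @ coef[:k1]
print(np.allclose(x1_hat, x1))            # True (generically) when d >= k1 + k2
```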

    Limitations on Quantum Key Repeaters

    A major application of quantum communication is the distribution of entangled particles for use in quantum key distribution (QKD). Due to noise in the communication line, QKD is in practice limited to a distance of a few hundred kilometres, and can only be extended to longer distances by use of a quantum repeater, a device which performs entanglement distillation and quantum teleportation. The existence of noisy entangled states that are undistillable but nevertheless useful for QKD raises the question of the feasibility of a quantum key repeater, which would work beyond the limits of entanglement distillation, hence possibly tolerating higher noise levels than existing protocols. Here we exhibit fundamental limits on such a device in the form of bounds on the rate at which it may extract secure key. As a consequence, we give examples of states suitable for QKD but unsuitable for the most general quantum key repeater protocol. Comment: 11+38 pages, 4 figures. Statements for exact p-bits weakened, as the non-locking bound on measured relative entropy distance contained an error.

    Symmetry in Graph Theory

    This book contains the successful invited submissions to a Special Issue of Symmetry on the subject of "Graph Theory". Although symmetry has always played an important role in Graph Theory, in recent years this role has increased significantly in several branches of this field, including but not limited to Gromov hyperbolic graphs, the metric dimension of graphs, domination theory, and topological indices. This Special Issue includes contributions addressing new results on these topics, both from a theoretical and an applied point of view.

    On base sizes for primitive groups of product type
