
    All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs

    We describe algorithms, based on Avis and Fukuda's reverse search paradigm, for listing all maximal independent sets in a sparse graph in polynomial time and delay per output. For bounded degree graphs, our algorithms take constant time per set generated; for minor-closed graph families, the time is O(n) per set, and for more general sparse graph families we achieve subquadratic time per set. We also describe new data structures for maintaining a dynamic vertex set S in a sparse or minor-closed graph family, and querying the number of vertices not dominated by S; for minor-closed graph families the time per update is constant, while it is sublinear for any sparse graph family. We can also maintain a dynamic vertex set in an arbitrary m-edge graph and test the independence of the maintained set in time O(sqrt m) per update. We use the domination data structures as part of our enumeration algorithms.
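
    As a point of reference, the following is a minimal Python sketch of the enumeration task: it lists all maximal independent sets by running a Bron-Kerbosch-style recursion on the complement graph. This is a brute-force illustration, not the paper's reverse-search algorithm, and all names in it are ours.

        def maximal_independent_sets(adj):
            """Yield every maximal independent set of a graph.

            adj: list of sets; adj[v] is the set of neighbours of vertex v.
            Bron-Kerbosch run on the complement graph, since the independent
            sets of G are exactly the cliques of the complement of G.
            """
            n = len(adj)

            def bk(current, candidates, excluded):
                if not candidates and not excluded:
                    yield set(current)   # current is maximal: nothing can extend it
                    return
                for v in list(candidates):
                    # In the complement, v's neighbours are its non-neighbours in G.
                    yield from bk(current | {v},
                                  candidates - adj[v] - {v},
                                  excluded - adj[v] - {v})
                    candidates.remove(v)
                    excluded.add(v)

            yield from bk(set(), set(range(n)), set())

        # Path 0-1-2-3: the maximal independent sets are {0,2}, {0,3}, {1,3}.
        path = [{1}, {0, 2}, {1, 3}, {2}]
        print(sorted(sorted(s) for s in maximal_independent_sets(path)))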

    A general method for common intervals

    Given an elementary chain on a vertex set V, seen as a labelling of V by the set {1, ..., n = |V|}, and another discrete structure over V, say a graph G, the problem of common intervals is to compute the induced subgraphs G[I] such that I is an interval of [1, n] and G[I] satisfies some property Pi (for example Pi = "being connected"). This kind of problem arises in comparative genomics in bioinformatics, mainly when the graph G is a chain or a tree (Heber and Stoye 2001, Heber and Savage 2005, Bergeron et al 2008). When the family of intervals is closed under intersection, we present the combination of two approaches, namely the idea of potential beginning developed in Uno and Yagiura 2000 and Bui-Xuan et al 2005, and the notion of generator as defined in Bergeron et al 2008. This yields a very simple generic algorithm to compute all common intervals, which gives optimal algorithms in various applications. For example, in the case where G is a tree, our framework yields the first linear time algorithms for the two properties "being connected" and "being a path". In the case where G is a chain, the problem is known as common intervals of two permutations (Uno and Yagiura 2000); our algorithm provides not only the set of all common intervals but also, with some easy modifications, a tree structure that represents this set.
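
    For intuition, here is a short Python sketch for the special case the abstract mentions, common intervals of two permutations. It uses the well-known max-minus-min characterisation in quadratic time; it is not the generator-based algorithm of the paper, and the function name is ours.

        def common_intervals(p1, p2):
            """Return the common intervals (of length >= 2) of two permutations
            of {0, ..., n-1}, as pairs (i, j) of positions in p1.

            An interval p1[i..j] is common when the positions of its elements
            in p2 are contiguous, i.e. max(pos) - min(pos) == j - i.
            """
            n = len(p1)
            pos2 = [0] * n
            for idx, v in enumerate(p2):
                pos2[v] = idx
            result = []
            for i in range(n):
                lo = hi = pos2[p1[i]]
                for j in range(i + 1, n):
                    lo = min(lo, pos2[p1[j]])
                    hi = max(hi, pos2[p1[j]])
                    if hi - lo == j - i:   # elements occupy an interval of p2
                        result.append((i, j))
            return result

        print(common_intervals([0, 1, 2, 3, 4], [2, 0, 1, 4, 3]))
        # [(0, 1), (0, 2), (0, 4), (3, 4)]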

    Arboricity, h-Index, and Dynamic Algorithms

    In this paper we present a modification of a technique by Chiba and Nishizeki [Chiba and Nishizeki: Arboricity and Subgraph Listing Algorithms, SIAM J. Comput. 14(1), pp. 210--223 (1985)]. Based on it, we design a data structure suitable for dynamic graph algorithms. We employ the data structure to formulate new algorithms for several problems, including counting subgraphs of four vertices and recognition of diamond-free graphs, cop-win graphs, and strongly chordal graphs, among others. We improve the time complexity for graphs with low arboricity or low h-index.
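
    As background for the degree-based machinery, here is a small static Python sketch: computing the h-index of a degree sequence, and counting triangles by orienting each edge towards its higher-degree endpoint in the spirit of Chiba and Nishizeki. The paper's dynamic data structure is not reproduced here, and all names are ours.

        def h_index(degrees):
            """Largest h such that at least h vertices have degree >= h."""
            degrees = sorted(degrees, reverse=True)
            h = 0
            for i, d in enumerate(degrees):
                if d >= i + 1:
                    h = i + 1
            return h

        def count_triangles(adj):
            """Count triangles by orienting each edge towards its endpoint of
            higher degree (ties broken by vertex id); in a low-arboricity
            graph every vertex then has few out-neighbours."""
            n = len(adj)
            key = lambda v: (len(adj[v]), v)
            out = [{u for u in adj[v] if key(u) > key(v)} for v in range(n)]
            triangles = 0
            for v in range(n):
                for u in out[v]:
                    # each common out-neighbour closes one triangle, counted once
                    triangles += len(out[v] & out[u])
            return triangles

        # K4 has h-index 3 and exactly 4 triangles.
        k4 = [{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}]
        print(h_index([len(a) for a in k4]), count_triangles(k4))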

    Model-based Boosting in R: A Hands-on Tutorial Using the R Package mboost

    We provide a detailed hands-on tutorial for the R add-on package mboost. The package implements boosting for optimizing general risk functions, utilizing component-wise (penalized) least squares estimates as base-learners for fitting various kinds of generalized linear and generalized additive models to potentially high-dimensional data. We give the theoretical background and demonstrate how mboost can be used to fit interpretable models of varying complexity. As a running example throughout the tutorial, we use mboost to predict body fat from anthropometric measurements.
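
    To make the mechanism concrete, here is a hedged NumPy sketch of component-wise L2 boosting with simple linear base-learners, the conceptual core behind glmboost-style fits. It mimics the idea only and is not mboost's interface; the function and its parameters are our own invention, and the columns of X are centred before fitting.

        import numpy as np

        def componentwise_l2_boost(X, y, nu=0.1, mstop=250):
            """Gradient boosting for squared-error loss where each base-learner
            is a least-squares fit on a single (centred) column of X.
            At every step only the best-fitting component is updated."""
            n, p = X.shape
            coef = np.zeros(p)
            offset = y.mean()                  # model starts at the mean
            fit = np.full(n, offset)
            for _ in range(mstop):
                residual = y - fit             # negative gradient of L2 loss
                best_j, best_b, best_sse = 0, 0.0, np.inf
                for j in range(p):             # try every component-wise base-learner
                    x = X[:, j]
                    b = (x @ residual) / (x @ x)
                    sse = np.sum((residual - b * x) ** 2)
                    if sse < best_sse:
                        best_j, best_b, best_sse = j, b, sse
                coef[best_j] += nu * best_b    # small step nu gives implicit shrinkage
                fit += nu * best_b * X[:, best_j]
            return offset, coef

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 10))
        X -= X.mean(axis=0)                    # centre columns for intercept-free slopes
        y = 3 * X[:, 2] - 2 * X[:, 7] + rng.standard_normal(200)
        print(np.round(componentwise_l2_boost(X, y)[1], 2))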

    ASMs and Operational Algorithmic Completeness of Lambda Calculus

    We show that lambda calculus is a computation model which can simulate, step by step, any sequential deterministic algorithm for any computable function over integers, words, or any other datatype. More formally, given an algorithm over a family of computable functions (taken as primitive tools, i.e., as oracle functions for the algorithm), for every large enough constant K, each computation step of the algorithm can be simulated by exactly K successive reductions in a natural extension of lambda calculus with constants for the functions in the considered family. The proof is based on a fixed-point technique in lambda calculus and on Gurevich's Sequential Thesis, which allows one to identify sequential deterministic algorithms with Abstract State Machines. This extends to algorithms for partial computable functions in such a way that finite computations ending with exceptions are associated with finite reductions leading to terms of a particular, very simple form.
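
    The fixed-point technique the proof relies on can be illustrated in a few lines. Below is a hedged Python rendering of the call-by-value fixed-point combinator Z; it is only a toy analogue of the construction in the paper, not the paper's simulation itself.

        # Z combinator: the call-by-value fixed-point combinator, i.e. the
        # untyped lambda term Z = \f.(\x.f (\v.x x v)) (\x.f (\v.x x v)).
        Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

        # Recursion obtained purely through the fixed point,
        # with no named self-reference in the body.
        fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
        print(fact(5))  # 120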

    Linear Time LexDFS on Cocomparability Graphs

    Lexicographic depth first search (LexDFS) is a graph search protocol which has already proved to be a powerful tool on cocomparability graphs. Cocomparability graphs have been well studied by investigating their complements (comparability graphs) and their corresponding posets. Recently, however, LexDFS has led to a number of elegant polynomial and near linear time algorithms on cocomparability graphs when used as a preprocessing step [2, 3, 11]. The nonlinear running time of some of these results is a consequence of the complexity of this preprocessing step. We present the first linear time algorithm to compute a LexDFS cocomparability ordering, thereby answering a question raised in [2] and yielding the first linear time algorithms for the minimum path cover problem, and thus the Hamilton path problem, as well as the maximum independent set and minimum clique cover problems for this graph family.
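
    For readers unfamiliar with the search rule, here is a generic quadratic-time Python sketch of LexDFS itself: at each step, visit an unvisited vertex with the lexicographically largest label, then prepend its number to the labels of its unvisited neighbours. This is the textbook procedure, not the linear-time cocomparability algorithm of the paper.

        def lexdfs(adj, start=0):
            """Generic LexDFS: labels hold step numbers, newest first, and
            Python's list comparison gives the lexicographic order."""
            n = len(adj)
            label = [[] for _ in range(n)]
            visited = [False] * n
            order = []
            for i in range(n):
                if i == 0:
                    v = start
                else:
                    # ties between equal labels may be broken arbitrarily
                    v = max((u for u in range(n) if not visited[u]),
                            key=lambda u: label[u])
                visited[v] = True
                order.append(v)
                for u in adj[v]:
                    if not visited[u]:
                        label[u].insert(0, i)  # prepend the current step number
            return order

        # On the path 0-1-2-3, LexDFS from 0 visits the vertices in order 0,1,2,3.
        print(lexdfs([{1}, {0, 2}, {1, 3}, {2}]))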

    Minimizing sum of completion times on a single machine with sequence-dependent family setup times

    This paper presents a branch-and-bound (B&B) algorithm for minimizing the sum of completion times in a single-machine scheduling setting with sequence-dependent family setup times. The main feature of the B&B algorithm is a new lower bounding scheme that is based on a network formulation of the problem. With extensive computational tests, we demonstrate that the B&B algorithm can solve problems with up to 60 jobs and 12 families, where setup and processing times are uniformly distributed over various combinations of the [1,50] and [1,100] ranges.
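
    As a toy counterpart, here is a hedged Python branch-and-bound for the same objective that prunes with a simple SPT-based bound which ignores setups. The paper's network-based lower bounding scheme is stronger; every name below is ours.

        def bnb_schedule(p, fam, setup, first_setup):
            """Minimise the sum of completion times on one machine with
            sequence-dependent family setup times, by branch and bound.

            p[j]            processing time of job j
            fam[j]          family of job j
            setup[f][g]     setup incurred when family g follows family f
            first_setup[g]  setup before the very first job, of family g
            """
            n = len(p)
            best = {"value": float("inf"), "seq": None}

            def lower_bound(t, remaining):
                # Ignoring all setups, SPT order minimises the remaining
                # completion sum; dropping non-negative setups can only
                # decrease completion times, so this is a valid lower bound.
                lb, finish = 0.0, t
                for q in sorted(p[j] for j in remaining):
                    finish += q
                    lb += finish
                return lb

            def branch(seq, remaining, t, last, total):
                if not remaining:
                    if total < best["value"]:
                        best["value"], best["seq"] = total, list(seq)
                    return
                if total + lower_bound(t, remaining) >= best["value"]:
                    return                       # prune this subtree
                for j in sorted(remaining, key=lambda j: p[j]):
                    s = first_setup[fam[j]] if last is None else setup[last][fam[j]]
                    c = t + s + p[j]             # completion time of job j
                    branch(seq + [j], remaining - {j}, c, fam[j], total + c)

            branch([], set(range(n)), 0.0, None, 0.0)
            return best["value"], best["seq"]

        # Two families, four jobs: jobs 0,1 in family 0 and jobs 2,3 in family 1.
        value, seq = bnb_schedule(
            p=[2, 4, 3, 1], fam=[0, 0, 1, 1],
            setup=[[0, 5], [5, 0]], first_setup=[1, 1])
        print(value, seq)  # 35.0 [3, 2, 0, 1]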