
    A Fast Algorithm Finding the Shortest Reset Words

    In this paper we present a new fast algorithm that finds minimal reset words for finite synchronizing automata. The problem is known to be computationally hard, and our algorithm is exponential. Yet, it is faster than the algorithms used so far and it works well in practice. The main idea is to use a bidirectional BFS and radix (Patricia) tries to store and compare the resulting subsets. We give both theoretical and practical arguments showing that the branching factor is reduced efficiently. As a practical test we perform an experimental study of the length of the shortest reset word for random automata with $n$ states and 2 input letters. We follow Skvortsov and Tipikin, who performed such a study using a SAT solver and considering automata with up to $n = 100$ states. With our algorithm we are able to consider a much larger sample of automata with up to $n = 300$ states. In particular, we obtain a new, more precise estimate of the expected length of the shortest reset word, $\approx 2.5\sqrt{n-5}$. Comment: COCOON 2013. The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-642-38768-5_1
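
    To make the setting concrete, here is a minimal Python sketch of the classical single-direction BFS over subsets of states that algorithms of this kind improve upon: the full state set is repeatedly mapped through each input letter until a singleton image is reached. The transition-table layout, the Cerny-automaton example and all identifier names are illustrative assumptions, not the authors' bidirectional, trie-based implementation.

    from collections import deque

    def shortest_reset_word(delta, n, alphabet):
        # delta[a][q] is the state reached from state q on letter a.
        # Returns a shortest word whose action maps every state to one
        # common state, or None if the automaton is not synchronizing.
        start = frozenset(range(n))
        if len(start) == 1:
            return ""
        seen = {start}
        queue = deque([(start, "")])
        while queue:
            subset, word = queue.popleft()
            for a in alphabet:
                image = frozenset(delta[a][q] for q in subset)
                if len(image) == 1:
                    return word + a
                if image not in seen:
                    seen.add(image)
                    queue.append((image, word + a))
        return None

    # Cerny automaton on 4 states: 'a' is a cyclic shift, 'b' merges state 0 into 1.
    # Its shortest reset word has the extremal length (n-1)^2 = 9.
    delta = {"a": [1, 2, 3, 0], "b": [1, 1, 2, 3]}
    print(shortest_reset_word(delta, 4, "ab"))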

    Investigations on push-relabel based algorithms for the maximum transversal problem

    We investigate the push-relabel algorithm for solving the problem of finding a maximum cardinality matching in a bipartite graph in the context of the maximum transversal problem. We describe in detail an optimized yet easy-to-implement version of the algorithm and fine-tune its parameters. We also introduce new performance-enhancing techniques. On a wide range of real-world instances, we compare the push-relabel algorithm with state-of-the-art augmenting path-based algorithms and the recently proposed pseudoflow approach. We conclude that a carefully tuned push-relabel algorithm is competitive with all known augmenting path-based algorithms, and superior to the pseudoflow-based ones.
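
    As a point of reference for the comparison described above, the following Python sketch implements a textbook augmenting path-based baseline (Kuhn's DFS-based algorithm), one of the family the paper compares against, rather than the tuned push-relabel variant the authors study. The adjacency-list format, function names and tiny example are assumptions made for illustration.

    def max_bipartite_matching(adj, n_left, n_right):
        # Classical augmenting-path algorithm for maximum cardinality
        # bipartite matching. adj[u] lists the right-side neighbours of
        # left vertex u; match_right[v] is the left vertex matched to v.
        match_right = [-1] * n_right

        def try_augment(u, visited):
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    if match_right[v] == -1 or try_augment(match_right[v], visited):
                        match_right[v] = u
                        return True
            return False

        matching_size = 0
        for u in range(n_left):
            if try_augment(u, [False] * n_right):
                matching_size += 1
        return matching_size, match_right

    # Tiny example: left vertices {0,1,2}, right vertices {0,1};
    # the maximum matching has size 2.
    adj = [[0], [0, 1], [1]]
    print(max_bipartite_matching(adj, 3, 2))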

    Space Efficient Algorithms for Breadth-Depth Search

    Continuing the recent trend, in this article we design several space-efficient algorithms for two well-known graph search methods. Both of these search methods share the same name, {\it breadth-depth search} (henceforth {\sf BDS}), although they work in entirely different fashions. The classical implementations of these graph search methods take $O(m+n)$ time and $O(n \lg n)$ bits of space in the standard word RAM model (with word size $\Theta(\lg n)$ bits), where $m$ and $n$ denote the number of edges and vertices of the input graph respectively. Our goal here is to beat the space bound of the classical implementations, and to design $o(n \lg n)$-space algorithms for these search methods while paying little to no penalty in the running time. Note that our space bounds (i.e., $o(n \lg n)$ bits of space) do not even allow us to explicitly store the information required to implement the classical algorithms, yet our algorithms visit and report all the vertices of the input graph in the correct order. Comment: 12 pages. This work will appear in FCT 201
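
    For readers unfamiliar with the method, here is a short Python sketch of one common formulation of breadth-depth search using the classical bookkeeping: a visited bit per vertex plus an explicit stack of vertex names, i.e. exactly the $\Theta(n \lg n)$-bit regime the paper improves on. The particular variant (marking vertices as soon as they are pushed) and the example graph are assumptions for illustration; the two BDS variants treated in the paper differ in such details.

    def breadth_depth_search(adj, root):
        # When a vertex is taken from the stack, all of its not-yet-seen
        # neighbours are pushed at once (the "breadth" part), and the most
        # recently pushed vertex is processed next (the "depth" part).
        # adj[v] is the list of neighbours of v; returns vertices in BDS order.
        n = len(adj)
        seen = [False] * n
        order = []
        stack = [root]
        seen[root] = True
        while stack:
            v = stack.pop()
            order.append(v)
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True   # marked when pushed, so it is pushed only once
                    stack.append(w)
        return order

    # Small example graph given as adjacency lists, starting from vertex 0.
    adj = [[1, 2, 3], [0, 4], [0], [0, 4], [1, 3]]
    print(breadth_depth_search(adj, 0))   # [0, 3, 4, 2, 1]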

    Distance-generalized Core Decomposition

    The $k$-core of a graph is defined as the maximal subgraph in which every vertex is connected to at least $k$ other vertices within that subgraph. In this work we introduce a distance-based generalization of the notion of $k$-core, which we refer to as the $(k,h)$-core, i.e., the maximal subgraph in which every vertex has at least $k$ other vertices at distance $\leq h$ within that subgraph. We study the properties of the $(k,h)$-core, showing that it preserves many of the nice features of the classic core decomposition (e.g., its connection with the notion of distance-generalized chromatic number) and that it retains its usefulness for speeding up or approximating distance-generalized notions of dense structures, such as the $h$-club. Computing the distance-generalized core decomposition over large networks is intrinsically complex. However, by exploiting clever upper and lower bounds we can partition the computation into a set of totally independent subcomputations, opening the door to top-down exploration and to multithreading, and thus achieving an efficient algorithm.
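
    A naive way to see the definition in action: the Python sketch below extracts a single $(k,h)$-core by repeatedly peeling vertices that have fewer than $k$ others within distance $h$ in the remaining subgraph. This is only a baseline illustrating the definition, not the bound-driven, parallelizable decomposition algorithm of the paper; the helper names and the example graph are assumptions.

    from collections import deque

    def h_ball_degree(adj, v, h, alive):
        # Number of other alive vertices within distance <= h of v,
        # restricted to the currently alive subgraph (BFS up to depth h).
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] == h:
                continue
            for w in adj[u]:
                if alive[w] and w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return len(dist) - 1

    def kh_core(adj, k, h):
        # Naive peeling: repeatedly delete any vertex that has fewer than k
        # other vertices within distance h inside the remaining subgraph.
        # Returns the vertex set of the (k,h)-core (possibly empty).
        n = len(adj)
        alive = [True] * n
        changed = True
        while changed:
            changed = False
            for v in range(n):
                if alive[v] and h_ball_degree(adj, v, h, alive) < k:
                    alive[v] = False
                    changed = True
        return {v for v in range(n) if alive[v]}

    # Path 0-1-2-3-4: every vertex has at least 2 others within distance 2,
    # so the (2,2)-core is the whole path.
    adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
    print(kh_core(adj, 2, 2))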

    Succinct Data Structures for Families of Interval Graphs

    We consider the problem of designing succinct data structures for interval graphs with $n$ vertices while supporting degree, adjacency, neighborhood and shortest path queries in optimal time in the $\Theta(\log n)$-bit word RAM model. The degree query reports the number of edges incident to a given vertex in constant time, the adjacency query returns true if there is an edge between two vertices in constant time, the neighborhood query reports the set of all adjacent vertices in time proportional to the degree of the queried vertex, and the shortest path query returns a shortest path in time proportional to its length; thus the running times of these queries are optimal. Towards showing succinctness, we first show that at least $n\log n - 2n\log\log n - O(n)$ bits are necessary to represent any unlabeled interval graph $G$ with $n$ vertices, answering an open problem of Yang and Pippenger [Proc. Amer. Math. Soc. 2017]. This is augmented by a data structure of size $n\log n + O(n)$ bits that not only supports the aforementioned queries optimally but is also capable of executing various combinatorial algorithms (such as proper coloring, maximum independent set, etc.) on the input interval graph efficiently. Finally, we extend our ideas to other variants of interval graphs, for example proper/unit interval graphs, $k$-proper and $k$-improper interval graphs, and circular-arc graphs, and design succinct/compact data structures for these graph classes as well, along with supporting queries on them efficiently.
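
    The queries themselves are easy to state on a plain, non-succinct interval representation. The Python sketch below answers adjacency, degree and neighborhood directly from interval endpoints; it stores the endpoints several times over (interval lists plus sorted endpoint arrays), which is precisely the redundancy the paper's $n\log n + O(n)$-bit structure avoids. The class layout and example intervals are illustrative assumptions, not the paper's data structure.

    import bisect

    class IntervalGraph:
        # Vertex v is the closed interval [left[v], right[v]]; two vertices
        # are adjacent iff their intervals intersect.
        def __init__(self, intervals):
            self.left = [l for l, r in intervals]
            self.right = [r for l, r in intervals]
            self.sorted_left = sorted(self.left)
            self.sorted_right = sorted(self.right)
            self.n = len(intervals)

        def adjacent(self, u, v):
            # Intervals intersect iff neither ends before the other starts.
            return u != v and self.left[u] <= self.right[v] and self.left[v] <= self.right[u]

        def degree(self, v):
            # Non-neighbours of v either end before v starts or start after v ends.
            end_before = bisect.bisect_left(self.sorted_right, self.left[v])
            start_after = self.n - bisect.bisect_right(self.sorted_left, self.right[v])
            return self.n - 1 - end_before - start_after

        def neighbourhood(self, v):
            return [u for u in range(self.n) if self.adjacent(u, v)]

    # Four intervals forming the path 0-1-2-3.
    g = IntervalGraph([(0, 2), (1, 4), (3, 6), (5, 7)])
    print(g.degree(1), g.adjacent(0, 2), g.neighbourhood(1))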

    The distance-based critical node detection problem : models and algorithms

    In the wake of terrorism and natural disasters, assessing networked systems for vulnerability to failures that arise from these events is essential to maintaining the operations of the systems. This is crucial given the heavy dependence of daily social and economic activities on networked systems such as transport, telecommunication and energy networks, as well as the interdependence of these networks. In this thesis, we explore methods to assess the vulnerability of networked systems to element failures which employ connectivity as the performance measure for vulnerability. The associated optimisation problem, termed the critical node (edge) detection problem, seeks to identify a subset of nodes (edges) of a network whose deletion (failure) optimises a network connectivity objective. Traditional connectivity measures employed in most studies of the critical node detection problem overlook the internal cohesiveness of networks and the extent of connectivity in the network. This limits the effectiveness of the developed methods in uncovering vulnerability with regard to network connectivity. Our work therefore focuses on distance-based connectivity, a fairly new class of connectivity measures introduced for studying the critical node detection problem to overcome the limitations of traditional measures. In Chapter 1, we provide an introduction outlining the motivations and the methods related to our study. In Chapter 2, we review the literature on the critical node detection problem as well as its application areas and related problems. Following this, we formally introduce the distance-based critical node detection problem in Chapter 3, where we propose new integer programming models for the case of hop-based distances and an efficient algorithm for the separation problems associated with the models. We also propose two families of valid inequalities. In Chapter 4, we consider the distance-based critical node detection problem using a heuristic approach in which we propose a centrality-based heuristic that employs a backbone crossover and a centrality-based neighbourhood search. In Chapter 5, we present generalisations of the methods proposed in Chapter 3 to edge-weighted graphs. We also introduce the edge-deletion version of the problem, which we term the distance-based critical edge detection problem. Throughout Chapters 3, 4 and 5, we provide computational experiments. Finally, in Chapter 6 we present conclusions as well as future research directions. Keywords: Network Vulnerability, Critical Node Detection Problem, Distance-based Connectivity, Integer Programming, Lazy Constraints, Branch-and-cut, Heuristics.
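
    To illustrate the hop-based objective, here is a small Python sketch of a naive greedy baseline: it deletes, one vertex at a time, the vertex whose removal minimises the number of vertex pairs that remain within $h$ hops of each other. It is neither the integer programming models of Chapter 3 nor the centrality-based heuristic with backbone crossover of Chapter 4; the objective function, budget parameter and example graph are assumptions made for illustration.

    from collections import deque

    def pairs_within_h(adj, removed, h):
        # Number of unordered vertex pairs connected by a path of at most
        # h hops in the graph with the 'removed' vertices deleted.
        n = len(adj)
        count = 0
        for s in range(n):
            if s in removed:
                continue
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if dist[u] == h:
                    continue
                for w in adj[u]:
                    if w not in removed and w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            count += len(dist) - 1
        return count // 2  # each pair is counted from both endpoints

    def greedy_critical_nodes(adj, budget, h):
        # Greedily delete the vertex whose removal minimises the number of
        # vertex pairs still within hop-distance h, until the budget is spent.
        removed = set()
        for _ in range(budget):
            best = min((v for v in range(len(adj)) if v not in removed),
                       key=lambda v: pairs_within_h(adj, removed | {v}, h))
            removed.add(best)
        return removed

    # Two triangles joined through vertex 3; deleting vertex 3 disconnects them.
    adj = [[1, 2, 3], [0, 2], [0, 1], [0, 4], [3, 5, 6], [4, 6], [4, 5]]
    print(greedy_critical_nodes(adj, 1, 2))   # {3}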