
    Warshall’s algorithm—survey and applications

    This survey presents the well-known Warshall's algorithm, a generalization, and some interesting applications: transitive closure of relations, distances between vertices in graphs, number of paths in acyclic digraphs, all paths in digraphs, scattered complexity for rainbow words, and special walks in finite automata.
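
    As a rough illustration of the core technique the survey covers, the following is a minimal sketch of Warshall's algorithm computing the transitive closure of a relation given as a boolean adjacency matrix; the function name and matrix encoding are illustrative, not taken from the survey.

```python
def warshall_transitive_closure(adj):
    """Transitive closure of a relation given as a boolean adjacency matrix
    (list of lists). Returns a matrix where closure[i][j] is True iff j is
    reachable from i by a non-empty path."""
    n = len(adj)
    closure = [row[:] for row in adj]          # work on a copy
    for k in range(n):                         # intermediate vertex
        for i in range(n):
            if closure[i][k]:                  # rows without i->k cannot improve
                for j in range(n):
                    if closure[k][j]:
                        closure[i][j] = True
    return closure

# Example: the relation {(0,1), (1,2)} gains the derived pair (0,2).
print(warshall_transitive_closure([
    [False, True,  False],
    [False, False, True ],
    [False, False, False],
]))
```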

    A heterogeneous block algorithm for finding the shortest paths between all pairs of graph vertices

    The problem of finding the shortest paths between all pairs of vertices in a weighted directed graph is considered. Known approaches include Dijkstra's and Floyd-Warshall's algorithms, homogeneous block and parallel algorithms, and other algorithms for this problem. A new heterogeneous block algorithm is proposed which distinguishes several block types and, for each type, takes into account the shared hierarchical memory organization and the multi-core structure of the processor. The proposed heterogeneous block algorithms are compared with the generally accepted homogeneous universal block algorithm at both the theoretical and experimental levels. The main emphasis is on exploiting the heterogeneity, the interaction of blocks during computation, and the variation in block size, block-matrix size and total number of blocks, in order to identify opportunities for reducing the amount of computation performed per block, reducing the activity of the processor's cache memory, and determining the influence of the calculation time of each block type on the total execution time of the heterogeneous block algorithm. A recurrent resynchronized algorithm for calculating the diagonal block (D0) is proposed; it improves the use of the processor's cache and reduces by up to 3 times the number of iterations and the amount of data needed to calculate the diagonal block, which accelerates the calculation of the diagonal block by up to 60%. For more efficient use of the cache memory, permutations of the basic k-i-j loops are proposed for the algorithms calculating the cross blocks (C1 and C2) and the updated blocks (U3). These permutations, combined with the proposed algorithm for the diagonal block, reduce the total runtime of the heterogeneous block algorithm by 13% on average compared with the homogeneous block algorithm.
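
    The block types named in the abstract (D0, C1, C2, U3) correspond to the phases of a blocked Floyd-Warshall scheme. The sketch below shows a plain homogeneous blocked variant of that scheme, i.e. roughly the baseline the heterogeneous algorithm is compared against, not the authors' heterogeneous algorithm itself; the block size, helper names and tiling helpers are illustrative.

```python
import math

INF = math.inf

def fw_block(C, A, B):
    """Relax block C along paths through A (row factors) and B (column factors):
    C[i][j] = min(C[i][j], A[i][k] + B[k][j])."""
    n = len(C)
    for k in range(n):
        for i in range(n):
            aik = A[i][k]
            if aik == INF:
                continue
            row_c, row_b = C[i], B[k]
            for j in range(n):
                if aik + row_b[j] < row_c[j]:
                    row_c[j] = aik + row_b[j]

def blocked_floyd_warshall(dist, s):
    """In-place all-pairs shortest paths on an n x n matrix `dist`
    (INF = no edge), processed in s x s tiles; n must be a multiple of s."""
    n = len(dist)
    m = n // s
    def tile(I, J):     # copy tile (I, J) out of the big matrix
        return [dist[I*s + i][J*s:(J+1)*s] for i in range(s)]
    def put(I, J, T):   # write a tile back
        for i in range(s):
            dist[I*s + i][J*s:(J+1)*s] = T[i]
    for B in range(m):
        D = tile(B, B); fw_block(D, D, D); put(B, B, D)          # diagonal block (D0)
        for J in range(m):
            if J != B:
                C = tile(B, J); fw_block(C, D, C); put(B, J, C)  # cross row (C1)
        for I in range(m):
            if I != B:
                C = tile(I, B); fw_block(C, C, D); put(I, B, C)  # cross column (C2)
        for I in range(m):
            for J in range(m):
                if I != B and J != B:
                    U = tile(I, J)
                    fw_block(U, tile(I, B), tile(B, J))          # updated blocks (U3)
                    put(I, J, U)
```

    In this homogeneous baseline the same k-i-j relaxation kernel is reused for every block type; the paper's point is precisely that specialising the kernel and loop order per block type pays off on hierarchical-memory, multi-core hardware.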

    Symbolic Algorithms for Language Equivalence and Kleene Algebra with Tests

    We first propose algorithms for checking language equivalence of finite automata over a large alphabet. We use symbolic automata, where the transition function is compactly represented using (multi-terminal) binary decision diagrams (BDDs). The key idea consists in computing a bisimulation by exploring reachable pairs symbolically, so as to avoid redundancies. This idea can be combined with already existing optimisations, and we show in particular a nice integration with the disjoint-set forest data structure from Hopcroft and Karp's standard algorithm. Then we consider Kleene algebra with tests (KAT), an algebraic theory that can be used for verification in various domains ranging from compiler optimisation to network programming analysis. This theory is decidable by reduction to language equivalence of automata on guarded strings, a particular kind of automata with exponentially large alphabets. We propose several methods for constructing symbolic automata from KAT expressions, based either on Brzozowski's derivatives or on standard automata constructions. All in all, this results in efficient algorithms for deciding equivalence of KAT expressions.
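
    The BDD-based symbolic representation does not fit in a short sketch, but the Hopcroft-Karp style check that the symbolic algorithm builds on can be illustrated on explicit DFAs. In the hedged sketch below the DFA encoding (transition dictionaries, tagged states) is an assumption made for illustration; only the use of a union-find structure to skip already-merged pairs reflects the idea mentioned in the abstract.

```python
def dfa_equivalent(delta1, acc1, s1, delta2, acc2, s2, alphabet):
    """Hopcroft-Karp style language-equivalence check for two complete DFAs.
    States are tagged (0, q) / (1, q) to keep the two automata disjoint.
    delta: dict mapping (state, letter) -> state; acc: set of accepting states."""
    parent = {}

    def find(x):                       # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    todo = [((0, s1), (1, s2))]
    while todo:
        p, q = todo.pop()
        if find(p) == find(q):
            continue                   # pair already merged: skip (the key optimisation)
        accept_p = p[1] in acc1 if p[0] == 0 else p[1] in acc2
        accept_q = q[1] in acc1 if q[0] == 0 else q[1] in acc2
        if accept_p != accept_q:
            return False               # a distinguishing word exists
        union(p, q)
        for a in alphabet:
            succ = lambda t: (t[0], (delta1 if t[0] == 0 else delta2)[(t[1], a)])
            todo.append((succ(p), succ(q)))
    return True
```

    The symbolic variant in the paper replaces the explicit loop over the alphabet by a BDD traversal, so that pairs of successors are enumerated per decision-diagram path rather than per letter.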

    Semiring-based provenance for graph databases

    The growing amount of data collected by sensors or generated by human interaction has led to an increasing use of graph databases, an efficient model for representing intricate data. Techniques for keeping track of the history of computations applied to data inside classical relational database systems are also topical because of their application to enforcing data-protection regulations (e.g., the GDPR). Our research work mixes the two by considering a semiring-based provenance model for navigational queries over graph databases. We first present a comprehensive survey on semiring theory and its applications in different fields of computer science, geared towards their relevance for our context. From the richness of the literature, we notably obtain a lower bound for the complexity of full provenance computation in our setting. In a second part, we focus on the model itself by introducing a toolkit of provenance-aware algorithms, each targeting specific properties of the semiring in use. We notably introduce a new method based on lattice theory permitting an efficient provenance computation for complex graph queries. We propose an open-source implementation of the above-mentioned algorithms, and we conduct an experimental study over large real-world transportation networks, witnessing the efficiency of our approach in practical scenarios. We finally consider how this framework is positioned compared to other provenance models, such as the semiring-based Datalog provenance model. We make explicit how the methods we applied to graph databases can be extended to Datalog queries, and we show how they can be seen as an extension of the semi-naïve evaluation strategy. To leverage this fact, we extend the capabilities of Soufflé, a state-of-the-art Datalog solver, to design an efficient provenance-aware Datalog evaluator. Experimental results based on our open-source implementation show that this approach stays competitive with dedicated graph solutions, despite being more general. We conclude by discussing some research ideas for improving the model and by stating open questions raised by our work.
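
    As a concrete (and deliberately naive) illustration of semiring-based provenance for reachability-style navigational queries, the sketch below iterates a fixed point of annotations over a graph, parameterised by the semiring; the thesis' semi-naïve and lattice-based methods avoid the re-derivations this naive loop performs. All names and the edge encoding here are illustrative assumptions.

```python
from collections import defaultdict

class Semiring:
    """A semiring given by (zero, one, plus, times)."""
    def __init__(self, zero, one, plus, times):
        self.zero, self.one, self.plus, self.times = zero, one, plus, times

# Tropical semiring (min, +): annotations are shortest distances.
TROPICAL = Semiring(float("inf"), 0.0, min, lambda a, b: a + b)
# Boolean semiring: annotations reduce to plain reachability.
BOOLEAN = Semiring(False, True, lambda a, b: a or b, lambda a, b: a and b)

def provenance_from(source, edges, sr):
    """Naive fixed-point computation of the semiring annotation of every node
    reachable from `source`. `edges` is a list of (u, v, weight) with weights
    taken in the semiring; converges for e.g. the tropical and boolean cases."""
    ann = defaultdict(lambda: sr.zero)
    ann[source] = sr.one
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            candidate = sr.plus(ann[v], sr.times(ann[u], w))
            if candidate != ann[v]:
                ann[v], changed = candidate, True
    return dict(ann)

# Shortest distances from node "a" on a tiny weighted graph.
print(provenance_from("a", [("a", "b", 2.0), ("b", "c", 1.0), ("a", "c", 5.0)], TROPICAL))
```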

    GIS and optimisation: potential benefits for emergency facility location in humanitarian logistics

    Floods are among the most dangerous and common disasters worldwide, and these disasters are closely linked to the geography of the affected area. As a result, several papers in the academic field of humanitarian logistics have incorporated the use of Geographical Information Systems (GIS) for disaster management. However, most of the contributions in the literature use these systems for network analysis and display, with only a few papers exploiting the capabilities of GIS to improve planning and preparedness. To show the capabilities of GIS for disaster management, this paper uses raster GIS to analyse potential flooding scenarios and provide input to an optimisation model. The combination is applied to two real-world floods in Mexico to evaluate the value of incorporating GIS in disaster planning. The results provide evidence that including GIS analysis in a decision-making tool for disaster management can improve the outcome of disaster operations by reducing the number of facilities used that are at risk of flooding. The empirical results highlight the importance of integrating advanced remote-sensing images and GIS in future systems for humanitarian logistics.
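
    The abstract does not spell out the optimisation model, so the following is only a hypothetical sketch of how a flood-risk layer derived from raster GIS might constrain a greedy facility-location choice; every name, parameter and the greedy strategy itself are assumptions made for illustration.

```python
def pick_facilities(candidates, demand_points, flood_risk, coverage_radius, k):
    """Greedy sketch: choose up to k candidate facilities that are not flagged
    as flood-prone and that cover the most still-uncovered demand points.
    `flood_risk` maps a candidate site to True/False (e.g. derived from a
    raster flooding scenario); distances are plain Euclidean."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    safe = [c for c in candidates if not flood_risk[c]]   # exclude at-risk sites
    uncovered = set(demand_points)
    chosen = []
    for _ in range(k):
        best, best_cover = None, set()
        for c in safe:
            if c in chosen:
                continue
            cover = {d for d in uncovered if dist(c, d) <= coverage_radius}
            if len(cover) > len(best_cover):
                best, best_cover = c, cover
        if best is None or not best_cover:
            break
        chosen.append(best)
        uncovered -= best_cover
    return chosen, uncovered
```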

    The Open Algebraic Path Problem

    The algebraic path problem provides a general setting for shortest path algorithms in optimization and computer science. We explain the universal property of solutions to the algebraic path problem by constructing a left adjoint functor whose values are given by these solutions. This paper extends the algebraic path problem to networks equipped with input and output boundaries. We show that the algebraic path problem is functorial as a mapping from a double category whose horizontal composition is gluing of open networks. We introduce functional open matrices, for which the functoriality of the algebraic path problem has a more practical expression.
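
    A minimal concrete instance of the algebraic path problem discussed here is the classical Floyd-Warshall-Kleene scheme over a *-semiring; the sketch below is that textbook scheme (not the paper's categorical construction), with an illustrative tropical-semiring instance giving all-pairs shortest paths.

```python
def algebraic_path(matrix, plus, times, star, one):
    """Floyd-Warshall-Kleene scheme: compute A* for an n x n matrix over a
    *-semiring, so that (A*)[i][j] sums (with `plus`) the weights of all
    paths from i to j, including the empty path on the diagonal."""
    n = len(matrix)
    a = [row[:] for row in matrix]
    for k in range(n):
        skk = star(a[k][k])
        for i in range(n):
            for j in range(n):
                a[i][j] = plus(a[i][j], times(a[i][k], times(skk, a[k][j])))
    for i in range(n):                 # account for empty paths
        a[i][i] = plus(a[i][i], one)
    return a

# Instance: tropical semiring gives all-pairs shortest paths
# (plus = min, times = +, star(x) = 0 for x >= 0, one = 0).
INF = float("inf")
print(algebraic_path(
    [[INF, 3.0, INF],
     [INF, INF, 1.0],
     [7.0, INF, INF]],
    plus=min, times=lambda x, y: x + y, star=lambda x: 0.0, one=0.0,
))
```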

    Mechanising an algebraic rely-guarantee refinement calculus

    PhD Thesis. Despite rely-guarantee (RG) being a well-studied program logic established in the 1980s, it was not until recently that researchers realised that rely and guarantee conditions could be treated as independent programming constructs. This recent reformulation of RG paved the way to algebraic characterisations which have helped to better understand the difficulties that arise in the practical application of this development approach. The primary focus of this thesis is to provide automated tool support for a rely-guarantee refinement calculus proposed by Hayes et al., where rely and guarantee are defined as independent commands. Our motivation is to investigate the application of an algebraic approach to derive concrete examples using this calculus. In the course of this thesis, we locate and fix a few issues involving the refinement language, its operational semantics and pre-existing proofs. Moreover, we extend the refinement calculus of Hayes et al. to cover indexed parallel composition, non-atomic evaluation of expressions within specifications, and assignment to indexed arrays. These extensions are illustrated via concrete examples. Special attention is given to design decisions that simplify the application of the mechanised theory. For example, we leave part of the design of the expression language in the hands of the user, at the cost of requiring the user to define the notion of undefinedness for unary and binary operators; and we formalise a notion of indexed parallelism that is parametric on the type of the indexes, a deliberate choice that simplifies the formalisation of algorithms. Additionally, we use stratification to reduce the number of cases in simulation proofs involving the operational semantics. Finally, we also use the algebra to discuss the role of types in program derivation.

    IST Austria Thesis

    This dissertation focuses on algorithmic aspects of program verification, and presents modeling and complexity advances on several problems related to the static analysis of programs, the stateless model checking of concurrent programs, and the competitive analysis of real-time scheduling algorithms. Our contributions can be broadly grouped into five categories. Our first contribution is a set of new algorithms and data structures for the quantitative and data-flow analysis of programs, based on the graph-theoretic notion of treewidth. It has been observed that the control-flow graphs of typical programs have special structure, and are characterized as graphs of small treewidth. We utilize this structural property to provide faster algorithms for the quantitative and data-flow analysis of recursive and concurrent programs. In most cases we make an algebraic treatment of the considered problem, where several interesting analyses, such as reachability, shortest-path, and certain kinds of data-flow analysis problems, follow as special cases. We exploit the constant-treewidth property to obtain algorithmic improvements for on-demand versions of the problems, and provide data structures with various tradeoffs between the resources spent in the preprocessing and querying phases. We also improve on the algorithmic complexity of quantitative problems outside the algebraic path framework, namely of the minimum mean-payoff, minimum ratio, and minimum initial credit for energy problems. Our second contribution is a set of algorithms for Dyck reachability with applications to data-dependence analysis and alias analysis. In particular, we develop an optimal algorithm for Dyck reachability on bidirected graphs, which are ubiquitous in context-insensitive, field-sensitive points-to analysis. Additionally, we develop an efficient algorithm for context-sensitive data-dependence analysis via Dyck reachability, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library to the complexity of the client analysis is (i) linear in the number of call sites and (ii) only logarithmic in the size of the whole library, as opposed to linear in the size of the whole library. Finally, we prove that Dyck reachability is Boolean Matrix Multiplication-hard in general, and the hardness also holds for graphs of constant treewidth. This hardness result strongly indicates that there exist no combinatorial algorithms for Dyck reachability with truly subcubic complexity. Our third contribution is the formalization and algorithmic treatment of the Quantitative Interprocedural Analysis framework. In this framework, the transitions of a recursive program are annotated as good, bad or neutral, and receive a weight which measures the magnitude of their respective effect. The Quantitative Interprocedural Analysis problem asks to determine whether there exists an infinite run of the program where the long-run ratio of the bad weights over the good weights is above a given threshold. We illustrate how several quantitative problems related to static analysis of recursive programs can be instantiated in this framework, and present some case studies in this direction. Our fourth contribution is a new dynamic partial-order reduction for the stateless model checking of concurrent programs. Traditional approaches rely on the standard Mazurkiewicz equivalence between traces, by means of partitioning the trace space into equivalence classes, and attempting to explore a few representatives from each class. We present a new dynamic partial-order reduction method called Data-centric Partial Order Reduction (DC-DPOR). Our algorithm is based on a new equivalence between traces, called observation equivalence. DC-DPOR explores a coarser partitioning of the trace space than any exploration method based on the standard Mazurkiewicz equivalence. Depending on the program, the new partitioning can be even exponentially coarser. Additionally, DC-DPOR spends only polynomial time in each explored class. Our fifth contribution is the use of automata and game-theoretic verification techniques in the competitive analysis and synthesis of real-time scheduling algorithms for firm-deadline tasks. On the analysis side, we leverage automata on infinite words to compute the competitive ratio of real-time schedulers subject to various environmental constraints. On the synthesis side, we introduce a new instance of two-player mean-payoff partial-information games, and show how the synthesis of an optimal real-time scheduler can be reduced to computing winning strategies in this new type of game.
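
    The dissertation's optimal algorithm for Dyck reachability on bidirected graphs is more involved, but the collapsing idea commonly associated with the bidirected case can be sketched with a union-find structure: whenever two edges carrying the same open-parenthesis label leave the same (already merged) node, their targets are merged. The naive fixed-point version below is an illustrative sketch under that reading of the problem, not the dissertation's algorithm; names and the edge encoding are assumptions.

```python
class DSU:
    """Minimal union-find with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry
            return True
        return False

def bidirected_dyck_collapse(n, open_edges):
    """Collapse nodes of a bidirected Dyck-reachability instance.
    `open_edges` lists (u, label, v) for edges u --(label--> v; the matching
    reverse close-parenthesis edges are implicit (bidirectedness).
    Returns a class representative per node after the collapsing fixed point."""
    dsu = DSU(n)
    changed = True
    while changed:
        changed = False
        groups = {}  # (source class, label) -> one previously seen target
        for u, label, v in open_edges:
            key = (dsu.find(u), label)
            if key in groups:
                if dsu.union(groups[key], v):   # same label out of same class: merge targets
                    changed = True
            else:
                groups[key] = v
    return [dsu.find(x) for x in range(n)]
```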