30 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    Traveling Salesman Problem

    This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the TSP. It brings together researchers working with Artificial Immune Systems, Genetic Algorithms, Neural Networks and Differential Evolution Algorithms. Hybrid systems, such as Fuzzy Maps, Chaotic Maps and Parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, making it a vital tool for researchers and graduate students in applied Mathematics, Computer Science and Engineering.

    Traveling Salesman Problem

    The idea behind the TSP was conceived in the mid-1930s by the Austrian mathematician Karl Menger, who invited the research community to consider a problem from everyday life from a mathematical point of view: a traveling salesman has to visit each of a list of m cities exactly once and then return to the home city. He knows the cost of traveling from any city i to any other city j. Which tour of least possible cost, then, can the salesman take? This book considers the problem of finding algorithmic techniques that lead to good or optimal solutions for the TSP (or for some closely related problems). The TSP is a very attractive problem for the research community because it arises as a natural subproblem in many applications concerning everyday life. Indeed, any application in which an optimal ordering of a number of items must be chosen, such that the total cost of a solution is determined by adding up the costs arising from pairs of successive items, can be modelled as a TSP instance. Thus, studying the TSP can never be considered abstract research with no practical importance.
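The cost model described above can be made concrete with a small sketch (illustrative only; the cost matrix and the brute-force solver are invented for the example and are feasible only for very small m):

```python
# Toy illustration of the TSP cost model: the cost of a tour is the sum
# of the costs between successive cities, plus the return leg home.
from itertools import permutations

def tour_cost(tour, cost):
    """Total cost of visiting cities in `tour` order and returning home."""
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(cost):
    """Exact solver by enumeration -- only feasible for very small m."""
    m = len(cost)
    # Fix city 0 as the home city to avoid enumerating rotations of the same tour.
    best = min(permutations(range(1, m)), key=lambda p: tour_cost((0,) + p, cost))
    return (0,) + best, tour_cost((0,) + best, cost)

# Invented asymmetric cost matrix for 4 cities.
cost = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
tour, c = brute_force_tsp(cost)  # best tour (0, 2, 3, 1) with cost 21
```

Heuristics such as the evolutionary algorithms surveyed in this book replace the exponential enumeration with guided search over the same cost function.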

    Efficient local search for Pseudo Boolean Optimization

    Algorithms and the Foundations of Software technology

    A study on the Probabilistic Interval-based Event Calculus

    Complex Event Recognition is the subdivision of Artificial Intelligence that aims to design and construct systems that quickly process large and often heterogeneous streams of data and deduce in a timely manner, based on definitions set by domain experts, the occurrence of non-trivial and interesting incidents. The purpose of such systems is to provide useful insights into involved and demanding situations that would otherwise be difficult to monitor, and to assist decision making. Uncertainty and noise are inherent in such data streams and therefore Probability Theory becomes necessary in order to deal with them. The probabilistic recognition of Complex Events can be done in a timepoint-based or an interval-based manner. This thesis focuses on PIEC, a state-of-the-art probabilistic, interval-based Complex Event Recognition algorithm. We present the algorithm and examine it in detail. We study its correctness through a series of mathematical proofs of its soundness and completeness. Afterwards, we provide a thorough experimental evaluation and a comparison to point-based probabilistic Event Recognition methods. Our evaluation shows that PIEC consistently displays better Recall measures, often at the expense of a generally worse Precision. We then focus on cases where PIEC performs significantly better and cases where it falls short, in an effort to detect and state its main strengths and weaknesses. We also set the general directions for further research on the topic, parts of which are already in progress.
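The point-based versus interval-based distinction drawn above can be illustrated with a toy baseline (this is not the PIEC algorithm itself — PIEC reasons over interval probabilities and can accept intervals containing sub-threshold dips, which this naive merge cannot; the probabilities below are invented):

```python
# Naive baseline: point-based recognition thresholds each timepoint's
# event probability independently; merging consecutive surviving
# timepoints yields maximal intervals. NOT the PIEC algorithm.

def point_based(probs, threshold):
    """Timepoints whose instantaneous probability meets the threshold."""
    return [t for t, p in enumerate(probs) if p >= threshold]

def maximal_intervals(probs, threshold):
    """Merge consecutive above-threshold timepoints into maximal intervals."""
    intervals, start = [], None
    for t, p in enumerate(probs):
        if p >= threshold and start is None:
            start = t                       # interval opens
        elif p < threshold and start is not None:
            intervals.append((start, t - 1))  # interval closes
            start = None
    if start is not None:
        intervals.append((start, len(probs) - 1))
    return intervals

probs = [0.1, 0.7, 0.8, 0.3, 0.9, 0.9, 0.2]
points = point_based(probs, 0.5)          # [1, 2, 4, 5]
intervals = maximal_intervals(probs, 0.5) # [(1, 2), (4, 5)]
```

A naive merge like this splits at the dip at timepoint 3; an interval-based recognizer such as PIEC may instead report one longer interval when its overall probability remains high, which is one intuition behind its better Recall.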

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany

    Parallel and External High Quality Graph Partitioning

    Partitioning graphs into k blocks of roughly equal size such that few edges run between the blocks is a key tool for processing and analyzing large complex real-world networks. The graph partitioning problem has multiple practical applications in parallel and distributed computations, data storage, image processing, VLSI physical design and many more. Furthermore, the size, variety, and structural complexity of real-world networks have recently grown dramatically. Therefore, there is a demand for efficient graph partitioning algorithms that fully utilize the computational power and memory capacity of modern machines. A popular and successful heuristic for computing high-quality partitions of large networks in reasonable time is the multi-level graph partitioning approach, which contracts the graph while preserving its structure and then partitions it using a complex graph partitioning algorithm. Specifically, the multi-level graph partitioning approach consists of three main phases: coarsening, initial partitioning, and uncoarsening. During the coarsening phase, the graph is recursively contracted, preserving its structure and properties, until it is small enough for its initial partition to be computed during the initial partitioning phase. Afterwards, during the uncoarsening phase, the partition of the contracted graph is projected onto the original graph and refined using, for example, local search. Most research on heuristic graph partitioning focuses on sequential algorithms or on parallel algorithms in the distributed memory model. Unfortunately, previous approaches to graph partitioning are not able to process large networks and rarely take into account several aspects of modern computational machines. Specifically, the number of cores per chip grows each year, while the price of RAM decreases more slowly than real-world graphs grow.
Since HDDs and SSDs are 50 to 400 times cheaper than RAM, external memory makes it possible to process large real-world graphs at a reasonable price. Therefore, in order to better utilize contemporary computational machines, we develop efficient multi-level graph partitioning algorithms for the shared-memory and external memory models. First, we present an approach to shared-memory parallel multi-level graph partitioning that guarantees balanced solutions, shows high speed-ups for a variety of large graphs and yields very good quality independently of the number of cores used. Important ingredients include parallel label propagation for both coarsening and uncoarsening, parallel initial partitioning, a simple yet effective approach to parallel localized local search, and fast locality-preserving hash tables that effectively utilize caches. The main idea of the parallel localized local search is that each processor refines only a small area around a random vertex, reducing interactions between processors. For example, on 79 cores, our algorithm partitions a graph with more than 3 billion edges into 16 blocks, cutting 4.5% fewer edges than the closest competitor while being more than two times faster; furthermore, another competitor is not able to partition this graph at all. We then present an approach to external memory graph partitioning that is able to partition large graphs that do not fit into RAM. Specifically, we consider the semi-external and the external memory models. In both models, a data structure of size proportional to the number of edges does not fit into RAM. The difference is that the former model assumes that a data structure of size proportional to the number of vertices fits into RAM, whereas the latter assumes the opposite.
We address the graph partitioning problem in both models by adapting the size-constrained label propagation technique for the semi-external model and by developing a size-constrained clustering algorithm based on graph coloring for the external memory model. Our semi-external size-constrained label propagation algorithm (and our external memory clustering algorithm) can be used to compute graph clusterings and is a prerequisite for the (semi-)external graph partitioning algorithm. These algorithms are then used in both the coarsening and the uncoarsening phases of a multi-level algorithm to compute graph partitions. Our (semi-)external algorithm is able to partition and cluster huge complex networks with billions of edges on cheap commodity machines. Experiments demonstrate that the semi-external graph partitioning algorithm is scalable and can compute high-quality partitions in time comparable to the running time of an efficient internal memory implementation. A parallelization of the algorithm in the semi-external model further reduces running times. Additionally, we develop a speed-up technique for hypergraph partitioning algorithms. Hypergraphs are an extension of graphs that allow a single edge to connect more than two vertices; they therefore describe models and processes more accurately, while also offering more possibilities for improvement. Most multi-level hypergraph partitioning algorithms perform some computations on vertices and their sets of neighbors. Since these computations can be super-linear, they have a significant impact on the overall running time on large hypergraphs. Therefore, to reduce the size of hyperedges, we develop a pin sparsifier based on the min-hash technique that clusters vertices with similar neighborhoods. Vertices that belong to the same cluster are then substituted by a single vertex connected to their neighbors, thereby reducing the size of the hypergraph.
Our algorithm sparsifies a hypergraph such that the result can be partitioned significantly faster without loss in quality (or with insignificant loss). On average, KaHyPar with the sparsifier performs partitioning about 1.5 times faster while preserving solution quality when hyperedges are large. All the aforementioned frameworks are publicly available.
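The min-hash clustering idea behind the pin sparsifier can be sketched roughly as follows (a simplified illustration, not the KaHyPar implementation; the hash construction and the example neighborhoods are invented):

```python
# Min-hash sketch: vertices with similar neighbor sets receive equal
# signatures with high probability and are grouped into one cluster,
# which can then be contracted to a single vertex.
import random

def minhash_signature(neighbors, hash_funcs):
    """Signature = per-hash minimum over the vertex's neighbor set."""
    return tuple(min(h(v) for v in neighbors) for h in hash_funcs)

def cluster_by_signature(neighborhoods, num_hashes=2, seed=0):
    rng = random.Random(seed)
    # Simple linear hash functions over integer vertex ids (illustrative only).
    params = [(rng.randrange(1, 10**9), rng.randrange(10**9)) for _ in range(num_hashes)]
    hash_funcs = [lambda v, a=a, b=b: (a * v + b) % (2**61 - 1) for a, b in params]
    clusters = {}
    for vertex, nbrs in neighborhoods.items():
        clusters.setdefault(minhash_signature(nbrs, hash_funcs), []).append(vertex)
    return list(clusters.values())

# Vertices 0 and 1 share an identical neighborhood, so they always collide
# and would be merged; vertex 2 stays in its own cluster.
neighborhoods = {0: {10, 11, 12}, 1: {10, 11, 12}, 2: {20, 21}}
clusters = cluster_by_signature(neighborhoods)
```

Identical neighborhoods always yield identical signatures, while dissimilar ones rarely do; that one-sided guarantee is what lets a sparsifier merge near-duplicate vertices cheaply without inspecting every pair.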