
    Timely Updates over an Erasure Channel

    Using an age of information (AoI) metric, we examine the transmission of coded updates through a binary erasure channel to a monitor/receiver. We start by deriving the average status update age of an infinite incremental redundancy (IIR) system in which the transmission of a k-symbol update continues until k symbols are received. This system is then compared to a fixed redundancy (FR) system in which each update is transmitted as an n-symbol packet and the packet is successfully received if and only if at least k symbols are received. If fewer than k symbols are received, the update is discarded. Unlike the IIR system, the FR system requires no feedback from the receiver. For a single-monitor system, we show that tuning the redundancy to the symbol erasure rate enables the FR system to perform as well as the IIR system. As the number of monitors is increased, the FR system outperforms the IIR system, which guarantees delivery of all updates to all monitors.
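
    The IIR/FR trade-off can be illustrated with a small Monte Carlo sketch. This is an assumption-laden toy in Python, not the paper's analysis: one symbol per unit-time slot, independent erasures with probability eps, and a fresh update generated the instant the previous one finishes (zero wait). All names and parameter values here are illustrative.

        import random

        def sim_iir(eps, k, slots, rng):
            # Infinite incremental redundancy: symbols of the current update
            # are sent until the receiver collects k of them (needs feedback).
            age_sum, last_gen, gen, received = 0.0, 0, 0, 0
            for t in range(1, slots + 1):
                if rng.random() > eps:       # symbol survives the channel
                    received += 1
                if received == k:            # delivered; generate the next update
                    last_gen, gen, received = gen, t, 0
                age_sum += t - last_gen      # instantaneous age at end of slot t
            return age_sum / slots           # time-averaged age

        def sim_fr(eps, k, n, slots, rng):
            # Fixed redundancy: each update is an n-symbol packet, decoded iff
            # at least k symbols arrive; otherwise the update is discarded.
            age_sum, last_gen, gen, received, sent = 0.0, 0, 0, 0, 0
            for t in range(1, slots + 1):
                if rng.random() > eps:
                    received += 1
                sent += 1
                if sent == n:                # packet complete
                    if received >= k:        # decodable: update delivered
                        last_gen = gen
                    gen, sent, received = t, 0, 0
                age_sum += t - last_gen
            return age_sum / slots

        rng = random.Random(1)
        k, eps = 100, 0.1
        print("IIR:", sim_iir(eps, k, 10**6, rng))
        for n in (110, 115, 120, 130):       # sweep the fixed redundancy
            print(f"FR n={n}:", sim_fr(eps, k, n, 10**6, rng))

    Sweeping n exposes the tuning described above: too little redundancy wastes whole packets on undecodable updates, while too much delays every delivery.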

    On Maximum Weight Clique Algorithms, and How They Are Evaluated

    Maximum weight clique and maximum weight independent set solvers are often benchmarked using maximum clique problem instances, with weights allocated to vertices by taking the vertex number mod 200, plus 1. For constraint programming approaches, this rule has clear implications, favouring weight-based rather than degree-based heuristics. We show that similar implications hold for dedicated algorithms, and that additionally, weight distributions affect whether certain inference rules are cost-effective. We look at other families of benchmark instances for the maximum weight clique problem, coming from winner determination problems, graph colouring, and error-correcting codes, and introduce two new families of instances, based upon kidney exchange and the Research Excellence Framework. In each case the weights carry much more interesting structure, and do not in any way resemble the 200 rule. We make these instances available in the hope of improving the quality of future experiments.
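
    For concreteness, the weight rule in question is a one-liner; the function name and 1-based vertex numbering below are assumptions of this sketch.

        def benchmark_weight(v):
            # The common benchmarking rule: vertex number mod 200, plus 1.
            return (v % 200) + 1

        # Weights cycle with period 200 as the vertex number grows, so every
        # block of 200 consecutive vertices repeats the same artificial weight
        # pattern: exactly the structure real instances turn out to lack.
        weights = [benchmark_weight(v) for v in range(1, 401)]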

    Solving hard subgraph problems in parallel

    This thesis improves the state of the art in exact, practical algorithms for finding subgraphs. We study maximum clique, subgraph isomorphism, and maximum common subgraph problems. These are widely applicable: within computing science, subgraph problems arise in document clustering, computer vision, the design of communication protocols, model checking, compiler code generation, malware detection, cryptography, and robotics; beyond, applications occur in biochemistry, electrical engineering, mathematics, law enforcement, fraud detection, fault diagnosis, manufacturing, and sociology. We therefore consider both the "pure" forms of these problems, and variants with labels and other domain-specific constraints. Although subgraph-finding should theoretically be hard, the constraint-based search algorithms we discuss can easily solve real-world instances involving graphs with thousands of vertices and millions of edges. We therefore ask: is it possible to generate "really hard" instances for these problems, and if so, what can we learn? By extending research into combinatorial phase transition phenomena, we develop a better understanding of branching heuristics, as well as highlighting a serious flaw in the design of graph database systems. This thesis also demonstrates how to exploit two of the kinds of parallelism offered by current computer hardware. Bit parallelism allows us to carry out operations on whole sets of vertices in a single instruction; this is largely routine. Thread parallelism, which makes use of the multiple cores offered by all modern processors, is more complex. We suggest three performance characteristics that we would like when introducing thread parallelism: lack of risk (parallel cannot be exponentially slower than sequential), scalability (adding more processing cores cannot make runtimes worse), and reproducibility (the same instance on the same hardware will take roughly the same time every time it is run). We then detail the difficulties in guaranteeing these characteristics when using modern algorithmic techniques. Besides ensuring that parallelism cannot make things worse, we also increase the likelihood of it making things better. We compare randomised work stealing to new tailored strategies, and perform experiments to identify the factors contributing to good speedups. We show that whilst load balancing is difficult, the primary factor influencing the results is the interaction between branching heuristics and parallelism. By using parallelism to explicitly offset the commitment made to weak early branching choices, we obtain parallel subgraph solvers which are substantially and consistently better than the best sequential algorithms.
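
    To make the bit parallelism mentioned above concrete, here is a minimal sketch (in Python rather than the C++ a solver would use, and not the thesis code): vertex sets live in bitsets, so restricting a candidate set to the neighbourhood of a chosen vertex costs a single bitwise AND.

        def neighbourhood_bitsets(n, edges):
            # Adjacency as one integer bitset per vertex: bit u of adj[v] is
            # set iff {u, v} is an edge.
            adj = [0] * n
            for u, v in edges:
                adj[u] |= 1 << v
                adj[v] |= 1 << u
            return adj

        def greedy_clique(n, adj):
            # Toy greedy clique: repeatedly take the lowest-numbered candidate
            # and shrink the candidate set with one AND per step.
            candidates = (1 << n) - 1                # all vertices
            clique = []
            while candidates:
                v = (candidates & -candidates).bit_length() - 1  # lowest bit
                clique.append(v)
                candidates &= adj[v]                 # keep only neighbours of v
            return clique

        edges = [(0, 1), (0, 2), (1, 2), (2, 3)]     # a triangle plus a pendant
        print(greedy_clique(4, neighbourhood_bitsets(4, edges)))  # [0, 1, 2]

    A real solver branches and bounds rather than committing greedily, but the set operations at its core look just like this.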

    Engineering Algorithms for Dynamic and Time-Dependent Route Planning

    Efficiently computing shortest paths is an essential building block of many mobility applications, most prominently route planning/navigation devices and applications. In this thesis, we apply the algorithm engineering methodology to design algorithms for route planning in dynamic (for example, considering real-time traffic) and time-dependent (for example, considering traffic predictions) problem settings. We build on and extend the popular Contraction Hierarchies (CH) speedup technique. With a few minutes of preprocessing, CH can optimally answer shortest path queries on continental-sized road networks with tens of millions of vertices and edges in less than a millisecond, i.e. around four orders of magnitude faster than Dijkstra’s algorithm. CH has already been extended to dynamic and time-dependent problem settings. However, these adaptations suffer from limitations. For example, the time-dependent variant of CH exhibits prohibitive memory consumption on large road networks with detailed traffic predictions. This thesis contains the following key contributions: First, we introduce CH-Potentials, an A*-based routing framework. CH-Potentials computes optimal distance estimates for A* using CH with a lower bound weight function derived at preprocessing time. The framework can be applied to any routing problem where appropriate lower bounds can be obtained. The achieved speedups range between one and three orders of magnitude over Dijkstra’s algorithm, depending on how tight the lower bounds are. Second, we propose several improvements to Customizable Contraction Hierarchies (CCH), the CH adaptation for dynamic route planning. Our improvements yield speedups of up to an order of magnitude. Further, we augment CCH to efficiently support essential extensions such as turn costs, alternative route computation and point-of-interest queries. Third, we present the first space-efficient, fast and exact speedup technique for time-dependent routing. Compared to the previous time-dependent variant of CH, our technique requires up to 40 times less memory, needs at most a third of the preprocessing time, and achieves only marginally slower query running times. Fourth, we generalize A* and introduce time-dependent A* potentials. This allows us to design the first approach for routing with combined live and predicted traffic, which achieves interactive running times for exact queries while allowing live traffic updates in a fraction of a minute. Fifth, we study extended problem models for routing with imperfect data and routing for truck drivers, and present efficient algorithms for these variants. Sixth and finally, we present various complexity results for non-FIFO time-dependent routing and the extended problem models.
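
    The pattern that CH-Potentials instantiates can be sketched briefly. The graph representation and the stub potential below are assumptions of this sketch; in CH-Potentials the potential would be answered by a CH query on the lower-bound weight function.

        import heapq

        def a_star(graph, source, target, potential):
            # graph[v] is a list of (neighbour, weight) pairs. potential(v)
            # must lower-bound the distance from v to target; it is assumed
            # consistent (as CH-derived potentials are), so the target can be
            # reported as soon as it is popped.
            dist = {source: 0}
            queue = [(potential(source), source)]
            while queue:
                key, v = heapq.heappop(queue)
                if v == target:
                    return dist[v]
                if key > dist[v] + potential(v):     # stale entry, skip
                    continue
                for u, w in graph[v]:
                    d = dist[v] + w
                    if d < dist.get(u, float("inf")):
                        dist[u] = d
                        heapq.heappush(queue, (d + potential(u), u))
            return float("inf")                      # target unreachable

        graph = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
        print(a_star(graph, 0, 2, lambda v: 0))      # 3; zero potential = Dijkstra

    The tighter the lower bound, the fewer vertices are settled, which is where the one-to-three-orders-of-magnitude range quoted above comes from.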

    Eight Biennial Report: April 2005 – March 2007


    A study on the Probabilistic Interval-based Event Calculus

    Complex Event Recognition is the subdivision of Artificial Intelligence that aims to design and construct systems that quickly process large and often heterogeneous streams of data and promptly deduce, based on definitions set by domain experts, the occurrence of non-trivial and interesting incidents. The purpose of such systems is to provide useful insights into involved and demanding situations that would otherwise be difficult to monitor, and to assist decision making. Uncertainty and noise are inherent in such data streams and therefore, Probability Theory becomes necessary in order to deal with them. The probabilistic recognition of Complex Events can be done in a timepoint-based or an interval-based manner. This thesis focuses on PIEC, a state-of-the-art probabilistic, interval-based Complex Event Recognition algorithm. We present the algorithm and examine it in detail. We study its correctness through a series of mathematical proofs of its soundness and completeness. Afterwards, we provide a thorough experimental evaluation and comparison to point-based probabilistic Event Recognition methods. Our evaluation shows that PIEC consistently displays better Recall, often at the expense of a generally worse Precision. We then focus on cases where PIEC performs significantly better and cases where it falls short, in an effort to detect and state its main strengths and weaknesses. Finally, we set the general directions for further research on the topic, parts of which are already in progress.
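
    A brute-force reference point helps pin down what interval-based recognition computes. Assuming the usual definition of a probabilistic maximal interval, one whose mean event probability reaches a threshold and which is contained in no larger such interval, the sketch below states the objective in O(n²); PIEC's contribution is computing the same set in linear time, and this code is not PIEC itself.

        def maximal_intervals(probs, threshold):
            # probs[t] is the instantaneous probability of the complex event
            # at timepoint t. Prefix sums turn the mean-probability test for
            # an interval [i, j] into one subtraction.
            n = len(probs)
            prefix = [0.0]
            for p in probs:
                prefix.append(prefix[-1] + p)
            ok = [(i, j)
                  for i in range(n) for j in range(i, n)
                  if prefix[j + 1] - prefix[i] >= threshold * (j - i + 1)]
            # keep only intervals not strictly contained in a qualifying one
            return [(i, j) for (i, j) in ok
                    if not any(a <= i and j <= b and (a, b) != (i, j)
                               for (a, b) in ok)]

        print(maximal_intervals([0.9, 0.2, 0.8, 0.1], 0.5))  # [(0, 3)]

    The example also hints at the Recall/Precision behaviour reported above: the low-probability timepoints 1 and 3 are swept into the interval, which helps Recall but can cost Precision.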

    Automated self-assembly programming paradigm

    Self-assembly is a ubiquitous process in nature in which a disordered set of components autonomously assembles into a complex and more ordered structure. Components interact with each other without central control or external intervention. Self-assembly is a rapidly growing research topic and has been studied in various domains, including nano-science and technology, robotics, and micro-electro-mechanical systems. Software self-assembly, on the other hand, has been lacking in research effort. In this research, I introduce the Automated Self-Assembly Programming Paradigm (ASAP²), a software self-assembly system whereby a set of human-made components is collected in a software repository and later integrated through self-assembly into a specific software architecture. The goal of this research is to push the understanding of software self-assembly and investigate whether it can complement current automatic programming approaches such as Genetic Programming. The research begins by studying the behaviour of unguided software self-assembly, a process loosely inspired by ideal gases. The effects of externally defined environmental parameters are then examined against the diversity of the assembled programs and the time needed for the system to reach its equilibrium. This analysis of software self-assembly then leads to a further investigation using a particle swarm optimization based embodiment of ASAP². In addition, a family of network structures is studied to examine how various network properties affect the course and result of software self-assembly. The thesis ends by examining software self-assembly far from equilibrium, embedded in assorted network structures. The main contributions of this thesis are: (1) a literature review of various approaches to the design of self-assembly systems, as well as some popular automatic programming approaches such as Genetic Programming; (2) a software self-assembly model in which software components move and interact with each other and eventually autonomously assemble into programs, an entirely new approach to automatic programming; (3) a detailed investigation of how the process and results of software self-assembly can be affected, tackled by deploying a variety of embodiments as well as a range of externally defined environmental variables. To the best of my knowledge, this is the first study on software self-assembly.
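
    As a toy caricature only (ASAP² is far richer; the type-pair components, the "soup" and the fusion rule here are all assumptions of this sketch), software self-assembly can be pictured as typed pipeline stages fusing on random encounters, with no central control:

        import random

        def self_assemble(components, steps, rng):
            # Components are (input_type, output_type) pipeline stages in a
            # "soup"; a random encounter fuses two chains whenever the first
            # chain's final output type matches the second's first input type.
            soup = [[c] for c in components]
            for _ in range(steps):
                if len(soup) < 2:
                    break
                a, b = rng.sample(range(len(soup)), 2)
                if soup[a][-1][1] == soup[b][0][0]:
                    soup[a].extend(soup[b])          # fuse chain b onto chain a
                    del soup[b]
            return soup

        rng = random.Random(0)
        parts = [("img", "edges"), ("edges", "lines"), ("lines", "shapes")]
        print(self_assemble(parts, 100, rng))        # typically one full pipeline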

    Reducing the risk of e-mail phishing in the state of Qatar through an effective awareness framework

    In recent years, cyber crime has focused intensely on people to bypass existing sophisticated security controls; phishing is one of the most common forms of such attack. This research highlights the problem of e-mail phishing. Much previous research has demonstrated the danger of phishing and its considerable consequences. Since users' behaviour is unpredictable, there is no reliable technological protective solution (e.g. spam filters, anti-viruses) to diminish the risk arising from inappropriate user decisions. Therefore, this research attempts to reduce the risk of e-mail phishing through awareness and education. It underlines the problem of e-mail phishing in the State of Qatar, one of the world's fastest developing countries, and seeks to provide a solution to enhance people's awareness of e-mail phishing by developing an effective awareness and educational framework. The framework consists of valuable recommendations for the Qatari government, citizens and organisations responsible for ensuring information security, along with an educational agenda to train them to identify and avoid phishing attempts. The educational agenda supports users in making better trust decisions to avoid phishing and could complement any technical solution. It comprises a collection of training methods: conceptual, embedded, e-learning and learning programmes, which include a television show and a learning session with a variety of teaching components such as a game, quizzes, posters, cartoons and a presentation. The components were tested by trial in two Qatari schools and evaluated by experts and a representative sample of Qatari citizens. Furthermore, the research establishes the existence and extent of the e-mail phishing problem in Qatar in comparison with the UK, where people were found to be less vulnerable and more aware. It was discovered that Qatar is an attractive place for phishers and that a lack of awareness and of e-law made Qatar more vulnerable to phishing. The research identifies the factors which make Qatari citizens susceptible to e-mail phishing attacks, such as cultural and country-specific factors, interests and beliefs, the effect of religion, and personal characteristics; this identified the need to enhance Qatari citizens' awareness of the phishing threat. Since literature on phishing in Qatar is sparse, the empirical and non-empirical studies involved a variety of surveys, interviews and experiments. The research successfully achieved its aim and objectives and is now being considered by the Qatari Government.