
    Robust Communications in Erlang

    Erlang is a dynamically-typed functional and concurrent programming language lauded by its proponents for its relatively simple syntax, process isolation, and fault tolerance. The functional aspect offers rich features such as pattern matching and tail-call optimisation, while the concurrent aspect uses isolated processes and asynchronous message passing to share state between system components. The two meet in pattern matching on mailboxes, which allows a process to pick a message from its mailbox, potentially out of order, based on its structure, value, type, or a mixture thereof. A strongly and dynamically typed language like Erlang can experience many kinds of runtime errors, such as ill-typed operands to arithmetic operators. The interaction between Erlang's type system and process mailboxes can lead to a subtler runtime error that is harder to detect: orphan messages. Because the types of messages are checked neither at compile time nor at runtime, a process can be sent a message which it will never receive. In essence, non-trivial type discrepancies in Erlang programs can cause subtle bugs when communication is involved. These problems are hard to detect and fix, and current solutions rely on extensive testing or exhaustive model checking. This thesis reports on work to detect communication-related type discrepancies in Erlang programs. A fragment of the Core Erlang intermediate format is modelled formally so that we can reason about out-of-order communication in Erlang systems, in particular the dependencies between sent messages when determining whether orphan messages exist. A subtyping relation based on Erlang's type system is then introduced to define the notion of an orphan message precisely, forming the foundation of a system for automatic detection via a mix of static analysis and runtime verification. This culminates in automatic tooling to detect certain cases of communication discrepancies via static analysis, and automatic instrumentation of concurrent programs to detect and recover from more complicated cases at runtime.
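    As a minimal sketch of the problem, the following Python model (invented here for illustration; the thesis works on Core Erlang, not Python) mimics a mailbox with selective receive and shows how an ill-typed message becomes an orphan that no receive pattern will ever consume:

        from dataclasses import dataclass, field

        @dataclass
        class Process:
            mailbox: list = field(default_factory=list)

        def send(proc, msg):
            # Asynchronous send: the message is simply queued in the mailbox.
            proc.mailbox.append(msg)

        def selective_receive(proc, patterns):
            # Scan the mailbox in order and consume the first message matching
            # any pattern, mimicking Erlang's out-of-order selective receive.
            for i, msg in enumerate(proc.mailbox):
                if any(p(msg) for p in patterns):
                    return proc.mailbox.pop(i)
            return None  # no match: every pending message stays queued

        worker = Process()
        send(worker, ("result", 42))
        send(worker, "stop")  # ill-typed: this receiver only matches tagged tuples

        patterns = [lambda m: isinstance(m, tuple) and m[0] == "result"]
        print(selective_receive(worker, patterns))  # ('result', 42)
        print(worker.mailbox)  # ['stop'] is an orphan: no pattern can ever match it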

    Heuristics for the refinement of assumptions in generalized reactivity formulae

    Reactive synthesis is concerned with automatically generating implementations from formal specifications. These specifications are typically written in the language of generalized reactivity (GR(1)), a subset of linear temporal logic capable of expressing the most common industrial specification patterns, and describe the required behavior of a system under assumptions about the environment in which the system is to be deployed. Often no implementation exists that guarantees the required behavior under all possible environments, typically because assumptions are missing (a situation usually referred to as unrealizability). To address this issue, new assumptions need to be added to complete the specification, a problem known as assumptions refinement. Since the space of candidate assumptions is intractably large, searching for the best solutions is inherently hard. In particular, new methods are needed to (i) increase the effectiveness of the search procedures, measured as the ratio between the number of solutions found and the number of refinements explored; and (ii) improve the quality of the results, defined as the weakness of the solutions. In this thesis we propose a set of heuristics to meet these goals, and a methodology for assessing and comparing assumptions refinement methods based on quantitative metrics. The heuristics take the form of algorithms that generate candidate refinements during the search, and quantitative measures that assess the quality of the candidates. We first discuss a heuristic method for generating assumptions that target the cause of unrealizability, by selecting candidate refinement formulas based on Craig interpolation. We provide a formal underpinning of the technique and evaluate it in terms of our new metric of effectiveness, as defined above, whose value improves on the state of the art; we demonstrate this on a set of popular benchmarks of embedded software. We then provide a formal, quantitative characterization of the permissiveness of environment assumptions in the form of a weakness measure, and prove that the partial order induced by this measure is consistent with the one induced by implication. The key advantage of this measure is that it allows candidate solutions to be prioritized, as we show experimentally. Lastly, we propose a notion of refinements that are minimal with respect to the observed counterstrategies. We demonstrate that exploring minimal refinements produces weaker solutions and reduces the amount of computation needed to explore each refinement, though possibly at the cost of a less effective search. To counteract this effect, we propose a hybrid approach in which both minimal and non-minimal refinements are explored.
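    For reference, GR(1) specifications in the literature commonly take the implication shape below (generic notation from the GR(1) literature, not necessarily the thesis's own): initial, safety, and fairness constraints on the environment imply the corresponding constraints on the system, and assumptions refinement strengthens the left-hand side until the specification becomes realizable. A refinement is weaker, and hence preferable, when it excludes fewer environment behaviors.

        % Common GR(1) shape: environment assumptions imply system guarantees.
        \left( \theta_e \wedge \Box \rho_e \wedge \bigwedge_{i=1}^{m} \Box \Diamond J^e_i \right)
        \;\rightarrow\;
        \left( \theta_s \wedge \Box \rho_s \wedge \bigwedge_{j=1}^{n} \Box \Diamond J^s_j \right)
        % \theta: initial conditions; \rho: transition (safety) constraints;
        % J: fairness (justice) goals. Refinement adds conjuncts on the left.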

    Digital Transformation of the Design, Construction and Management Processes of the Built Environment

    This open access book focuses on the development of methods, interoperable and integrated ICT tools, and survey techniques for optimal management of the building process. The construction sector is facing an increasing demand for major innovations in terms of digital dematerialization and technologies such as the Internet of Things, big data, advanced manufacturing, robotics, 3D printing, blockchain technologies and artificial intelligence. The demand for simplification and transparency in information management, and for the rationalization and optimization of very fragmented and splintered processes, is a key driver for digitization. The book describes the contribution of the ABC Department of the Polytechnic University of Milan (Politecnico di Milano) to R&D activities regarding methods and ICT tools for the interoperable management of the different phases of the building process, including design, construction, and management. Informative case studies complement the theoretical discussion. The book will be of interest to all stakeholders in the building process (owners, designers, constructors, and facility managers) as well as the research sector.

    Columbus State University Honors College: Senior Theses, Spring 2020

    This is a collection of senior theses written by honors students at Columbus State University in Spring 2020.

    A study on the Probabilistic Interval-based Event Calculus

    Complex Event Recognition is the subfield of Artificial Intelligence that aims to design and build systems which quickly process large and often heterogeneous streams of data and promptly deduce, based on definitions set by domain experts, the occurrence of non-trivial and interesting incidents. The purpose of such systems is to provide useful insight into involved and demanding situations that would otherwise be difficult to monitor, and to assist decision making. Uncertainty and noise are inherent in such data streams, so Probability Theory becomes necessary to deal with them. Probabilistic recognition of Complex Events can be done in a timepoint-based or an interval-based manner. This thesis focuses on PIEC, a state-of-the-art probabilistic, interval-based Complex Event Recognition algorithm. We present the algorithm and examine it in detail, studying its correctness through a series of mathematical proofs of its soundness and completeness. Afterwards, we provide a thorough experimental evaluation and comparison against point-based probabilistic Event Recognition methods. Our evaluation shows that PIEC consistently achieves better Recall, often at the expense of somewhat worse Precision. We then focus on cases where PIEC performs significantly better and cases where it falls short, in an effort to identify and state its main strengths and weaknesses. Finally, we set out general directions for further research on the topic, parts of which are already in progress.
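    As a rough illustration of what interval-based recognition buys over point-based recognition, the naive quadratic sketch below (in Python, with invented data; it is not the actual PIEC algorithm, which is far more efficient) reports maximal intervals whose average probability clears a threshold, so momentary dips in the instantaneous probabilities are absorbed into a single recognized interval:

        def maximal_intervals(probs, threshold):
            # For each start point, find the longest interval whose average
            # probability is at least the threshold.
            candidates = []
            for s in range(len(probs)):
                best_end, total = None, 0.0
                for e in range(s, len(probs)):
                    total += probs[e]
                    if total / (e - s + 1) >= threshold:
                        best_end = e
                if best_end is not None:
                    candidates.append((s, best_end))
            # Keep only intervals not contained in another candidate.
            return [iv for iv in candidates
                    if not any(o != iv and o[0] <= iv[0] <= iv[1] <= o[1]
                               for o in candidates)]

        print(maximal_intervals([0.9, 0.4, 0.8, 0.1, 0.2], threshold=0.5))
        # [(0, 3)]: the dips at points 1 and 3 are absorbed into one interval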

    Applications of Probabilistic Logic Programming using Problog2

    The main motivation for choosing this topic for the final-year project is to deepen the study of Artificial Intelligence, a subject covered throughout the degree that is of considerable importance today thanks to continuous advances which make it hard to put a limit on its capabilities. Focusing on probabilistic logic programming, many tools have been developed for it, such as PHA, PRISM, SLP, and MLN. All of these can attach probabilities to their logical formulas; however, they all carry limitations and restrictions, introduced to ease computation, that prevent the algorithms from working as precisely as possible. Problog2, on the other hand, is very simple to use, allows models to be written and answers obtained quickly, and can handle very large datasets. One of the main reasons for this, in contrast to the tools above, is that Problog2 bases its inference on a closed world: it works with high-level symbolic representations in which variables are always defined, which is what yields its fast and effective execution. Another important motivation for studying Problog2 is that it is one of the most modern probabilistic logic programming languages; its developers maintain a blog [3] where important improvements to the language are discussed, so the language is very much alive today and keeps being refined. For this reason we compare this language with the capabilities offered by others, to decide whether Problog2 is the right choice for developing our model. Our model for exploring the applications of probabilistic logic programming is based on activity recognition, which spans many possibilities; we focus on recognizing activities in surveillance videos. The activities occurring in such videos are fairly constrained, so the system does not have to speculate much to determine which activity is being performed. As stated, the model is as close an approximation as possible to a system capable of recognizing activities in real time. It recognizes activities from already-processed data taken from surveillance-video datasets, since developing a system for processing the raw data would be much more costly and, being mainly a Deep Learning problem, falls outside the scope of this work.
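    A minimal example of the kind of activity-recognition model in question, run through Problog2's Python API (pip install problog); the rule, constants, and probabilities below are invented for illustration and are not the model developed in this work:

        from problog.program import PrologString
        from problog import get_evaluatable

        model = PrologString("""
        0.8::close(p1, p2).     % noisy detections extracted from a video
        0.7::walking(p1).
        0.6::walking(p2).

        % two people walking close to each other are moving together
        moving_together(X, Y) :- walking(X), walking(Y), close(X, Y).

        query(moving_together(p1, p2)).
        """)

        # Inference multiplies the probabilities of the independent facts:
        # 0.7 * 0.6 * 0.8 = 0.336
        print(get_evaluatable().create_from(model).evaluate())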

    Using SWISH to realise interactive web based tutorials for logic based languages

    Programming environments have evolved from purely text-based to graphical user interfaces, and now we see a move towards web-based interfaces such as Jupyter. Web-based interfaces allow for the creation of interactive documents that consist of text and programs, as well as their output. The output can be rendered using web technology as, e.g., text, tables, charts, or graphs. This approach is particularly suitable for capturing data analysis workflows and creating interactive educational material. This article describes SWISH, a web front-end for Prolog that consists of a web server implemented in SWI-Prolog and a client web application written in JavaScript. SWISH provides a web server where multiple users can manipulate and run the same material, and it can be adapted to support Prolog extensions. In this paper we describe the architecture of SWISH and present two case studies of Prolog extensions, namely Probabilistic Logic Programming (PLP) and the Logic Production System (LPS), both of which use SWISH to provide tutorial sites.
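    SWISH is built on top of SWI-Prolog's Pengines library, so a client can in principle drive it over HTTP. The Python sketch below assumes the /pengine/create endpoint and the src_text/ask/format JSON fields as described in the Pengines documentation; both the endpoint and the field names should be verified against the target server before use:

        import json
        import urllib.request

        # Hypothetical payload: program text plus a goal to run against it.
        payload = {
            "src_text": "likes(alice, prolog).\n",
            "ask": "likes(Who, prolog)",
            "format": "json",
        }
        req = urllib.request.Request(
            "https://swish.swi-prolog.org/pengine/create",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp))  # the create event carries the first answer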

    A Holmes and Doyle Bibliography, Volume 9: All Formats—Combined Alphabetical Listing

    This bibliography is a work in progress. It attempts to update Ronald B. De Waal’s comprehensive bibliography, The Universal Sherlock Holmes, but does not claim to be exhaustive in content. New works are continually discovered and added to this bibliography. Readers and researchers are invited to suggest additional content. This volume contains all listings in all formats, arranged alphabetically by author or main entry. In other words, it combines the listings from Volume 1 (Monograph and Serial Titles), Volume 3 (Periodical Articles), and Volume 7 (Audio/Visual Materials) into a comprehensive bibliography. (There may be additional materials included in this list, e.g., duplicate items and items not yet fully edited.) As in the other volumes, coverage of this material begins around 1994, the final year covered by De Waal's bibliography, but may not yet be totally up to date, given the ongoing nature of this bibliography. It is hoped that other titles will be added at a later date. At present, this bibliography includes 12,594 items.