12 research outputs found

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 31st European Symposium on Programming, ESOP 2022, held during April 5-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 21 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. They deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Attention-based machine perception for intelligent cyber-physical systems

    Cyber-physical systems (CPS) fundamentally change the way information systems interact with the physical world. They integrate sensing, computing, and communication capabilities on heterogeneous platforms and infrastructures. Efficient and effective perception of the environment lays the foundation for the proper operation of other CPS components (e.g., planning and control). Recent advances in artificial intelligence (AI) have changed, to an unprecedented degree, how cyber systems extract knowledge from collected sensing data and understand their physical surroundings. This novel data-to-knowledge transformation capability pushes a wide spectrum of recognition tasks (e.g., visual object detection, speech recognition, and sensor-based human activity recognition) to a higher level, and opens a new era of intelligent cyber-physical systems. However, state-of-the-art neural perception models are typically computation-intensive and sensitive to data noise, which induces significant challenges when they are deployed on resource-limited embedded platforms. This dissertation optimizes both the efficiency and the efficacy of deep-neural-network (DNN)-based machine perception in intelligent cyber-physical systems. We extensively exploit and apply the design philosophy of attention, which originated in cognitive psychology, from multiple perspectives of machine perception; it generally means allocating different degrees of concentration to different perceived stimuli. Specifically, we address the following five research questions: First, can we run computation-intensive neural perception models in real time by only looking at (i.e., scheduling) the important parts of the perceived scenes, with cueing from an external sensor? Second, can we eliminate the dependency on external cueing and make the scheduling framework a self-cueing system? Third, how should workloads be distributed among cameras in a distributed (visual) perception system in which multiple cameras can observe the same parts of the environment? Fourth, how can the achieved perception quality be optimized when sensing data from heterogeneous locations and sensor types are collected and utilized? Fifth, how can sensor failures be handled in a distributed sensing system whose deployed neural perception models are sensitive to missing data? We formulate these problems and introduce a corresponding attention-based solution for each, constructing the fundamental building blocks of an attention-based machine perception system for intelligent CPS with both efficiency and efficacy guarantees.
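    The scheduling idea behind the first two questions can be illustrated with a minimal sketch: given per-region importance scores (from an external cue or a self-cueing saliency pass), process regions in order of importance until the frame's compute budget is exhausted. This is an illustrative sketch only, not the dissertation's actual system; the `Region` structure, the per-region cost model, and the 33 ms budget are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A candidate image region with an attention (importance) score."""
    box: tuple          # (x, y, w, h) in pixels
    score: float        # cueing signal: higher = more important
    est_cost_ms: float  # estimated DNN inference cost for this region

def schedule_regions(regions, budget_ms):
    """Greedily run the most important regions that fit the frame budget.

    A minimal sketch of attention-based scheduling: concentrate the
    limited compute budget on the highest-scoring stimuli first.
    """
    chosen, spent = [], 0.0
    for r in sorted(regions, key=lambda r: r.score, reverse=True):
        if spent + r.est_cost_ms <= budget_ms:
            chosen.append(r)
            spent += r.est_cost_ms
    return chosen  # regions to feed to the (expensive) perception model

# Example: a 33 ms frame budget only admits the two most salient regions.
regions = [Region((0, 0, 64, 64), 0.9, 20.0),
           Region((64, 0, 64, 64), 0.7, 12.0),
           Region((0, 64, 64, 64), 0.4, 15.0)]
print([r.box for r in schedule_regions(regions, budget_ms=33.0)])
```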

    Calibration of a maritime anomaly detection algorithm based on satellite data fusion

    Fusing different data sources significantly aids the decision-making process. This article describes the development of a platform that detects maritime anomalies by fusing data from the Automatic Identification System (AIS) for vessel tracking with Synthetic Aperture Radar (SAR) satellite imagery. These anomalies are presented to the operator as a set of detections that must be monitored to determine their nature. The detection process first identifies objects within the SAR images by applying CFAR algorithms, and then correlates the detected objects with the data reported through the AIS. In this work we report tests performed with different parameter configurations for the detection and association algorithms, analyze the platform's response, and report the parameter combination that yields the best results for the images used. This is a first step toward our future goal of developing a system that adjusts the parameters dynamically depending on the available images. XVI Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
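    A minimal one-dimensional sketch of the pipeline described above, assuming the cell-averaging (CA) variant of CFAR and a simple distance gate for the SAR/AIS association; cell indices stand in for geographic positions, and all function names and parameter values are illustrative rather than taken from the article.

```python
import numpy as np

def ca_cfar_1d(power, num_train=8, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1D power profile (sketch).

    The threshold factor follows the standard CA-CFAR formula
    alpha = N * (pfa**(-1/N) - 1) for N training cells.
    """
    n = len(power)
    N = 2 * num_train
    alpha = N * (pfa ** (-1.0 / N) - 1.0)
    hits = []
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[i - num_train - num_guard : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = (lead.sum() + lag.sum()) / N   # local noise estimate
        if power[i] > alpha * noise:
            hits.append(i)
    return hits

def flag_anomalies(detections, ais_reports, gate):
    """SAR detections with no AIS report within `gate` cells are anomalies."""
    return [d for d in detections
            if all(abs(d - a) > gate for a in ais_reports)]

# Toy example: two bright cells, only one of which matches an AIS report.
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 200)
profile[[60, 140]] = 40.0
dets = ca_cfar_1d(profile)
print(flag_anomalies(dets, ais_reports=[60], gate=3))  # expected: ~[140]
```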

    Cache-Aware Real-Time Virtualization

    Virtualization has been adopted in diverse computing environments, ranging from cloud computing to embedded systems. It enables the consolidation of multi-tenant legacy systems onto a multicore processor for Size, Weight, and Power (SWaP) benefits. To be adopted in timing-critical systems, however, virtualization must provide real-time guarantees for tasks and virtual machines (VMs), and existing virtualization technologies cannot offer such guarantees: tasks in VMs can interfere with each other through shared hardware components, and the CPU cache in particular is a major source of interference that is hard to analyze or manage. In this work, we focus on the challenges that cache-related interference poses to the real-time guarantees of virtualization systems. We propose cache-aware real-time virtualization, which provides both system techniques and theoretical analysis for tackling these challenges. We start with the challenge of private cache overhead and propose a private cache-aware compositional analysis. To tackle the challenge of shared cache interference, we start with non-virtualized systems, propose a shared cache-aware scheduler for operating systems that co-allocates both CPU and cache resources to tasks, and develop the corresponding analysis. We then investigate virtualization systems and propose a dynamic cache management framework that hierarchically allocates shared cache to tasks. After that, we further investigate resource allocation and analysis techniques that consider not only cache resources but also CPU and memory bandwidth. Our solutions are applicable to commodity hardware and are essential steps in advancing virtualization technology into timing-critical systems.
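    The co-allocation of CPU and cache resources can be sketched with a toy heuristic: hand out shared-cache ways one at a time to whichever task's utilization benefits most, then check a simple uniprocessor EDF utilization bound. This is a generic greedy illustration under assumed WCET-versus-ways tables, not the dissertation's actual scheduler or compositional analysis.

```python
def allocate_ways(tasks, total_ways):
    """Greedy co-allocation of shared-cache ways to tasks (sketch).

    Each task maps a way count to a WCET; give the next way to the
    task whose utilization drops the most (a common greedy heuristic).
    """
    ways = {t: 1 for t in tasks}              # every task needs >= 1 way
    spare = total_ways - len(tasks)
    for _ in range(spare):
        def gain(t):
            w = ways[t]
            if w + 1 not in tasks[t]["wcet"]:
                return 0.0                     # no WCET data beyond this point
            saved = tasks[t]["wcet"][w] - tasks[t]["wcet"][w + 1]
            return saved / tasks[t]["period"]  # utilization reduction
        best = max(tasks, key=gain)
        if gain(best) <= 0:
            break
        ways[best] += 1
    util = sum(tasks[t]["wcet"][ways[t]] / tasks[t]["period"] for t in tasks)
    return ways, util  # schedulable under uniprocessor EDF iff util <= 1

# WCET (ms) shrinks as a task receives more cache ways.
tasks = {
    "control": {"period": 10, "wcet": {1: 6.0, 2: 4.0, 3: 3.5}},
    "vision":  {"period": 40, "wcet": {1: 30.0, 2: 22.0, 3: 18.0}},
}
ways, util = allocate_ways(tasks, total_ways=4)
print(ways, f"U={util:.2f}")   # U <= 1.0 means the task set is schedulable
```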

    Deep learning for the internet of things

    The proliferation of IoT devices heralds the emergence of intelligent embedded ecosystems that can collectively learn and that interact with humans in a human-like fashion. Recent advances in deep learning have revolutionized related fields, such as vision and speech recognition, but the existing techniques remain far from efficient for resource-constrained embedded systems. This dissertation pioneers a broad research agenda on deep learning for IoT. By bridging state-of-the-art IoT and deep learning concepts, I hope to enable a future sensor-rich world that is smarter, more dependable, and friendlier, drawing on foundations borrowed from areas as diverse as sensing, embedded systems, machine learning, data mining, and real-time computing. Collectively, this dissertation addresses five research questions related to architecture, performance, predictability, and implementation. First, are current deep neural networks fundamentally well-suited to learning from the time-series data, collected from physical processes, that characterize IoT applications? If not, what architectural solutions and foundational building blocks are needed? Second, how can the resource consumption of deep learning models be reduced so that they can be efficiently deployed on IoT devices or edge servers? Third, how can the human cost of employing deep learning (namely, the cost of data labeling in IoT applications) be minimized? Fourth, how can uncertainty in deep learning outputs be predicted? Finally, how can deep learning services be designed to meet the responsiveness and quality needed by IoT systems? This dissertation elaborates on these core problems and their emerging solutions to help lay a foundation for building IoT systems enriched with effective, efficient, and reliable deep learning models.
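    Of the five questions, the fourth (predicting uncertainty in deep learning outputs) has a well-known general-purpose approach, Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated stochastic predictions as an uncertainty estimate. The sketch below illustrates the idea on a toy NumPy network; it is a generic technique with made-up weights, not necessarily the method developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer network with fixed random weights stands in for a
# trained model; only the dropout behaviour matters for this sketch.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept on at inference."""
    h = np.maximum(W1 @ x + b1, 0.0)               # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop           # random dropout mask
    h = h * mask / (1.0 - p_drop)                  # inverted-dropout scaling
    return (W2 @ h + b2)[0]

def predict_with_uncertainty(x, n_samples=100):
    """Monte Carlo dropout: mean = prediction, std = uncertainty proxy."""
    ys = np.array([forward(x) for _ in range(n_samples)])
    return ys.mean(), ys.std()

mean, std = predict_with_uncertainty(np.array([0.5, -1.0, 0.3, 0.8]))
print(f"prediction={mean:.3f} +/- {std:.3f}")
```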

    Adaptive streaming applications : analysis and implementation models

    This thesis presents a highly automated design framework, called DaedalusRT, along with several novel techniques. As the foundation of the DaedalusRT design framework, two types of dataflow Models of Computation (MoCs) are used: one as the timing analysis model and the other as the implementation model. The timing analysis model is used to reason formally about the timing behavior of an application. In the context of DaedalusRT, the Mode-Aware Data Flow (MADF) MoC has been developed as the timing analysis model for adaptive streaming applications with different static modes. A novel mode transition protocol is devised to allow efficient reasoning about timing behavior during mode transitions, and a hard real-time scheduling approach is proposed on top of this protocol. The implementation model, on the other hand, is used for efficient code generation of parallel computation, communication, and synchronization. In this thesis, the Parameterized Polyhedral Process Network (P3N) MoC has been developed to model adaptive streaming applications with parameter reconfiguration, and an approach to verify the functional properties of the P3N MoC has been devised. Finally, an implementation of the P3N MoC on an MPSoC platform has shown that the run-time performance penalty due to parameter reconfiguration is negligible. Technology Foundation STW. Computer Systems, Imagery and Media.
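    The flavor of a parameterized dataflow MoC can be conveyed with a toy process network: actors fire when enough tokens are available on their input FIFOs, and a rate parameter can be reconfigured between runs. This sketch is only loosely inspired by P3N; the three-actor pipeline, the `Channel` class, and the `downsample` parameter are illustrative assumptions, not the thesis's actual semantics.

```python
from collections import deque

class Channel:
    """Unbounded FIFO connecting two dataflow actors."""
    def __init__(self):
        self.q = deque()
    def put(self, *tokens): self.q.extend(tokens)
    def can_get(self, n): return len(self.q) >= n
    def get(self, n): return [self.q.popleft() for _ in range(n)]

def run(n_iterations, downsample):
    """Fire a 3-actor pipeline; `downsample` is the reconfigurable parameter.

    source -> filter -> sink, where the filter consumes `downsample`
    tokens and produces their average (a parameterized consumption rate).
    """
    a, b = Channel(), Channel()
    out = []
    for i in range(n_iterations * downsample):
        a.put(float(i))                          # source actor fires
        if a.can_get(downsample):                # filter actor fires
            block = a.get(downsample)
            b.put(sum(block) / downsample)
        if b.can_get(1):                         # sink actor fires
            out.extend(b.get(1))
    return out

print(run(3, downsample=2))   # averages of consecutive pairs
print(run(2, downsample=3))   # same graph after parameter reconfiguration
```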

    The Design, Analysis, & Application Of Multi-Modal Real-Time Embedded Systems

    For many hand-held computing devices (e.g., smartphones), multiple operational modes are preferred because of their flexibility. In addition to their designated purposes, some of these devices provide a platform for different types of services, including the rendering of high-quality multimedia. On such devices, temporal isolation among co-executing applications is very important to ensure that each application receives an acceptable level of quality of service. To provide strong guarantees on services, multimedia applications and real-time control systems maintain timing constraints in the form of deadlines for recurring tasks. A flexible real-time multi-modal system will ideally give system designers the option to change both resource-level modes and application-level modes. Existing schedulability analyses for real-time multi-modal systems (MMSs) with software/hardware modes are computationally intractable. In addition, a fast schedulability analysis is desirable in design-space exploration that determines the best parameters of a multi-modal system. The thesis of this dissertation is: determining resource parameters with guaranteed schedulability for real-time systems that may change computational requirements over time is expensive in terms of runtime; however, decoupling schedulability analysis from determining the minimum processing resource parameters of a real-time multi-modal system results in pseudo-polynomial complexity for the combined goals of determining MMS schedulability and optimal resource parameters. Effective schedulability analysis and optimized resource usage are essential for an MMS that may co-execute with other applications, in order to reduce the size and cost of an embedded system. Traditional real-time systems research has addressed the issues of schedulability under mode changes and of temporal isolation separately and independently. For instance, schedulability analysis of real-time multi-mode systems has commonly assumed that the system executes upon a dedicated platform. On the other hand, research on temporal isolation in real-time scheduling has often assumed that the application and resource requirements of each subsystem are fixed at runtime. Only recently have researchers started to address the problem of guaranteeing hard deadlines of temporally isolated subsystems for multi-modal systems. However, most of this research suffers from two fundamental drawbacks: 1) full support for resource- and application-level mode changes does not exist, and/or 2) determining schedulability for such systems has exponential-time complexity. As a result, the current literature cannot guarantee optimal resource usage for multi-modal systems. In this dissertation, we address these two fundamental drawbacks by providing a theoretical framework and an associated tractable schedulability analysis for hard real-time multi-modal subsystems. Then, by leveraging the schedulability analysis, we address the problem of optimizing a multi-modal system with respect to resource usage. To accelerate the schedulability analysis, we develop a parallel algorithm using the Message Passing Interface (MPI) to check the invariants of a schedulable real-time MMS. This parallel algorithm significantly improves the execution time of the schedulability check (e.g., our parallel algorithm requires only approximately 45 minutes to analyze a 16-mode system upon 8 cores, whereas the analysis takes 9 hours when executed on a single core).
    However, even this reduction is still expensive for techniques such as design-space exploration (DSE), which repeatedly applies schedulability analysis to determine the optimal system resource parameters. Today's massively parallel GPU platforms can be a cost-effective alternative for scaling the number of compute nodes and further reducing the computation time. An efficient GPU-based schedulability analysis can also be used online to reconfigure the system by re-evaluating schedulability when parameters change dynamically. In this dissertation, we also extend our parallel schedulability analysis algorithm to a GPU. Finally, we performed a case study of a radar-assisted cruise control system to show the usability of a multi-modal system consisting of fixed-priority non-preemptive tasks.
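    The parallel analysis can be sketched by checking the modes concurrently, one invariant check per worker. The sketch below uses Python's multiprocessing pool as a single-machine stand-in for the MPI design, and a simple EDF utilization test as a stand-in for the much heavier per-mode invariant check (the real analysis must also cover mode transitions); the task sets are invented.

```python
from multiprocessing import Pool

def mode_schedulable(mode):
    """Utilization test for one mode (EDF, implicit deadlines): U <= 1.

    A lightweight stand-in for the per-mode invariant check; each
    worker analyzes one mode independently, as in the MPI design.
    """
    name, tasks = mode
    util = sum(wcet / period for wcet, period in tasks)
    return name, util <= 1.0

if __name__ == "__main__":
    modes = {
        "cruise":   [(2.0, 10.0), (5.0, 20.0)],           # (WCET, period)
        "approach": [(4.0, 10.0), (5.0, 20.0), (6.0, 40.0)],
        "brake":    [(6.0, 10.0), (9.0, 20.0)],
    }
    with Pool() as pool:                       # one worker per CPU core
        for name, ok in pool.map(mode_schedulable, modes.items()):
            print(f"mode {name}: {'schedulable' if ok else 'NOT schedulable'}")
```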