
    RT-OpenStack: CPU Resource Management for Real-Time Cloud Computing

    Clouds have become appealing platforms not only for general-purpose applications but also for real-time ones. However, current clouds cannot provide real-time performance to virtual machines (VMs). We observe both the demand for and the advantage of co-hosting real-time (RT) VMs with non-real-time (regular) VMs in the same cloud. RT VMs can benefit from the easily deployed, elastic resource provisioning of the cloud, while regular VMs effectively utilize the remaining resources without affecting the performance of RT VMs, thanks to proper resource management at both the cloud and the hypervisor levels. This paper presents RT-OpenStack, a cloud CPU resource management system for co-hosting real-time and regular VMs. RT-OpenStack makes three main contributions: (1) integration of a real-time hypervisor (RT-Xen) and a cloud management system (OpenStack) through a real-time resource interface; (2) a real-time VM scheduler that allows regular VMs to share hosts with RT VMs without interfering with the real-time performance of RT VMs; and (3) a VM-to-host mapping strategy that provisions real-time performance to RT VMs while allowing effective resource sharing with regular VMs. Experimental results demonstrate that RT-OpenStack can effectively improve the real-time performance of RT VMs while allowing regular VMs to fully utilize the remaining CPU resources.
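    As a rough illustration of what a VM-to-host mapping strategy of this kind might look like, the sketch below uses a first-fit heuristic in which RT VMs are placed against each host's schedulable CPU bandwidth and regular VMs only consume whatever bandwidth is left over. The host model, bandwidth numbers, and placement rule are illustrative assumptions, not RT-OpenStack's actual algorithm.

    ```python
    # Illustrative first-fit VM-to-host mapping: RT VMs are admitted against each
    # host's schedulable CPU bandwidth, while regular VMs only use leftover
    # bandwidth, so they cannot displace RT reservations.
    # (Hypothetical model; not RT-OpenStack's actual placement algorithm.)

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        cpu_capacity: float                # total CPU bandwidth (e.g., 4.0 = 4 cores)
        rt_alloc: float = 0.0              # bandwidth reserved by RT VMs
        rt_vms: list = field(default_factory=list)
        regular_vms: list = field(default_factory=list)

    def place_rt_vm(hosts, vm_name, rt_bandwidth):
        """Place an RT VM only if its bandwidth reservation still fits on some host."""
        for h in hosts:
            if h.rt_alloc + rt_bandwidth <= h.cpu_capacity:
                h.rt_alloc += rt_bandwidth
                h.rt_vms.append(vm_name)
                return h.name
        return None                        # no host can guarantee the reservation

    def place_regular_vm(hosts, vm_name):
        """Regular VMs share the slack; pick the host with the most unreserved bandwidth."""
        best = max(hosts, key=lambda h: h.cpu_capacity - h.rt_alloc)
        best.regular_vms.append(vm_name)
        return best.name

    hosts = [Host("host-1", 4.0), Host("host-2", 4.0)]
    print(place_rt_vm(hosts, "rt-vm-a", 2.5))      # host-1
    print(place_rt_vm(hosts, "rt-vm-b", 2.0))      # host-2 (does not fit on host-1)
    print(place_regular_vm(hosts, "web-vm"))       # host with the most leftover bandwidth
    ```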

    Performance Evaluation of Components Using a Granularity-based Interface Between Real-Time Calculus and Timed Automata

    To analyze complex and heterogeneous real-time embedded systems, recent works have proposed interface techniques between real-time calculus (RTC) and timed automata (TA), in order to exploit the strengths of each technique for analyzing different components. However, the time to analyze a state-based component modeled by TA may be prohibitively high due to the state-space explosion problem. In this paper, we propose a granularity-based interfacing framework to speed up the analysis of a TA-modeled component. First, we abstract fine models to work with event streams at coarse granularity. We analyze the component at multiple coarse granularities and then, based on RTC theory, derive lower and upper bounds on the arrival patterns of the fine output streams using the causality-closure algorithm. Our framework helps to achieve trade-offs between precision and analysis time.
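    As a rough illustration of working with event streams at coarse granularity, the sketch below samples RTC arrival curves only at multiples of a granularity g and uses their monotonicity to recover conservative bounds at finer time scales. The curve shapes and the granularity are assumptions; this is a simplification of the interfacing idea, not the causality-closure algorithm itself.

    ```python
    import math

    # Periodic-with-jitter upper/lower arrival curves (standard RTC shapes):
    # bounds on the number of events in any time window of length delta.
    def alpha_upper(delta, period=10.0, jitter=4.0):
        return math.ceil((delta + jitter) / period) if delta > 0 else 0

    def alpha_lower(delta, period=10.0, jitter=4.0):
        return max(0, math.floor((delta - jitter) / period)) if delta > 0 else 0

    # Coarse-granularity bounds: evaluate the curves only at multiples of g.
    # Arrival curves are non-decreasing in delta, so rounding delta up for the
    # upper curve and down for the lower curve remains conservative.
    def coarse_upper(delta, g):
        return alpha_upper(math.ceil(delta / g) * g)

    def coarse_lower(delta, g):
        return alpha_lower(math.floor(delta / g) * g)

    for delta in (3.0, 7.5, 23.0):
        print(delta,
              coarse_lower(delta, g=5.0), alpha_lower(delta),   # lower: coarse <= exact
              alpha_upper(delta), coarse_upper(delta, g=5.0))   # upper: exact <= coarse
    ```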

    The earlier the better: a theory of timed actor interfaces

    Programming embedded and cyber-physical systems requires attention not only to functional behavior and correctness, but also to non-functional aspects, specifically timing and performance. A structured, compositional, model-based approach based on stepwise refinement and abstraction techniques can support the development process, increase its quality, and reduce development time through automation of synthesis, analysis, or verification. Toward this end, we introduce a theory of timed actors whose notion of refinement is based on the principle of worst-case design that permeates the world of performance-critical systems. This is in contrast with classical behavioral and functional refinements, which are based on restricting sets of behaviors. Our refinement allows time-deterministic abstractions to be made of time-non-deterministic systems, improving efficiency and reducing the complexity of formal analysis. We show how our theory relates to, and can be used to reconcile, existing time and performance models and their established theories.
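    A very small sketch of the "earlier is better" principle, under the strong simplifying assumption that refinement is checked on matching output timestamp sequences; the paper's refinement relation is defined on timed actor interfaces, not on concrete traces like this.

    ```python
    # Simplified worst-case-design check: an implementation refines a
    # time-deterministic abstraction on a given output stream if every event is
    # produced no later than the abstraction's bound for that event.
    # (Illustrative only; not the paper's formal refinement relation.)

    def refines(timestamps_impl, timestamps_abstract):
        if len(timestamps_impl) != len(timestamps_abstract):
            return False
        return all(ti <= ta for ti, ta in zip(timestamps_impl, timestamps_abstract))

    abstract_bounds = [2.0, 5.0, 9.0]     # time-deterministic worst-case abstraction
    observed_run    = [1.5, 4.0, 9.0]     # one run of a time-non-deterministic system
    print(refines(observed_run, abstract_bounds))  # True: outputs are never later
    ```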

    Cache-Aware Real-Time Virtualization

    Virtualization has been adopted in diverse computing environments, ranging from cloud computing to embedded systems. It enables the consolidation of multi-tenant legacy systems onto a multicore processor for Size, Weight, and Power (SWaP) benefits. To be adopted in timing-critical systems, virtualization must provide real-time guarantees for tasks and virtual machines (VMs). However, existing virtualization technologies cannot offer such timing guarantees: tasks in VMs can interfere with each other through shared hardware components, and the CPU cache in particular is a major source of interference that is hard to analyze or manage. In this work, we focus on the challenges that cache-related interference poses to the real-time guarantees of virtualization systems. We propose cache-aware real-time virtualization, which provides both system techniques and theoretical analysis for tackling these challenges. We start with the challenge of private cache overhead and propose a private cache-aware compositional analysis. To tackle shared cache interference, we first consider non-virtualized systems, propose a shared cache-aware scheduler that lets the operating system co-allocate both CPU and cache resources to tasks, and develop the corresponding analysis. We then turn to virtualization systems and propose a dynamic cache management framework that hierarchically allocates the shared cache to tasks. Finally, we investigate resource allocation and analysis techniques that consider not only the cache but also CPU and memory bandwidth. Our solutions are applicable to commodity hardware and are essential steps toward advancing virtualization technology into timing-critical systems.
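    As a toy illustration of co-allocating CPU and cache resources, the sketch below admits a task onto a core only if that core still has both enough CPU bandwidth and enough unassigned cache partitions (for example, page colors). The task model, partition counts, and admission rule are illustrative assumptions, not the scheduler or analysis developed in this work.

    ```python
    # Toy CPU + shared-cache co-allocation: each task requests CPU utilization and
    # a number of cache partitions; it is admitted only if one core has both
    # enough spare CPU bandwidth and enough unassigned partitions.
    # (Assumption-heavy illustration, not the actual cache-aware scheduler.)

    class Core:
        def __init__(self, core_id, cache_partitions):
            self.core_id = core_id
            self.util = 0.0                      # admitted CPU utilization
            self.free_partitions = cache_partitions

    def admit(cores, task_name, util, partitions):
        for c in cores:
            if c.util + util <= 1.0 and c.free_partitions >= partitions:
                c.util += util
                c.free_partitions -= partitions
                return (task_name, c.core_id)
        return (task_name, None)                 # rejected: no core can hold both resources

    cores = [Core(0, cache_partitions=8), Core(1, cache_partitions=8)]
    print(admit(cores, "control_loop", util=0.6, partitions=5))   # lands on core 0
    print(admit(cores, "video_decode", util=0.5, partitions=6))   # lands on core 1
    print(admit(cores, "best_effort",  util=0.7, partitions=4))   # rejected in this toy setup
    ```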

    Cache-Aware Compositional Analysis of Real-Time Multicore Virtualization Platforms

    Multicore processors are becoming ubiquitous, and it is increasingly common to run multiple real-time systems on a shared multicore platform. While this trend helps to reduce cost and increase performance, it also makes it more challenging to achieve timing guarantees and functional isolation. One approach to achieving functional isolation is virtualization. However, virtualization also introduces many challenges to multicore timing analysis; for instance, the overhead due to cache misses becomes harder to predict, since it depends not only on the direct interference between tasks but also on the indirect interference between virtual processors and the tasks executing on them. In this paper, we present a cache-aware compositional analysis technique that can be used to ensure timing guarantees of components scheduled on a multicore virtualization platform. Our technique improves on previous multicore compositional analyses by accounting for cache-related overhead in the components’ interfaces, and it addresses the new virtualization-specific challenges in the overhead analysis. To demonstrate the utility of our technique, we report results from an extensive evaluation based on randomly generated workloads.
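    As a simplified worked example of accounting for cache-related overhead, the sketch below inflates each task's WCET by an assumed per-job cache-reload penalty and re-checks a standard rate-monotonic utilization bound. The overhead model and the Liu-and-Layland test are stand-ins for the paper's compositional, interface-based analysis, not its actual method.

    ```python
    # Inflate each task's WCET by a crude cache-related delay term and re-check a
    # Liu-and-Layland style utilization bound for rate-monotonic scheduling.
    # (Hypothetical overhead model; the paper instead folds cache overhead into
    #  multicore compositional interfaces.)

    def inflated_utilization(tasks, crpd):
        """tasks: list of (wcet, period); crpd: assumed cache-reload cost per job."""
        return sum((wcet + crpd) / period for wcet, period in tasks)

    def schedulable_rm(tasks, crpd):
        n = len(tasks)
        bound = n * (2 ** (1.0 / n) - 1)          # Liu-Layland bound for n tasks
        return inflated_utilization(tasks, crpd) <= bound

    tasks = [(1.0, 5.0), (2.0, 10.0), (3.0, 20.0)]
    print(inflated_utilization(tasks, crpd=0.0))   # 0.55 without cache overhead
    print(schedulable_rm(tasks, crpd=0.0))         # True
    print(schedulable_rm(tasks, crpd=1.0))         # False: overhead pushes it past the bound
    ```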