
    VCDC: The Virtualized Complicated Device Controller

    I/O virtualization enables time and space multiplexing of I/O devices by mapping multiple logical I/O devices onto a smaller number of physical devices. However, due to the additional virtualization layers, requesting I/O from a guest virtual machine requires a complicated sequence of operations. This leads to I/O performance losses and makes the precise timing of I/O operations unpredictable. This paper proposes a hardware I/O virtualization system, termed the Virtualized Complicated Device Controller (VCDC). This I/O system allows user applications to access and operate I/O devices directly from guest VMs, bypassing the guest OS, the Virtual Machine Monitor (VMM) and low-layer I/O drivers. We show that the VCDC efficiently reduces software overhead and enhances I/O performance and timing predictability. Furthermore, the VCDC exhibits good scalability and can handle I/O requests from a variable number of CPUs in a system.
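
    To make the idea of bypassing the software stack concrete, the following C sketch shows a guest user application mapping a device's register window and driving it directly, in the spirit of the direct access the VCDC provides. The device node (/dev/uio0), the register offsets and the busy bit are assumptions for illustration, not the VCDC's actual interface.

```c
/* Minimal sketch: a guest user application maps an I/O device's register
 * window and programs it directly, with no guest OS driver or VMM trap on
 * the data path.  The device node and register layout are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REG_WINDOW_SIZE 0x1000   /* assumed size of the mapped register page */
#define REG_CTRL        0x00     /* hypothetical control register offset     */
#define REG_DATA        0x04     /* hypothetical data register offset        */

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR | O_SYNC);   /* assumed device node */
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *regs = mmap(NULL, REG_WINDOW_SIZE,
                                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Issue an I/O request by writing registers directly. */
    regs[REG_DATA / 4] = 0xDEADBEEF;
    regs[REG_CTRL / 4] = 1;              /* start the operation            */

    while (regs[REG_CTRL / 4] & 1)       /* poll a hypothetical busy bit   */
        ;

    munmap((void *)regs, REG_WINDOW_SIZE);
    close(fd);
    return 0;
}
```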

    Traces as a Solution to Pessimism and Modeling Costs in WCET Analysis

    WCET analysis models for superscalar out-of-order CPUs generally need to be pessimistic in order to account for a wide range of possible dynamic behavior. CPU hardware modifications could be used to constrain operations to known execution paths called traces, permitting exploitation of instruction-level parallelism with guaranteed timing. Previous implementations of traces have used microcode to constrain operations, but other possibilities exist. A new implementation strategy (virtual traces) is introduced here. In this paper the benefits and costs of traces are discussed. Advantages of traces include a reduction in pessimism in WCET analysis and the removal of the need to accurately model CPU internals. Disadvantages of traces include a reduction in the peak throughput of the CPU, a need for deterministic memory and a potential increase in the complexity of WCET models.
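
    The toy C calculation below illustrates the pessimism argument: a per-instruction worst-case model must charge every load its miss penalty, whereas a trace-constrained block has a single fixed cycle count. All instruction counts and latencies are invented for the example and are not taken from the paper.

```c
/* Toy illustration (not from the paper) of why per-instruction worst-case
 * modelling is pessimistic compared with a trace whose timing is fixed by
 * hardware.  All numbers below are assumptions. */
#include <stdio.h>

int main(void)
{
    /* Assumed basic block: 40 instructions, of which 8 are loads. */
    const int n_instr = 40, n_loads = 8;

    /* Pessimistic model: every load may miss the cache (say 30 cycles),
     * every other instruction costs its worst-case issue latency (2 cycles),
     * and no instruction-level parallelism can be guaranteed. */
    int pessimistic = n_loads * 30 + (n_instr - n_loads) * 2;

    /* Trace-constrained block: the path and issue schedule are fixed by the
     * hardware, so the block always completes in the same number of cycles,
     * here assumed to be 55 (ILP is exploited but timing is known). */
    int trace_cycles = 55;

    printf("pessimistic WCET bound: %d cycles\n", pessimistic);
    printf("trace-constrained WCET: %d cycles\n", trace_cycles);
    return 0;
}
```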

    Improving efficiency of persistent storage access in embedded Linux

    Real-time embedded systems increasingly need to process and store large volumes of persistent data, requiring fast, timely and predictable storage. Traditional methods of accessing storage through general-purpose, operating-system-based file systems do not provide the performance and timing predictability needed. This paper firstly examines the speed and consistency of SSD operations in an embedded Linux system, identifying areas where inefficiencies in the storage stack cause issues for performance and predictability. Secondly, the CharIO storage device driver is proposed, which bypasses Linux file systems and the kernel block layer in order to increase performance and provide improved timing predictability.
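
    The following C sketch is a minimal latency measurement of the kind that motivates the paper: it times a series of O_DIRECT writes issued through the ordinary Linux file system and block layer so that per-write jitter becomes visible. It is not the CharIO driver; the target path, block size and write count are assumptions, and the file system used must support O_DIRECT.

```c
/* Measurement sketch: time direct writes through the normal Linux storage
 * stack to observe latency variation.  Path and sizes are assumptions. */
#define _GNU_SOURCE              /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096
#define N_WRITES   64

static long elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_nsec - a.tv_nsec) / 1000L;
}

int main(void)
{
    /* O_DIRECT skips the page cache so device latency is visible; the file
     * still goes through the file system and the kernel block layer. */
    int fd = open("/tmp/chario_test.bin", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) return 1;
    memset(buf, 0xAA, BLOCK_SIZE);

    for (int i = 0; i < N_WRITES; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (pwrite(fd, buf, BLOCK_SIZE, (off_t)i * BLOCK_SIZE) != BLOCK_SIZE) {
            perror("pwrite");
            break;
        }
        fdatasync(fd);                      /* force the write to the device */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("write %2d: %ld us\n", i, elapsed_us(t0, t1));
    }

    free(buf);
    close(fd);
    return 0;
}
```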

    Flexible scheduling of hard real-time systems.


    Implementing time-predictable load and store operations

    Scratchpads have been widely proposed as an alternative to caches for embedded systems. Advantages of scratchpads include reduced energy consumption in comparison to a cache and access latencies that are independent of the preceding memory access pattern. The latter property makes memory accesses time-predictable, which is useful for hard real-time tasks as the worst-case execution time (WCET) must be safely estimated in order to check that the system will meet timing requirements. However, data must be explicitly moved between scratchpad and external memory as a task executes in order to make best use of the limited scratchpad space. When dynamic data is moved, issues such as pointer aliasing and pointer invalidation become problematic. Previous work has proposed solutions that are not suitable for hard real-time tasks because memory accesses are not time-predictable. This paper proposes the scratchpad memory management unit (SMMU) as an enhancement to scratchpad technology. The SMMU implements an alternative solution to the pointer aliasing and pointer invalidation problems which (1) does not require whole-program pointer analysis and (2) makes every memory access operation time-predictable. This allows WCET analysis to be applied to hard real-time tasks which use a scratchpad and dynamic data, but results are also applicable in the wider context of minimizing energy consumption or average execution time. Experiments using C software show that the combination of an SMMU and scratchpad compares favorably with the best- and worst-case performance of a conventional data cache.
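
    As a software analogue of this idea (an illustrative sketch only, not the SMMU hardware design), the C example below accesses dynamic data through handles that are resolved by a small translation table, so an object can be copied into the scratchpad and its entry updated without invalidating any reference held by user code. The table size and memory regions are invented for the example.

```c
/* Handle-based access: references are indices into a translation table, so
 * data can move between "scratchpad" and "external" memory without
 * invalidating references.  Sizes and buffers are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N_HANDLES 16

static uint8_t external_mem[1024];    /* stands in for off-chip memory    */
static uint8_t scratchpad[256];       /* stands in for on-chip scratchpad */
static void *translation[N_HANDLES];  /* handle -> current location       */

/* Every access goes through one table lookup: a constant, predictable cost. */
static void *deref(int handle) { return translation[handle]; }

/* Copy an object into the scratchpad and update its translation entry;
 * existing handles stay valid because they never held a raw address. */
static void move_to_scratchpad(int handle, size_t size, size_t spm_off)
{
    memcpy(&scratchpad[spm_off], translation[handle], size);
    translation[handle] = &scratchpad[spm_off];
}

int main(void)
{
    translation[0] = &external_mem[0];
    strcpy(deref(0), "dynamic data");

    move_to_scratchpad(0, 13, 0);        /* relocate without invalidation */
    printf("%s\n", (char *)deref(0));    /* same handle, new location     */
    return 0;
}
```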

    BlueIO: A Scalable Real-Time Hardware I/O Virtualization System for Many-core Embedded Systems

    In safety-critical systems, time predictability is vital. This extends to I/O operations, which require predictability, timing accuracy, parallel access, scalability and isolation. Currently, existing approaches cannot achieve all these requirements at the same time. In this paper, we propose a hardware framework for real-time I/O virtualization, termed BlueIO, that meets all these requirements simultaneously. BlueIO integrates the functionalities of I/O virtualization, low-layer I/O drivers and a clock-cycle-level timing-accurate I/O controller (using the GPIOCP). BlueIO provides this functionality in the hardware layer, supporting abstract virtualized access to I/O from the software domain. The hardware implementation includes I/O virtualization and I/O drivers, provides isolation and parallel (concurrent) access to I/O operations, and improves I/O performance. Furthermore, the approach includes the previously proposed GPIOCP to guarantee that I/O operations will occur at a specific clock cycle (i.e. be timing-accurate and predictable). In this paper, we present a hardware consumption analysis of BlueIO, showing that it scales linearly with the number of CPUs and I/O devices, which is evidenced by our implementation in VLSI and FPGA. We also describe the design and implementation of BlueIO, and demonstrate how a BlueIO-based system can be exploited to meet real-time requirements with significant improvements in I/O performance and a low running cost on different OSs.
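
    The C sketch below suggests what a timing-accurate command interface of this kind could look like from software: each command carries the absolute clock cycle at which the hardware must perform the I/O operation. The device node (/dev/blueio0), the command descriptor layout and the queue size are hypothetical, not BlueIO's or the GPIOCP's actual interface.

```c
/* Sketch of a cycle-accurate I/O command queue: software states *when* an
 * operation must happen and the hardware releases it at that cycle.  The
 * device node and descriptor layout are assumptions for illustration. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct io_cmd {                /* hypothetical command descriptor          */
    uint32_t pin;              /* which I/O line to drive                  */
    uint32_t value;            /* value to output                          */
    uint64_t release_cycle;    /* clock cycle at which to perform the I/O  */
};

int main(void)
{
    int fd = open("/dev/blueio0", O_RDWR | O_SYNC);   /* assumed device node */
    if (fd < 0) { perror("open"); return 1; }

    volatile struct io_cmd *queue =
        mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (queue == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Enqueue: drive pin 3 high at an absolute cycle count. */
    queue[0].pin = 3;
    queue[0].value = 1;
    queue[0].release_cycle = 1000000000ULL;

    munmap((void *)queue, 4096);
    close(fd);
    return 0;
}
```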

    MCS-IOV: Real-time I/O virtualization for mixed-criticality systems

    In mixed-criticality systems, timely handling of I/O is key to the system being successfully implemented and functioning appropriately. The criticality levels of functions, and sometimes of the whole system, are often dependent on the state of the I/O. An I/O system for an MCS must simultaneously provide isolation/separation, performance/efficiency and timing predictability, as well as being able to manage I/O resources in an adaptive manner to facilitate efficient yet safe resource sharing among components of different criticality levels. Existing approaches cannot achieve all of these requirements simultaneously. This paper presents an MCS I/O management framework, termed MCS-IOV. MCS-IOV is based on hardware-assisted virtualisation, which provides temporal and spatial isolation and prohibits fault propagation with a small extra performance overhead. MCS-IOV extends a real-time I/O virtualisation system by supporting the concept of mixed criticalities and customised interfaces for schedulers, which offers good timing predictability. MCS-IOV supports an I/O-driven criticality mode switch (the mode switch can be triggered by the detection of unexpected I/O behaviors, e.g. a higher I/O utilization than expected) and timely I/O resource reconfiguration upon it. Finally, we evaluate and demonstrate MCS-IOV in different aspects.
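
    The short C sketch below illustrates an I/O-driven criticality mode switch in the abstract: a monitor compares observed I/O utilization against the budget assumed in low-criticality mode and, when the budget is exceeded, switches to high-criticality mode and triggers I/O reconfiguration. The budgets, samples and reconfiguration action are invented; this is not MCS-IOV's implementation.

```c
/* Illustrative I/O-driven criticality mode switch.  All numbers invented. */
#include <stdio.h>

enum mode { MODE_LO, MODE_HI };

static enum mode current_mode = MODE_LO;

static void reconfigure_io(enum mode m)
{
    /* Placeholder for timely I/O resource reconfiguration: in HI mode,
     * low-criticality I/O flows would be throttled or suspended. */
    printf("reconfiguring I/O for %s-criticality mode\n",
           m == MODE_HI ? "high" : "low");
}

static void monitor_window(double observed_util, double lo_budget)
{
    if (current_mode == MODE_LO && observed_util > lo_budget) {
        current_mode = MODE_HI;            /* unexpected I/O behaviour */
        reconfigure_io(current_mode);
    }
}

int main(void)
{
    const double lo_budget = 0.40;         /* assumed LO-mode I/O budget */
    double samples[] = { 0.25, 0.31, 0.38, 0.52, 0.47 };

    for (int i = 0; i < 5; i++) {
        printf("window %d: utilization %.2f (mode %s)\n",
               i, samples[i], current_mode == MODE_HI ? "HI" : "LO");
        monitor_window(samples[i], lo_budget);
    }
    return 0;
}
```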

    Architecting Time-Critical Big-Data Systems

    Current infrastructures for developing big-data applications are able to process, via big-data analytics, huge amounts of data, using clusters of machines that collaborate to perform parallel computations. However, current infrastructures were not designed to meet the requirements of time-critical applications; they are focused on general-purpose applications rather than time-critical ones. Addressing this issue from the perspective of the real-time systems community, this paper considers time-critical big data. It deals with the definition of a time-critical big-data system from the point of view of requirements, analyzing the specific characteristics of some popular big-data applications. This analysis is complemented by the challenges stemming from the infrastructures that support the applications; the paper proposes an architecture and offers initial performance patterns that connect application costs with infrastructure performance.
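
    A back-of-envelope C sketch of the kind of performance pattern the abstract refers to is given below: it checks whether a periodic analytics job's estimated response time (startup overhead plus data volume divided by aggregate cluster throughput) fits within its period. All parameters and the cost model itself are assumptions, not results from the paper.

```c
/* Toy cost model connecting application cost with infrastructure
 * performance for a periodic analytics job.  All values are assumptions. */
#include <stdio.h>

int main(void)
{
    /* Assumed application parameters. */
    double batch_gb = 50.0;      /* data volume per activation        */
    double period_s = 60.0;      /* activation period and deadline    */

    /* Assumed infrastructure parameters. */
    int    nodes     = 8;
    double gb_per_s  = 0.15;     /* sustained per-node throughput     */
    double startup_s = 5.0;      /* job scheduling / startup overhead */

    double response_s = startup_s + batch_gb / (nodes * gb_per_s);

    printf("estimated response time: %.1f s (deadline %.1f s) -> %s\n",
           response_s, period_s,
           response_s <= period_s ? "schedulable" : "NOT schedulable");
    return 0;
}
```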