
    Limited Preemptive Scheduling for Real-Time Systems: a Survey

    The question of whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated in the research community for a long time. In fact, especially under fixed-priority systems, each approach has advantages and disadvantages, and neither dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.
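
    One limited-preemption model covered by this line of work is preemption thresholds, in which a running task can only be preempted by tasks whose priority exceeds a per-task threshold. The sketch below illustrates only that gating decision; the task set and threshold values are hypothetical and not taken from the survey.

```python
# Minimal sketch of a preemption-threshold check (illustrative; not the survey's code).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int              # larger value = higher priority
    preemption_threshold: int  # >= priority; raises the bar for preempting this task

def should_preempt(running: Task, ready: Task) -> bool:
    """A newly ready task preempts only if it beats the running task's threshold."""
    return ready.priority > running.preemption_threshold

# Hypothetical task set: t_mid cannot preempt t_low (limited preemption),
# but t_high still can (preemption is reduced, not forbidden).
t_low  = Task("t_low",  priority=1, preemption_threshold=5)
t_mid  = Task("t_mid",  priority=4, preemption_threshold=4)
t_high = Task("t_high", priority=9, preemption_threshold=9)

assert not should_preempt(t_low, t_mid)
assert should_preempt(t_low, t_high)
```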

    A MDE-based optimisation process for Real-Time systems

    The design and implementation of Real-Time Embedded Systems now relies heavily on Model-Driven Engineering (MDE) as a central place to define and then analyse or implement a system. MDE toolchains play a key role in gathering most functional and non-functional properties in a central framework and then exploiting this information. Such a toolchain is based on both 1) a modeling notation and 2) companion tools to transform or analyse models. In this paper, we present an MDE-based process for system optimisation based on an architectural description. We first define a generic evaluation pipeline, define a library of elementary transformations, and then show how to use it through a Domain-Specific Language to evaluate and then transform models. We illustrate this process on an AADL case study modeling a Generic Avionics Platform.
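
    As a rough sketch of what an "evaluate, then transform" pipeline can look like, the snippet below greedily applies elementary transformations from a library and keeps a candidate model only when its evaluation improves. The types and function names are hypothetical; they are not the DSL or transformations described in the paper.

```python
# Illustrative sketch of a generic evaluation/transformation pipeline (hypothetical names).
from typing import Callable, Dict, List, Tuple

Model = Dict[str, object]                  # placeholder architectural model
Transformation = Callable[[Model], Model]  # elementary model rewrite
Evaluator = Callable[[Model], float]       # lower score = better architecture

def optimise(model: Model, library: List[Transformation], evaluate: Evaluator) -> Tuple[Model, float]:
    """Repeatedly apply the transformation that most improves the evaluation score."""
    best, best_score = model, evaluate(model)
    improved = True
    while improved:
        improved = False
        for transform in library:
            candidate = transform(best)
            score = evaluate(candidate)
            if score < best_score:
                best, best_score, improved = candidate, score, True
    return best, best_score
```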

    A Survey of Fault-Tolerance Techniques for Embedded Systems from the Perspective of Power, Energy, and Thermal Issues

    The relentless technology scaling has provided a significant increase in processor performance, but on the other hand, it has led to adverse impacts on system reliability. In particular, technology scaling increases the processor's susceptibility to radiation-induced transient faults. Moreover, technology scaling with the discontinuation of Dennard scaling increases power densities, and thereby temperatures, on the chip. High temperature, in turn, accelerates transistor aging mechanisms, which may ultimately lead to permanent faults on the chip. To ensure reliable system operation despite these potential reliability concerns, fault-tolerance techniques have emerged. Specifically, fault-tolerance techniques employ some form of redundancy to satisfy specific reliability requirements. However, the integration of fault-tolerance techniques into real-time embedded systems complicates preserving timing constraints. As a remedy, many task mapping/scheduling policies have been proposed to account for the integration of fault-tolerance techniques and enforce both timing and reliability guarantees for real-time embedded systems. More advanced techniques additionally aim at minimizing power and energy while satisfying timing and reliability constraints. Recently, some scheduling techniques have started to tackle a new challenge: the temperature increase induced by employing fault-tolerance techniques. These emerging techniques aim at satisfying temperature constraints besides timing and reliability constraints. This paper provides an in-depth survey of the emerging research efforts that exploit fault-tolerance techniques while considering timing, power/energy, and temperature from the real-time embedded systems' design perspective. In particular, the task mapping/scheduling policies for fault-tolerant real-time embedded systems are reviewed and classified according to their considered goals and constraints. Moreover, the employed fault-tolerance techniques, application models, and hardware models are considered as additional dimensions of the presented classification. Lastly, this survey gives deep insights into the main achievements and shortcomings of the existing approaches and highlights the most promising ones.
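
    A commonly used model in this literature, stated here only as background and not as the survey's own formulation, treats radiation-induced transient faults as a Poisson process with rate λ: a single execution of a task with execution time C then succeeds with probability e^{-λC}, and allowing k recovery re-executions raises the reliability as sketched below.

```latex
% Background reliability model (illustrative; the survey covers many variants).
% \lambda: transient-fault arrival rate, C: task execution time, k: recovery re-executions.
\begin{align}
  R_{\mathrm{single}} &= e^{-\lambda C}, \\
  R_{k\ \mathrm{re\text{-}executions}} &= 1 - \bigl(1 - e^{-\lambda C}\bigr)^{k+1}.
\end{align}
```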

    Reducing the WCET and analysis time of systems with simple lockable instruction caches

    One of the key challenges in real-time systems is the analysis of the memory hierarchy. Many Worst-Case Execution Time (WCET) analysis methods supporting an instruction cache are based on iterative or convergence algorithms, which are rather slow. Our goal in this paper is to reduce the WCET analysis time on systems with a simple lockable instruction cache, focusing on the Lock-MS method. First, we propose an algorithm to obtain a structure-based representation of the Control Flow Graph (CFG). It organizes the whole WCET problem as nested subproblems, which takes advantage of the common branch-and-bound algorithms of Integer Linear Programming (ILP) solvers. Second, we add support for multiple locking points per task, each one with specific cache contents, instead of a single locked content for the whole task execution. Locking points are set heuristically before outer loops. Such a simple heuristic adds no complexity and reduces the WCET by exploiting the temporal reuse found in loops. Since loops can be processed as isolated regions, the optimal contents to lock into the cache for each region can be obtained, and the WCET analysis time is further reduced. With these two improvements, our WCET analysis is around 10 times faster than other approaches. Also, our results show that the WCET is reduced, and the hit ratio achieved for the lockable instruction cache is similar to that of a real execution with an LRU instruction cache. Finally, we analyze the WCET sensitivity to compiler optimization, showing the right choices for each benchmark and pointing out that O0 is always the worst option.
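
    To give a feel for why per-region locking pays off, the sketch below uses a naive greedy rule: for one loop region, lock the instruction lines with the highest worst-case access counts until the cache is full, then charge one preload miss per locked line and a miss per access for everything else. This is only an illustration of the idea; the paper's Lock-MS method obtains the optimal contents via an ILP formulation, and all names and numbers here are hypothetical.

```python
# Naive greedy sketch of picking instruction lines to lock for one loop region
# (illustrative only; Lock-MS solves this optimally with an ILP formulation).

def select_locked_lines(line_access_counts, miss_penalty, cache_capacity):
    """line_access_counts: {cache_line: worst-case accesses inside the region}."""
    ranked = sorted(line_access_counts, key=line_access_counts.get, reverse=True)
    locked = set(ranked[:cache_capacity])

    fetch_cost = 0
    for line, count in line_access_counts.items():
        if line in locked:
            fetch_cost += miss_penalty           # one preload miss at the locking point
        else:
            fetch_cost += count * miss_penalty   # unlocked lines always miss
    return locked, fetch_cost

# Hypothetical region: line 0xA0 fetched 100 times in the worst case, 0xB0 twice, 0xC0 forty times.
locked, cost = select_locked_lines({0xA0: 100, 0xB0: 2, 0xC0: 40},
                                   miss_penalty=20, cache_capacity=2)
```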

    Design and Implementation of a Time Predictable Processor: Evaluation With a Space Case Study

    Embedded real-time systems like those found in automotive, rail and aerospace steadily require higher levels of guaranteed computing performance (and hence time predictability), motivated by the increasing number of functionalities provided by software. However, high-performance processor design is driven by the average-performance needs of the mainstream market. To make things worse, changing those designs is hard since the embedded real-time market is a comparatively small one. A path to address this mismatch is designing low-complexity hardware features that favor time predictability and can be enabled/disabled so as not to affect average performance when performance guarantees are not required. Along these lines, we present the lessons learned designing and implementing LEOPARD, a four-core processor facilitating measurement-based timing analysis (widely used in most domains). LEOPARD has been designed by adding low-overhead hardware mechanisms to a LEON3 processor baseline that allow capturing the impact of jittery resources (i.e. those with variable latency) in the measurements performed at analysis time. In particular, at the core level we handle the jitter of caches, TLBs and variable-latency floating point units; at the chip level, we deal with contention so that time-composable timing guarantees can be obtained. The result of our applied study with a Space application shows how per-resource jitter is controlled, facilitating the computation of high-quality WCET estimates.

    System-level scheduling of mixed-criticality traffics in avionics networks

    System-level mixed-criticality design aims at reducing production cost and enhancing resource efficiency. This paper studies the integration of mixed-criticality avionics traffic on the Avionics Full-Duplex Switched Ethernet (AFDX) network, which can transmit both critical and non-critical traffic. These two traffic classes have different QoS requirements, such as low latency for critical traffic and high bandwidth for non-critical traffic. We use system-level compositional scheduling to integrate mixed-criticality traffic into one network and enhance the scalability of the AFDX network. In the proposed compositional scheduling architecture, critical traffic is scheduled by a bandwidth allocation gap (BAG)-based scheduler, and non-critical traffic is scheduled in a round-robin manner. To estimate delay bounds that meet application requirements, the end-to-end delays of both critical and non-critical traffic are analyzed using network calculus. Finally, a TrueTime-based simulation of AFDX networks is conducted to verify the effectiveness of the proposed approach.
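
    As background for the kind of bound such an analysis produces (not the paper's specific curves or parameters), network calculus gives a closed-form per-hop delay bound when a flow constrained by a token-bucket arrival curve crosses a server offering a rate-latency service curve:

```latex
% Standard network-calculus delay bound (illustrative background, not the paper's derivation).
% Arrival curve \alpha(t) = b + r t (burst b, rate r); service curve \beta(t) = R\,(t - T)^{+}
% (rate R, latency T). Provided r \le R, the delay is bounded by the horizontal deviation:
\[
  D \;\le\; h(\alpha,\beta) \;=\; T + \frac{b}{R}.
\]
```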

    In the Direction of Service Guarantees for Virtualized Network Functions

    The trend of consolidating network functions from specialized hardware to software running on virtualization servers brings significant advantages for reducing costs and simplifying service deployment. However, virtualization techniques have significant limitations when it comes to networking, as there is no support for guaranteeing that network functions meet their service requirements. In this paper, we present a design for providing service guarantees to virtualized network functions based on rate control. The design is a combination of rate regulation through token bucket filters and the regular scheduling mechanisms in operating systems. It has the attractive property that traffic profiles are maintained throughout a series of network functions, which makes it well suited for service function chaining. We discuss implementation alternatives for the design and demonstrate how it can be implemented on two virtualization platforms: LXC containers and the KVM hypervisor. To evaluate the design, we conduct experiments where we measure throughput and latency using IP forwarders (routers) as examples of virtual network functions. Two significant factors for performance are investigated: the design of token buckets and the packet clustering effect that comes from scheduling. Finally, we demonstrate how performance guarantees are achieved for rate-controlled virtual routers under different scenarios.
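
    Since the design centers on rate regulation through token bucket filters, here is a minimal, self-contained sketch of a token-bucket admission check; the rates, burst sizes and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal token-bucket sketch (illustrative only; not the authors' implementation).
# Tokens accrue at `rate` bytes per second up to `burst`; a packet is forwarded now
# only if enough tokens are available, otherwise it is deferred, which is how the
# traffic profile of a virtual network function is shaped.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_size: int) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

# Usage: shape a virtual router to ~10 MB/s with a 64 KB burst allowance.
bucket = TokenBucket(rate_bytes_per_s=10e6, burst_bytes=64 * 1024)
forward_now = bucket.allow(1500)   # typical Ethernet MTU-sized packet
```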

    Optimal Selection of Preemption Points to Minimize Preemption Overhead

    A central issue in verifying the schedulability of hard real-time systems is the correct evaluation of task execution times. These values are significantly influenced by the preemption overhead, which mainly comprises the cache-related delays and the context switch times introduced by each preemption. Since such overhead significantly depends on the particular point in the code where preemption takes place, this paper proposes a method for placing suitable preemption points in each task in order to maximize the chances of finding a schedulable solution. In a previous work, we presented a method for the optimal selection of preemption points under the restrictive assumption of a fixed preemption cost, identical for each preemption point. In this paper, we remove that assumption, exploring a more realistic and complex scenario where the preemption cost varies throughout the task code. Instead of modeling the problem with an integer programming formulation, with exponential worst-case complexity, we derive an optimal algorithm that has linear time and space complexity. This somewhat surprising result allows selecting the best preemption points even in complex scenarios with a large number of potential preemption locations. Experimental results are also presented to show the effectiveness of the proposed approach in increasing system schedulability.
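
    To make the problem concrete, the sketch below selects preemption points between basic blocks so that no non-preemptive region, including the overhead of the preemption that opens it, exceeds a tolerable blocking bound Q, while minimising total preemption overhead. It is a simple quadratic-time dynamic program meant only to convey the structure of the problem under assumed inputs; the paper's contribution is an optimal algorithm with linear time and space complexity.

```python
# Illustrative quadratic-time sketch of variable-cost preemption point placement
# (hypothetical inputs; not the paper's linear-time algorithm).

INF = float("inf")

def select_preemption_points(block_wcet, point_cost, Q):
    """block_wcet[k]: WCET of basic block k (k = 1..n; index 0 unused).
    point_cost[i]: preemption overhead at boundary i (point_cost[0] = point_cost[n] = 0).
    Returns (minimal total overhead, enabled boundaries), or None if infeasible."""
    n = len(block_wcet) - 1
    dp, choice = [INF] * (n + 1), [None] * (n + 1)
    dp[0] = 0
    for i in range(1, n + 1):
        region = 0
        for j in range(i - 1, -1, -1):
            region += block_wcet[j + 1]
            if region > Q:
                break                          # regions only grow as j moves left
            # The region from boundary j to i also absorbs the overhead of a preemption at j.
            if point_cost[j] + region > Q:
                continue
            cand = dp[j] + point_cost[i]
            if cand < dp[i]:
                dp[i], choice[i] = cand, j
    if dp[n] == INF:
        return None                            # some region cannot be kept within Q
    points, i = [], n
    while i is not None:
        points.append(i)
        i = choice[i]
    return dp[n], sorted(points)

# Hypothetical task: 3 basic blocks, variable point costs, blocking tolerance Q = 5.
result = select_preemption_points([0, 3, 4, 2], [0, 1, 2, 0], Q=5)
```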

    Design and analysis of target-sensitive real-time systems

    A significant number of real-time control applications include computational activities whose results have to be delivered at precise instants, rather than within a deadline. The performance of such systems significantly degrades if outputs are generated before or after the desired target time. This work presents a general methodology that can be used to design and analyze target-sensitive applications in which the timing parameters of the computational activities are tightly coupled with the physical characteristics of the system to be controlled. For the sake of clarity, the proposed methodology is illustrated through a sample case study used to show how to derive and verify real-time constraints from the mission requirements. Software implementation issues involved in mapping the computational activities into tasks running on a real-time kernel are also discussed, identifying the kernel mechanisms needed to enforce timing constraints and to analyze the feasibility of the application. Finally, a set of experiments is presented to validate the proposed methodology.