
    Improved CRPD analysis and a secure scheduler against information leakage in real-time systems

    Real-time systems are widely applied in time-critical fields. To guarantee that all tasks complete on time, predictability becomes a necessary factor when designing a real-time system. Due to growing performance requirements, cache memory has been introduced into real-time embedded systems. However, cache behavior is difficult to predict, since data may be served from either the cache or main memory. To account for this unexpected overhead, execution times are often inflated by a certain (large) factor, which wastes computation resources. Hence, in this thesis, we first integrate the cache-related preemption delay into the existing global earliest deadline first (G-EDF) schedulability analysis for direct-mapped caches. Moreover, several analyses for tighter G-EDF schedulability tests are conducted based on a refined estimation of the maximal number of preemptions. An experimental study is conducted to demonstrate the performance of the proposed methods. Furthermore, under classic scheduling mechanisms, the execution patterns of tasks on such a system can be easily derived. Therefore, in the second part of the thesis, a novel scheduler, the roulette wheel scheduler (RWS), is proposed to randomize the task execution pattern. Unlike traditional schedulers, RWS assigns probabilities to each task at predefined scheduling points, and the choice for execution is randomized, such that the execution pattern is no longer fixed. We apply the concept of schedule entropy to measure the amount of uncertainty introduced by any randomized scheduler, which reflects how unlikely such attacks are to succeed. Compared to existing randomized schedulers that give all eligible tasks equal likelihood at a given time point, the proposed method adjusts these probabilities so that the entropy can be greatly increased --Abstract, page iii
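
    As a rough sketch of the roulette-wheel idea described above (the weights, scheduling points and entropy measure here are illustrative placeholders of my own, not the thesis's actual definitions):

        import math
        import random
        from collections import Counter

        def roulette_pick(eligible, weight):
            """Pick one eligible task with probability proportional to its weight."""
            total = sum(weight[t] for t in eligible)
            r = random.uniform(0.0, total)
            acc = 0.0
            for t in eligible:
                acc += weight[t]
                if r <= acc:
                    return t
            return eligible[-1]  # guard against floating-point round-off

        def schedule_entropy(observed_schedules):
            """Shannon entropy (bits) of the empirical distribution of observed schedules."""
            counts = Counter(observed_schedules)
            n = sum(counts.values())
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # Entropy of a randomized 4-slot execution pattern over many simulated runs.
        runs = [tuple(roulette_pick(['T1', 'T2'], {'T1': 0.7, 'T2': 0.3}) for _ in range(4))
                for _ in range(1000)]
        print(schedule_entropy(runs))

    A scheduler that always picks the highest-priority eligible task would yield entropy zero here; skewing or equalizing the weights changes how much uncertainty an observer of the schedule faces.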

    Exact Speedup Factors and Sub-Optimality for Non-Preemptive Scheduling

    Fixed priority scheduling is used in many real-time systems; however, both preemptive and non-preemptive variants (FP-P and FP-NP) are known to be sub-optimal when compared to an optimal uniprocessor scheduling algorithm such as preemptive earliest deadline first (EDF-P). In this paper, we investigate the sub-optimality of fixed priority non-preemptive scheduling. Specifically, we derive the exact processor speed-up factor required to guarantee the feasibility under FP-NP (i.e. schedulability assuming an optimal priority assignment) of any task set that is feasible under EDF-P. As a consequence of this work, we also derive a lower bound on the sub-optimality of non-preemptive EDF (EDF-NP). As this lower bound matches a recently published upper bound for the same quantity, it establishes the exact sub-optimality of EDF-NP. It is known that neither preemptive nor non-preemptive fixed priority scheduling dominates the other; in other words, there are task sets that are feasible on a processor of unit speed under FP-P that are not feasible under FP-NP, and vice-versa. Hence, comparing these two algorithms, there are non-trivial speedup factors in both directions. We derive the exact speed-up factor required to guarantee the FP-NP feasibility of any FP-P feasible task set. Further, we derive the exact speed-up factor required to guarantee FP-P feasibility of any constrained-deadline FP-NP feasible task set.
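
    For reference, the quantity being derived can be written as follows (a standard formalisation in notation of my own, not quoted from the paper):

        S_A \;=\; \inf\bigl\{\, s \ge 1 \;:\; \text{every task set feasible under EDF-P on a unit-speed processor is schedulable by } A \text{ on a processor of speed } s \,\bigr\},

    so the paper gives the exact value of S_{FP-NP}, plus the analogous factors between FP-P and FP-NP in both directions.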

    Memory-processor co-scheduling in fixed priority systems

    A major obstacle towards the adoption of multi-core platforms for real-time systems is given by the difficulties in characterizing the interference due to memory contention. The simple fact that multiple cores may simultaneously access shared memory and communication resources introduces significant pessimism in the timing and schedulability analysis. To counter this problem, predictable execution models have been proposed that split task executions into two consecutive phases: a memory phase in which the required instructions and data are pre-fetched to local memory (M-phase), and an execution phase in which the task is executed with no memory contention (C-phase). Decoupling memory and execution phases not only simplifies the timing analysis, but also allows a more efficient (and predictable) pipelining of memory and execution phases through proper co-scheduling algorithms. In this paper, we take a further step towards the design of smart co-scheduling algorithms for sporadic real-time tasks complying with the M/C (memory-computation) model. We provide a theoretical framework that aims at tightly characterizing the schedulability improvement obtainable with the adopted M/C task model on single-core systems. We identify a tight critical instant for M/C tasks scheduled with fixed priority, providing an exact response-time analysis with pseudo-polynomial complexity. We show in our experiments that a significant schedulability improvement may be obtained with respect to classic execution models, laying an important building block towards the design of more efficient partitioned multi-core systems.
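
    As a point of reference for the style of analysis involved, the classic fixed-priority response-time recurrence for single-phase tasks is (the textbook recurrence, not the paper's M/C-specific analysis):

        R_i^{(k+1)} \;=\; C_i \;+\; \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j, \qquad R_i^{(0)} = C_i,

    iterated until it converges or exceeds the deadline; the paper's exact analysis addresses tasks with separate M- and C-phases instead of a single execution time C_i.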

    Schedulability, Response Time Analysis and New Models of P-FRP Systems

    Functional Reactive Programming (FRP) is a declarative approach for modeling and building reactive systems. FRP has been shown to be an expressive formalism for building applications in computer graphics, computer vision, robotics, etc. Priority-based FRP (P-FRP) is a formalism that allows preemption of executing programs and guarantees real-time response. Since functional programs cannot maintain state and mutable data, changes made by programs that are preempted have to be rolled back. Hence, in P-FRP, a higher priority task can preempt the execution of a lower priority task, but the preempted lower priority task has to restart after the higher priority task has completed execution. This execution paradigm is called Abort-and-Restart (AR). Current real-time research focuses on preemptive or non-preemptive models of execution, and several state-of-the-art methods have been developed to analyze the real-time guarantees of these models. Unfortunately, due to its transactional nature where preempted tasks are aborted and have to restart, the execution semantics of P-FRP does not fit into the standard definitions of preemptive or non-preemptive execution, and research on the standard preemptive and non-preemptive models may not be applicable to the P-FRP AR model. Among the many research areas that P-FRP demands, we focus on task scheduling, which includes task and system modeling, priority assignment, schedulability analysis, response time analysis, improved P-FRP AR models, algorithms and corresponding software. In this work, we review existing results on P-FRP task scheduling and then present our research contributions: (1) a tighter feasibility test interval regarding the task release offsets, as well as a linked-list based algorithm and implementation for scheduling simulation; (2) P-FRP with software transactional memory-lazy conflict detection (STM-LCD); (3) a non-work-conserving scheduling model called Deferred Start; (4) a multi-mode P-FRP task model; (5) SimSo-PFRP, the P-FRP extension of SimSo - a SimPy-based, highly extensible and user-friendly task generator and task scheduling simulator.
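
    To make the AR semantics concrete, here is a toy discrete-time simulation (my own illustration, not code from the dissertation or from SimSo-PFRP): a preempted job loses all of its progress and must re-execute from scratch.

        def simulate_ar(jobs, horizon):
            """Simulate Abort-and-Restart scheduling in unit time slots.

            jobs: list of dicts with 'release', 'wcet' and 'priority'
                  (a smaller priority value means higher priority).
            Returns a list of completion times (None if a job does not finish by 'horizon').
            """
            progress = [0] * len(jobs)
            finish = [None] * len(jobs)
            running = None
            for t in range(horizon):
                ready = [i for i, j in enumerate(jobs)
                         if j['release'] <= t and finish[i] is None]
                if not ready:
                    running = None
                    continue
                current = min(ready, key=lambda i: jobs[i]['priority'])
                if running is not None and running != current and finish[running] is None:
                    progress[running] = 0  # AR: the preempted job is aborted and restarts later
                progress[current] += 1
                if progress[current] == jobs[current]['wcet']:
                    finish[current] = t + 1
                running = current
            return finish

        # A high-priority job released at t=2 forces the low-priority job to redo its first 2 units.
        print(simulate_ar([{'release': 0, 'wcet': 4, 'priority': 2},
                           {'release': 2, 'wcet': 1, 'priority': 1}], horizon=10))
        # -> [7, 3]

    Under ordinary preemptive scheduling the low-priority job would finish at time 5; the abort makes the cost of each preemption depend on how much work is thrown away, which is what breaks the standard analyses.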

    Limited Preemptive Scheduling for Real-Time Systems: a Survey

    The question whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed priority systems, each approach has advantages and disadvantages, and neither dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprised of one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints. Fulfilling these timing constraints of a component requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool to make the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates) and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admitting a component on a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the malignant effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions to these issues cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate the component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform and only experiences that the processor speed is different from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component. We compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside, regardless of their scheduling or the protocol being used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF. Components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for confinement of temporal faults. We do this by means of experiments within an HSF-enabled real-time operating system.
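
    For intuition about the "virtual platform" view, HSF analyses commonly characterise a periodic processor reservation Γ = (Π, Θ) (budget Θ supplied every period Π) by a supply bound function; a standard linear lower bound from the literature (given here as background, not as the exact supply model of this thesis) is

        \mathrm{lsbf}_{\Gamma}(t) \;=\; \max\!\Bigl(0,\; \tfrac{\Theta}{\Pi}\,\bigl(t - 2(\Pi - \Theta)\bigr)\Bigr),

    which captures the first two disturbing effects mentioned above: the supply arrives discontinuously (hence the initial offset of 2(Π − Θ)) and at a long-run rate Θ/Π below the actual processor speed.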

    Integrating Pragmatic Constraints and Behaviors into Real-Time Scheduling Theory

    Scheduling theory has been studied and developed extensively in prior research. In some existing scheduling theory results, the focus is primarily on demonstrating interesting theoretical properties, so these results are not always cognizant of pragmatic constraints. We seek to determine how existing scheduling theory can be improved with respect to pragmatic constraints and behaviors. The goal of this research is to study and design algorithms for scheduling real-time workloads under constraints and behaviors found in real-time systems. Based on our study, we derive a scheduling algorithm for partitioning a collection of real-time tasks in a manner that is cognizant of multiple resource constraints. We apply this scheduling algorithm to partitioning mixed-criticality tasks. In real-time systems the scheduling algorithm must schedule the workload such that all timing constraints are met; we verify this using schedulability tests. We describe schedulability tests for each of the scheduling algorithms that we derive. We also propose a new schedulability test for an existing scheduling algorithm that is commonly used in real-time systems research for scheduling tasks with limited preemptivity. Finally, we propose a scheduling algorithm and schedulability test for scheduling real-time workloads on processors that allow dynamic overclocking.
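
    As a generic illustration of partitioning under multiple resource constraints (a plain first-fit heuristic of my own, not the algorithm or admission test developed in this dissertation):

        def first_fit_partition(tasks, processors, demands, capacities):
            """Generic first-fit partitioning under multiple resource dimensions.

            demands[task] and capacities[proc] are dicts over resources, e.g.
            {'utilization': 0.3, 'memory': 128}.  A task fits on a processor if,
            for every resource, current load + demand <= capacity.
            Returns a mapping task -> processor, or None if some task cannot be placed.
            """
            load = {p: {r: 0.0 for r in capacities[p]} for p in processors}
            assignment = {}
            for task in tasks:
                for p in processors:
                    if all(load[p][r] + demands[task][r] <= capacities[p][r]
                           for r in capacities[p]):
                        for r in capacities[p]:
                            load[p][r] += demands[task][r]
                        assignment[task] = p
                        break
                else:
                    return None  # task could not be placed on any processor
            return assignment

    A real partitioning algorithm would pair such a placement loop with per-processor schedulability tests (and criticality-aware demands) rather than simple capacity sums.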

    On the effectiveness of cache partitioning in hard real-time systems

    In hard real-time systems, cache partitioning is often suggested as a means of increasing the predictability of caches in pre-emptively scheduled systems: when a task is assigned its own cache partition, inter-task cache eviction is avoided, and timing verification is reduced to the standard worst-case execution time analysis used in non-pre-emptive systems. The downside of cache partitioning is the potential increase in execution times. In this paper, we evaluate cache partitioning for hard real-time systems in terms of overall schedulability. To this end, we examine the sensitivity of (i) task execution times and (ii) pre-emption costs to the size of the cache partition allocated and present a cache partitioning algorithm that is optimal with respect to taskset schedulability. We also devise an alternative algorithm which primarily optimises schedulability but also minimises processor utilization. We evaluate the performance of cache partitioning compared to state-of-the-art pre-emption cost analysis based on benchmark code and on a large number of synthetic tasksets with both fixed priority and EDF scheduling. This allows us to derive general conclusions about the usability of cache partitioning and identify taskset and system parameters that influence the relative effectiveness of cache partitioning. We also examine the improvement in processor utilization obtained using the alternative cache partitioning algorithm, and the trade-off in terms of increased analysis time.
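
    To illustrate what choosing partition sizes with respect to schedulability involves, here is a deliberately naive brute-force sketch (my own illustration; the paper's algorithm is far more efficient, and the helper functions wcet_of and schedulable are assumed rather than taken from the paper):

        from itertools import product

        def find_schedulable_allocation(tasks, total_ways, wcet_of, schedulable):
            """Brute-force search over cache-way allocations (illustration only).

            wcet_of(task, ways) -> WCET of 'task' when it owns 'ways' cache ways.
            schedulable(wcets)  -> True if the task set with these WCETs meets all deadlines.
            Returns the first feasible allocation (one way count per task), or None.
            """
            n = len(tasks)
            for alloc in product(range(total_ways + 1), repeat=n):
                if sum(alloc) > total_ways:
                    continue  # allocations must fit within the shared cache
                wcets = [wcet_of(task, ways) for task, ways in zip(tasks, alloc)]
                if schedulable(wcets):
                    return alloc
            return None

    The exponential blow-up of this search space is exactly why a dedicated, schedulability-optimal partitioning algorithm is needed.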