
    Implementation of Timing-Event Affinities in Ada/Linux

    Ada 2012 introduced mechanisms for exploiting multiprocessor platforms at the application level, including task affinity control and the definition of dispatching domains. However, there are other executable entities defined in the language for which there is no such support for affinity control: event handlers, by which we mean both timing-event and interrupt handlers. This paper discusses the consequences of this lack of functionality and explores the implementation issues related to providing it. We propose a working implementation of affinities for timing-event handlers on top of Linux. (Sáez Barona, S.; Real Sáez, J. V.; Crespo Lorente, A. (2015). Implementation of Timing-Event Affinities in Ada/Linux. Ada Letters, 35(1):80-92. doi:10.1145/2870544.2870546)
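    The abstract above describes the missing facility rather than its code. As a purely illustrative sketch of the underlying Linux mechanism the paper targets (pinning a timing-event handler to a CPU), the following Python fragment uses os.sched_setaffinity to bind a periodic handler thread to one core. The handler body, the 0.5 s period and the choice of CPU 0 are hypothetical; the authors' actual implementation lives inside the Ada runtime and is not reproduced here.

import os
import threading
import time

def timing_event_handler():
    # Placeholder for the work a timing-event handler would perform.
    print("timing event fired; current affinity:", os.sched_getaffinity(0))

def periodic_handler_thread(period_s, cpu):
    # On Linux, pid 0 applies the affinity mask to the calling thread,
    # which mimics giving the timing-event handler its own CPU affinity.
    os.sched_setaffinity(0, {cpu})
    next_release = time.monotonic()
    for _ in range(3):  # a few releases are enough for the demonstration
        next_release += period_s
        time.sleep(max(0.0, next_release - time.monotonic()))
        timing_event_handler()

handler = threading.Thread(target=periodic_handler_thread, args=(0.5, 0))
handler.start()
handler.join()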

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprising one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After being admitted to a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate a component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of whether the supply of processor time is continuous or discontinuous. Using our method, the component executes on a virtual platform and only experiences a processor speed that differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting the deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of that component, and we compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling or of the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol used in an HSF; components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms that confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for the confinement of temporal faults, by means of experiments within an HSF-enabled real-time operating system.
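    The virtual-platform view sketched in this abstract is commonly formalised with the periodic resource model, in which a component is guaranteed Θ units of processor time every Π time units. The following Python sketch is a generic illustration of that style of analysis, not the thesis' own method: sbf_periodic is the classic supply bound function, and the schedulability check compares it against the EDF demand of implicit-deadline periodic tasks (WCET, period) up to a caller-chosen horizon, which is an assumption of this example.

def sbf_periodic(theta, pi, t):
    # Worst-case processor supply a periodic reservation (theta, pi)
    # delivers in any window of length t (periodic resource model).
    if t < pi - theta:
        return 0.0
    k = (t - (pi - theta)) // pi          # complete replenishment periods
    rem = t - (pi - theta) - k * pi       # leftover part of the window
    return k * theta + max(0.0, rem - (pi - theta))

def edf_component_schedulable(tasks, theta, pi, horizon):
    # Demand-bound test: for every window length, the EDF demand of the
    # component's implicit-deadline tasks must not exceed the supply bound.
    for t in range(1, horizon + 1):
        demand = sum((t // period) * wcet for (wcet, period) in tasks)
        if demand > sbf_periodic(theta, pi, t):
            return False
    return True

# Example with assumed numbers: two tasks (WCET, period) on a
# reservation of 3 units every 5; prints True for this configuration.
print(edf_component_schedulable([(1, 5), (2, 10)], theta=3, pi=5, horizon=40))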

    On the Convergence of the Holistic Analysis for EDF Distributed Systems

    Dynamic scheduling techniques, and EDF (Earliest Deadline First) in particular, have demonstrated their ability to increase the schedulability of real-time systems compared to fixed-priority scheduling. In distributed systems, the scheduling policies of the processing nodes tend to be the same as in stand-alone systems and, although few EDF networks exist today, it is foreseen that dynamic scheduling will gradually make its way into real-time networks. There are some response-time analysis techniques for EDF-scheduled distributed systems, mostly derived from the holistic analysis developed by Spuri. The convergence of the holistic analysis in the context of EDF distributed systems with shared resources had not been studied until now: there is a circular dependency between the tasks' release jitter values, their response times, and the preemption-level ceilings of shared resources. In this paper we present an extension of Spuri's algorithm and we demonstrate that its iterative formulas are non-decreasing, even in the presence of shared resources. This result enables us to assert that the new algorithm converges towards a solution for the response times of the tasks and messages in a distributed system.
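    The convergence argument summarised above follows the usual pattern for holistic analysis. Schematically (this is a generic rendering, not the paper's exact equations), the release jitters and response times form a vector that is updated by a monotone operator:

\[
x^{(k+1)} = F\bigl(x^{(k)}\bigr), \qquad x = (J_1,\dots,J_n,\,R_1,\dots,R_n),
\]
\[
x \le y \;\Rightarrow\; F(x) \le F(y) \quad\text{and}\quad x^{(0)} \le x^{(1)} \;\Longrightarrow\; x^{(k)} \le x^{(k+1)} \;\;\forall k,
\]

    so the iteration is non-decreasing and either converges to a fixed point (the release jitters and response times reported by the analysis) or exceeds a deadline bound, in which case the system is declared unschedulable. The paper's contribution is showing that the formulas remain non-decreasing when the preemption-level ceilings of shared resources enter the iteration.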

    Mixed Criticality Systems with Weakly-Hard Constraints

    Mixed criticality systems contain components of at least two criticality levels which execute on a common hardware platform in order to utilise resources more efficiently. Because of the multiple worst-case execution time estimates this entails, current adaptive mixed criticality scheduling policies assume the notion of a low criticality mode, whereby a task set executes under a set of more realistic temporal assumptions, and a high criticality mode, in which all low criticality tasks in the task set are descheduled to ensure that high criticality tasks can meet the more conservative timing constraints derived from certification-approved methods. This issue is known as the service abrupt problem and forms the topic of this work. The principles of real-time schedulability analysis are first reviewed, providing the relevant background and theory on which mixed criticality systems analysis is based. The current state of the art in mixed criticality scheduling policies for uni-processor systems is then discussed, along with the major challenges facing the adoption of such approaches in practice. To address the service abrupt issue, this work presents a new policy, Adaptive Mixed Criticality - Weakly Hard, which provides a guaranteed minimum quality of service for low criticality tasks in the event of a criticality mode change. Two offline response-time based schedulability tests are derived for this model and a dominance relationship is proved. Empirical evaluations are then used to assess the relative performance against previously published policies and their schedulability tests, where the new policy is shown to offer a scalable performance trade-off between existing fixed priority preemptive and adaptive mixed criticality policies. The work concludes with possible directions for future research.
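    For readers unfamiliar with the baseline the abstract builds on, the low-criticality-mode response-time test of classic Adaptive Mixed Criticality (AMC) analysis can be written as follows; this is standard background only, and the weakly-hard tests derived in the work itself are not reproduced here:

\[
R_i^{\mathrm{LO}} \;=\; C_i(\mathrm{LO}) \;+\; \sum_{j \in hp(i)} \left\lceil \frac{R_i^{\mathrm{LO}}}{T_j} \right\rceil C_j(\mathrm{LO}),
\]

    solved by the usual fixed-point iteration. The corresponding high-criticality-mode test charges C_j(HI) for high-criticality interferers and, in the AMC-rtb variant, bounds the interference from low criticality tasks by the releases that can occur before the mode change.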

    Simulation of multi-core scheduling in real-time embedded systems

    In real-time systems the correctness of a system depends not only on the logical correctness of the running program but also on the time at which the logically correct output is produced. Therefore, in such a system it is necessary to provide the right computational result within a strict time limit called the deadline of a task. In hard real-time systems the deadline of a task must not be missed, whereas in soft real-time systems it can be missed occasionally. In recent years a shift from single-core to multi-core architectures has been observed for real-time systems. The main point of this thesis is to study a few promising multi-core scheduling algorithms, from both the partitioned and the global approaches to multi-core scheduling, and to implement some of them in the existing simulation software. To represent the partitioned approach, Partitioned EDF has been implemented with the capability to specify a resource access protocol for each core. The partitioned approach requires heuristics for task partitioning, a problem known to be NP-hard in the strong sense. For this reason, the implementation of Partitioned EDF requires manual task partitioning of the system in order to be able to utilize the maximum processing power. Proportionate Fair, abbreviated as Pfair, is the only known optimal way to schedule a set of periodic tasks on multi-core systems that falls within the global approach to multi-core scheduling. Therefore, to represent the global approach, several variants of the Pfair scheduling algorithm have been selected for implementation in the existing system. To be truly useful in practice, a real-time multi-core scheduling algorithm should support access to shared resources via some resource access protocol. For this reason, the Flexible Multiprocessor Locking Protocol, abbreviated as FMLP, has been studied and implemented to simulate shared resource access on multi-core systems. This resource access protocol can be used by scheduling algorithms representing both the partitioned and global approaches, but it only supports variants of those algorithms that allow non-preemptive execution. A variant of Global EDF termed Global Suspendable Non-preemptive EDF was therefore implemented prior to implementing FMLP. The existing simulator provided a set of single-core and some basic multi-core scheduling algorithms for scheduling real-time task sets, but no schedulability analysis was implemented in the previous work. As part of this thesis, schedulability analysis for the single-core scheduling algorithms has been implemented, and a schedulability analysis for the partitioned approach to multi-core scheduling has been provided for systems where a single-core scheduling algorithm runs on each partition. The updated simulation software also supports self-suspension of tasks for a specified duration.
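    As a small companion to the partitioned approach described above, the sketch below shows a first-fit decreasing-utilisation partitioning heuristic for implicit-deadline periodic tasks, using the EDF utilisation bound of 1.0 per core. This is a generic, hypothetical example in Python; the thesis' simulator relies on manual partitioning rather than this heuristic.

def first_fit_partition(tasks, num_cores):
    # Assign (wcet, period) tasks to cores with first-fit decreasing
    # utilisation; each core stays EDF-schedulable while its utilisation <= 1.
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    for wcet, period in sorted(tasks, key=lambda ct: ct[0] / ct[1], reverse=True):
        u = wcet / period
        for i in range(num_cores):
            if load[i] + u <= 1.0:
                cores[i].append((wcet, period))
                load[i] += u
                break
        else:
            return None  # the task fits on no core: partitioning failed
    return cores

# Example with assumed numbers: three tasks spread over two cores.
print(first_fit_partition([(2, 4), (2, 5), (3, 10)], num_cores=2))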

    Extending rely-guarantee thinking to handle real-time scheduling

    The reference point for developing any artefact is its specification; to develop software formally, a formal specification is required. For sequential programs, pre and post conditions (together with abstract objects) suffice; rely and guarantee conditions extend the scope of formal development approaches to tackle concurrency. In addition, real-time systems need ways of both requiring progress and relating that progress to some notion of time. This paper extends rely-guarantee ideas to cope with specifications of, and assumptions about, real-time schedulers. Furthermore, it shows how the approach helps identify and specify fault-tolerance aspects of such schedulers by systematically challenging the assumptions.
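    The rely-guarantee notation the abstract refers to is commonly presented as a five-part specification; a generic statement of its reading (standard background, not the paper's scheduler-specific rules) is:

\[
\{P,\,R\}\;\;S\;\;\{G,\,Q\}
\]

    meaning: if S starts in a state satisfying the precondition P and every step taken by the environment satisfies the rely condition R, then every step taken by S satisfies the guarantee condition G and, if S terminates, the final state satisfies the postcondition Q. The paper extends this reading with obligations about progress and its relation to time.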

    MCFlow: Middleware for Mixed-Criticality Distributed Real-Time Systems

    Traditional fixed-priority scheduling analysis for periodic/sporadic task sets is based on the assumption that all tasks are equally critical to the correct operation of the system. Therefore, every task has to be schedulable under the scheduling policy, and estimates of tasks' worst case execution times must be conservative in case a task runs longer than is usual. To address the significant under-utilization of a system's resources under normal operating conditions that can arise from these assumptions, several mixed-criticality scheduling approaches have been proposed. However, to date there has been no quantitative comparison of system schedulability or run-time overhead for the different approaches. In this dissertation, we present what is to our knowledge the first side-by-side implementation and evaluation of those approaches, for periodic and sporadic mixed-criticality tasks on uniprocessor or distributed systems, under a mixed-criticality scheduling model that is common to all these approaches. To make a fair evaluation of mixed-criticality scheduling, we also address some previously open issues and propose modifications to improve the schedulability and correctness of particular approaches. To facilitate the development and evaluation of mixed-criticality applications, we have designed and developed a distributed real-time middleware, called MCFlow, for mixed-criticality end-to-end tasks running on multi-core platforms. The research presented in this dissertation provides the following contributions to the state of the art in real-time middleware: (1) an efficient component model through which dependent subtask graphs can be configured flexibly for execution within a single core, across cores of a common host, or spanning multiple hosts; (2) support for optimizations to inter-component communication to reduce data copying without sacrificing the ability to execute subtasks in parallel; (3) a strict separation of timing and functional concerns so that they can be configured independently; (4) an event dispatching architecture that uses lock-free algorithms where possible to reduce memory contention, CPU context switching, and priority inversion; and (5) empirical evaluations of MCFlow itself and of different mixed-criticality scheduling approaches, both within a single host and end-to-end across multiple hosts. The results of our evaluation show that in terms of basic distributed real-time behavior MCFlow performs comparably to the state-of-the-art TAO real-time object request broker when only one core is used and outperforms TAO when multiple cores are involved. We also identify and categorize different use cases under which different mixed-criticality scheduling approaches are preferable.

    Embedded System Design

    A unique feature of this open access textbook is that it provides a comprehensive introduction to the fundamental knowledge in embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution from single-core processors to multi-core processors, and the increased importance of energy efficiency and thermal issues.

    Exact Speedup Factors and Sub-Optimality for Non-Preemptive Scheduling

    Fixed priority scheduling is used in many real-time systems; however, both the preemptive and non-preemptive variants (FP-P and FP-NP) are known to be sub-optimal when compared to an optimal uniprocessor scheduling algorithm such as preemptive earliest deadline first (EDF-P). In this paper, we investigate the sub-optimality of fixed priority non-preemptive scheduling. Specifically, we derive the exact processor speed-up factor required to guarantee the feasibility under FP-NP (i.e. schedulability assuming an optimal priority assignment) of any task set that is feasible under EDF-P. As a consequence of this work, we also derive a lower bound on the sub-optimality of non-preemptive EDF (EDF-NP). As this lower bound matches a recently published upper bound for the same quantity, it establishes the exact sub-optimality of EDF-NP. It is known that neither preemptive nor non-preemptive fixed priority scheduling dominates the other; in other words, there are task sets that are feasible on a processor of unit speed under FP-P that are not feasible under FP-NP, and vice versa. Hence, comparing these two algorithms, there are non-trivial speed-up factors in both directions. We derive the exact speed-up factor required to guarantee the FP-NP feasibility of any FP-P feasible task set. Further, we derive the exact speed-up factor required to guarantee the FP-P feasibility of any constrained-deadline FP-NP feasible task set.
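    For reference, the speed-up factor used throughout this abstract can be stated generically as follows (a textbook-style definition; the exact values derived in the paper are not reproduced here):

\[
S_{A \leftarrow B} \;=\; \inf\bigl\{\, s \ge 1 \;:\; \text{every task set feasible under } B \text{ on a speed-1 processor is schedulable under } A \text{ on a speed-}s\text{ processor} \,\bigr\},
\]

    so the sub-optimality of FP-NP is its speed-up factor with EDF-P as the reference algorithm, and the two cross-comparisons in the abstract correspond to the factors from FP-P to FP-NP and from FP-NP to FP-P.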