8 research outputs found

    A new approach for global synchronization in hierarchical scheduled real-time systems

    Get PDF
    We present our ongoing work to improve an existing synchronization protocol, SIRAP, for hierarchically scheduled real-time systems. A less pessimistic schedulability analysis is presented, which can make the SIRAP protocol more efficient in terms of calculated CPU resource needs. In addition, and for the same reason, an extended version of SIRAP is proposed that decreases the interference from lower priority tasks. The new version of SIRAP has the potential to make the protocol more resource efficient than the original one.
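
    SIRAP avoids budget depletion inside a critical section by self-blocking: a task enters a global critical section only if the remaining server budget covers it; otherwise it idles until the next replenishment. The Python sketch below illustrates that rule and a rough worst-case cost of one self-blocking occurrence under a periodic-replenishment assumption; the names and the bound are assumptions for illustration, not the paper's improved analysis.

    # Illustrative sketch of the SIRAP self-blocking rule; parameter names and
    # the worst-case bound are assumptions, not the paper's notation or analysis.

    def may_enter_critical_section(remaining_budget, csec_len):
        # Enter only if the rest of the current budget covers the whole
        # critical section; otherwise the task self-blocks.
        return remaining_budget >= csec_len

    def self_blocking_cost(remaining_budget, csec_len, period, budget):
        # Rough worst-case delay of one self-blocking occurrence: the leftover
        # budget is idled away and the task then waits for the next replenishment.
        if may_enter_critical_section(remaining_budget, csec_len):
            return 0
        return remaining_budget + (period - budget)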

    Real-time hierarchical systems with arbitrary scheduling at global level

    Full text link
    Partitioned architectures isolate software components into independent partitions whose execution does not interfere with other partitions, preserving temporal and spatial isolation. Hierarchical scheduling can effectively be used to schedule these systems. Schedulability analysis of hierarchical real-time systems is based on prior knowledge of the local and the global scheduling algorithms. In a partitioned system with safety and security concerns and certification assurance levels, the global schedule is usually generated as a static table. Therefore, each partition must allocate task jobs only in the temporal windows reserved for that partition. Even if the static table originally comes from a periodic server or some other scheduling policy, the final plan may be modified due to changes in the system requirements. As a consequence, the CPU assignment to a partition does not have to correspond to any known policy, and existing schedulability analyses for hierarchical systems cannot be used. This paper studies a new scheduling problem: a hierarchical system in which the global policy is not known but is provided as a set of arbitrary time windows.
    This work has been funded by the Spanish government under grant TIN2014-56158-C4-1-P-AR and by the European Commission under the FP7-ICT-2013.3.4 Programme with grant 610640.
    Guasque Ortega, A.; Balbastre, P.; Crespo, A. (2016). Real-time hierarchical systems with arbitrary scheduling at global level. Journal of Systems and Software, 119:70-86. https://doi.org/10.1016/j.jss.2016.05.040
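
    When the global level is just a static table of windows, the local analysis needs the minimum processor supply the partition can receive in any interval of a given length. The Python sketch below is a generic illustration of that computation, not the paper's algorithm; in particular, the assumption that the worst-case interval starts at a window boundary is made here only for brevity.

    # Minimum supply a partition receives in any interval of length t, given a
    # static table of windows (start, end) repeating every major frame H.
    # Generic illustration, not the paper's method.

    def supply_in(windows, H, a, b):
        # Processor time granted to the partition in the absolute interval
        # [a, b) when the window table repeats every H time units.
        total = 0.0
        k = int(a // H)
        while k * H < b:
            for (s, e) in windows:
                lo, hi = max(a, k * H + s), min(b, k * H + e)
                if hi > lo:
                    total += hi - lo
            k += 1
        return total

    def min_supply(windows, H, t):
        # Assumes the worst-case interval starts at a window boundary.
        offsets = [p for (s, e) in windows for p in (s, e)]
        return min(supply_in(windows, H, o, o + t) for o in offsets)

    # Example: two windows per 20-unit frame; worst-case supply over any 10 units.
    print(min_supply([(0, 4), (10, 13)], 20, 10))   # -> 3.0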

    Flattening Hierarchical Scheduling.

    Get PDF
    Recently, the application of virtual-machine technology to integrate real-time systems into a single host has received significant attention and caused controversy. Drawing two examples from mixed-criticality systems, we demonstrate that current virtualization technology, which handles guest scheduling as a black box, is incompatible with this modern scheduling discipline. However, there is a simple solution: exporting sufficient information to the host scheduler overcomes the problem. We describe the problem and the modifications required on the guest, and we show, using two practical real-time operating systems as examples, how flattening the hierarchical scheduling problem resolves the issue. We conclude by showing the limitations of our technique at the current state of our research.
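
    The core idea is that the host cannot honour guest deadlines if each guest is an opaque consumer of time slices; once guests export the urgency of their runnable work, the host can make one flat decision. The Python sketch below illustrates such a flattened decision with hypothetical names; it is not the interface proposed in the paper.

    # Flattened host-level decision over guests that export the priority of
    # their most urgent runnable task (hypothetical interface, for illustration).

    from dataclasses import dataclass

    @dataclass
    class GuestInfo:
        name: str
        exported_priority: int   # priority of the guest's best runnable task
        runnable: bool           # whether the guest has runnable work at all

    def pick_next(guests):
        # Pick the guest whose exported priority is globally highest
        # (higher number = more urgent) instead of blind time-slicing.
        runnable = [g for g in guests if g.runnable]
        return max(runnable, key=lambda g: g.exported_priority) if runnable else None

    # Example: a critical control guest wins over a best-effort Linux guest.
    guests = [GuestInfo("linux-best-effort", 2, True), GuestInfo("rtos-control", 9, True)]
    print(pick_next(guests).name)   # -> rtos-control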

    Bounding the Number of Self-Blocking Occurrences of SIRAP

    Full text link

    Secure Virtualization of Latency-Constrained Systems

    Get PDF
    Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems as well as saving resources such as energy, space, and cost compared to multiple single systems. Embedded environments, in contrast, often contain many separate computing systems with real-time and isolation requirements. For example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The basis of the architecture is a microkernel-based operating system that supports a variety of virtualization approaches through a generic interface, covering hardware-assisted virtualization and paravirtualization as well as multiple architectures. Studying guest systems with latency constraints with regard to virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach. Generally, guest systems are a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism for exporting these relevant events that is secure, flexible, performs well, and is easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
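
    The export mechanism is only described at a high level here, so the Python sketch below is a loose illustration of one plausible shape of such a channel (a ring the guest writes and the host drains before each scheduling decision); the record layout and names are invented for the example and are not the thesis' actual interface.

    # Hypothetical guest-to-hypervisor scheduling-event channel, sketched for
    # illustration only; the event types and layout are assumptions.

    from enum import Enum
    from collections import deque

    class SchedEvent(Enum):
        ENTER_REALTIME = 1    # guest switched to latency-critical work
        LEAVE_REALTIME = 2    # guest is back to best-effort work

    class EventChannel:
        # Stands in for a shared-memory ring buffer between guest and host.
        def __init__(self):
            self.ring = deque()

        def guest_post(self, guest_id, event):
            self.ring.append((guest_id, event))

        def host_drain(self, latency_critical):
            # The host updates its view of which guests currently need
            # low-latency scheduling before making its next decision.
            while self.ring:
                guest_id, event = self.ring.popleft()
                if event is SchedEvent.ENTER_REALTIME:
                    latency_critical.add(guest_id)
                else:
                    latency_critical.discard(guest_id)
            return latency_critical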

    Dynamics analysis and integrated design of real-time control systems

    Get PDF
    Real-time control systems are widely deployed in many applications. Theory and practice for the design and deployment of real-time control systems have evolved significantly. From the design perspective, control strategy development has been the focus of research in the control community. In order to develop good control strategies, process modelling and analysis have been investigated for decades, and stability analysis and model-based control have been heavily studied in the literature. From the implementation perspective, real-time control systems require timeliness and predictable timing behaviour in addition to logical correctness, and a real-time control system may behave very differently with different software implementations of the control strategies on a digital controller, which typically has limited computing resources. Most current research activities on software implementations concentrate on various scheduling methodologies to ensure the schedulability of multiple control tasks in constrained environments. Recently, more and more real-time control systems are implemented over data networks, leading to increasing interest worldwide in the design and implementation of networked control systems (NCS). Major research activities in NCS include control-oriented and scheduling-oriented investigations. In spite of significant progress in the research and development of real-time control systems, major difficulties remain in the state of the art. A key issue is the lack of integrated design for control development and its software implementation. For control design, the model-based control technique, the current focus of control research, does not work when a good process model is not available or is too complicated to use for control design. For control implementation on digital controllers running multiple tasks, system schedulability is essential but not enough; the ultimate objective of satisfactory quality-of-control (QoC) performance has not been addressed directly. For networked control, the majority of the control-oriented investigations are based on two unrealistic assumptions about the network-induced delay. The scheduling-oriented research focuses on schedulability and does not directly link to the overall QoC of the system. General solutions to the challenging problems of network delay and packet dropout in NCS, with direct QoC consideration from the network perspective, have not been found in the literature. This thesis addresses the design and implementation of real-time control systems with regard to dynamics analysis and integrated design. Three related areas are investigated, namely control development for controllers, control implementation and scheduling on controllers, and real-time control in networked environments. Seven research problems are identified from these areas for investigation in this thesis, and accordingly seven major contributions are claimed. Timing behaviour, quality of control, and integrated design for real-time control systems are highlighted throughout this thesis. In control design, a model-free control technique, pattern predictive control, is developed for complex reactive distillation processes. Alleviating the requirement for accurate process models, the developed control technique integrates pattern recognition, fuzzy logic, non-linear transformation, and predictive control into a unified framework to solve complex problems.
Characterising the QoC indirectly through control latency and jitter, scheduling strategies for multiple control tasks are proposed to minimise the latency and/or jitter. Also, a hierarchical, QoC-driven, event-triggered feedback scheduling architecture is developed with plug-ins for either earliest-deadline-first or fixed-priority scheduling. Linking to the QoC directly, the architecture minimises the use of computing resources without sacrificing the system QoC. It considers the control requirements, but does not rely on the control design. For real-time NCS, the dynamics of the network delay are analysed first, and the non-uniform distribution and multi-fractal nature of the delay are revealed. These results do not support two fundamental assumptions used in the existing NCS literature. Then, considering the control requirements, solutions are provided to the challenging NCS problems from the network perspective. To compensate for the network delay, a real-time queuing protocol is developed to smooth out the time-varying delay and thus to achieve more predictable behaviour of packet transmissions. For control packet dropout, simple yet effective compensators are proposed. Finally, combining the queuing protocol, the packet loss compensation, the configuration of the worst-case communication delay, and the control design, an integrated design framework is developed for real-time NCS. With this framework, the network delay is limited to within a single control period, leading to simplified system analysis and improved QoC.
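
    The dropout compensators are only named, not specified, in this abstract, so the Python sketch below shows a generic hold-last-value scheme as one simple instance of packet-loss compensation in a periodic control loop; it should not be read as the specific compensators developed in the thesis.

    # Generic hold-last-value compensation for lost measurement packets in a
    # periodic control loop (an illustration, not the thesis' compensators).

    def run_loop(measurements, dropped, controller, hold_initial=0.0):
        # Each period, compute the control action from the most recent
        # successfully received measurement; reuse the old one on a dropout.
        last_y = hold_initial
        outputs = []
        for k, y in enumerate(measurements):
            if k not in dropped:          # packet for period k arrived
                last_y = y
            outputs.append(controller(last_y))
        return outputs

    # Example: proportional controller, measurement packets lost in periods 2 and 3.
    print(run_loop([1.0, 0.8, 0.6, 0.4, 0.2], {2, 3}, lambda y: -0.5 * y))
    # -> [-0.5, -0.4, -0.4, -0.4, -0.1]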

    Composition and synchronization of real-time components upon one processor

    Get PDF
    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware with software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprised of one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these timing constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool to make the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates) and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admitting a component onto a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions to these issues cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate the component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time.
Using our method, the component executes on a virtual platform, and the only difference it experiences is that the processor speed differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component. We compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside, regardless of their scheduler or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF. Components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for confinement of temporal faults. We do this by means of experiments within an HSF-enabled real-time operating system.
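
    A concrete handle on the first issue (discontinuous supply) is the supply bound function of a periodic reservation: the least processor time a component is guaranteed in any interval of a given length. The Python sketch below implements the classical periodic resource bound, in the style of Shin and Lee, as background; the thesis' own scheduling method and its exact bounds may differ.

    # Classical supply bound function of a periodic reservation with period P
    # and budget Q (standard periodic resource model; shown as background,
    # not necessarily the exact bound used in the thesis).

    import math

    def sbf(P, Q, t):
        # Least processor supply guaranteed in any interval of length t when
        # the reservation may be served anywhere within each period.
        if t <= P - Q:
            return 0
        k = max(math.ceil((t - (P - Q)) / P), 1)
        lo, hi = (k + 1) * P - 2 * Q, (k + 1) * P - Q
        if lo <= t <= hi:
            return t - (k + 1) * (P - Q)
        return (k - 1) * Q

    # Example: a reservation of 2 units every 5; no supply is guaranteed in
    # intervals up to length 6, and 2 units are guaranteed by length 8.
    print([sbf(5, 2, t) for t in (6, 8, 11, 13)])   # -> [0, 2, 2, 4]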

    Analysis of Hierarchical EDF Pre-emptive Scheduling

    No full text
    This paper focuses on scheduling different hard real-time applications on a uniprocessor when the earliest deadline first algorithm is used as the local scheduler and the global scheduler of the system is either fixed priority (FP) or earliest deadline first (EDF). Each application task may be periodic or sporadic, bound or unbound, with an arbitrary relative deadline that can be less than, equal to, or greater than its period. A number of different server types are considered. The paper presents an efficient schedulability test for the application tasks, based on the capacity demand criterion, for a global scheduler that is either FP or EDF; in some cases the test is exact, i.e. necessary and sufficient. Schedulability tests that are necessary and sufficient for several types of dynamic servers are presented for the case where the global scheduler is EDF.
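
    The demand side of such a test is typically the EDF demand bound function of the application's tasks, which is then compared against the capacity the server supplies. The Python sketch below shows only that standard demand computation for sporadic tasks with arbitrary deadlines; the paper's actual server-specific tests are more involved.

    # Standard EDF demand bound function for sporadic tasks given as
    # (C, D, T) = (worst-case execution time, relative deadline, period).
    # Illustration of the demand side only, not the paper's full test.

    def dbf(tasks, t):
        # Total execution demand of jobs that are released and must also
        # complete within some interval of length t.
        demand = 0
        for (C, D, T) in tasks:
            n = (t - D) // T + 1     # jobs with both release and deadline inside
            if n > 0:
                demand += n * C
        return demand

    # Example: two tasks, demand over an interval of length 20.
    tasks = [(2, 6, 10), (3, 15, 15)]
    print(dbf(tasks, 20))   # two jobs of the first task, one of the second -> 7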