67 research outputs found

    Decoupling Criticality and Importance in Mixed-Criticality Scheduling

    Get PDF
    Research on mixed-criticality scheduling has flourished since Vestal's seminal 2007 paper, but more effort is needed to make these results suitable for industrial adoption, and robust and versatile enough to influence the evolution of future certification standards. With this in mind, we introduce a more refined task model, in line with the fundamental principles of Vestal's mode-based adaptive mixed-criticality model, which allows a task's criticality and its importance to be specified independently of each other. A task's importance is the criterion that determines its presence in different system modes. Meanwhile, a task's criticality (reflected in its Safety Integrity Level (SIL) and defining the rules for its software development process) prescribes the degree of conservativeness used for the task's estimated WCET during schedulability testing. We indicate how such a task model can help resolve some of the perceived weaknesses of the Vestal model, in terms of how it is interpreted, and demonstrate how the existing scheduling tests for the classic variants of Vestal's model can be mapped to the new task model essentially without changes.
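
    As a rough illustration of the decoupling described in this abstract (the names and fields below are ours, not the paper's), importance alone decides whether a task is kept in a given system mode, while criticality only selects how conservative a WCET estimate the schedulability test uses:

        from dataclasses import dataclass, field

        @dataclass
        class Task:
            name: str
            period: float
            deadline: float
            criticality: str            # e.g. a SIL such as "SIL3"; fixes development rules
            importance: int             # higher = retained in more degraded system modes
            wcet: dict = field(default_factory=dict)   # assurance level -> WCET estimate

        def tasks_in_mode(tasks, importance_threshold):
            # Presence in a mode depends on importance only, not on criticality.
            return [t for t in tasks if t.importance >= importance_threshold]

        def wcet_for_test(task, assurance_level):
            # Criticality (via the assurance level) picks the degree of conservativeness
            # of the WCET estimate; fall back to the most conservative one available.
            return task.wcet.get(assurance_level, max(task.wcet.values()))

    Under such a model, a low-criticality but high-importance task can survive a mode change while still being analysed with a less conservative WCET estimate.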

    Multiprocessor Scheduling meets the Industrial Wireless

    Get PDF
    This survey covers schedulability analysis approaches that have been recently proposed for multi-hop and multi-channel wireless sensor and actuator networks in the industrial process control domain. It reviews results with a focus on WirelessHART-like networks. The paper addresses the mapping of multi-channel transmission scheduling to multiprocessor scheduling theory and recognizes it as the key aspect of the research direction covered by this survey. It also provides a taxonomy of the existing approaches in this direction and discusses their main features and evolution. The survey identifies open issues, key research challenges, and future directions.
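
    The core analogy the survey builds on can be sketched in a few lines (a purely illustrative simplification of ours, ignoring node conflicts and routing): in each time slot the available channels play the role of identical processors, so dispatching transmissions per slot resembles global multiprocessor scheduling.

        def schedule_slot(ready_transmissions, num_channels):
            # Treat the channels of one slot like m identical processors:
            # dispatch at most m pending transmissions, here by earliest deadline.
            ready = sorted(ready_transmissions, key=lambda tx: tx["deadline"])
            return ready[:num_channels]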

    220801

    Get PDF
    Multicore platforms are being increasingly adopted in Cyber-Physical Systems (CPS) due to their advantages over single-core processors, such as raw computing power and energy efficiency. Typically, multicore platforms use a shared memory bus that connects the cores to the off-chip main memory. This sharing of the memory bus may cause tasks running on different cores to compete for access to the main memory whenever data/instructions need to be read from/written to the main memory. Such competition is problematic, as it may cause variations in the execution time of tasks in a non-deterministic way. To reduce the complexity of analysing this problem, the 3-phase task model was proposed, which divides tasks' executions into distinct memory and execution phases. The distinct memory phases are then scheduled to eliminate/minimize main memory contention between concurrently executing tasks. However, 3-phase tasks running on different cores may still compete for the shared memory bus/main memory in order to execute their memory phases. This paper presents a partitioned scheduling-based approach that allows one to derive the memory bus contention-aware worst-case response time of tasks that follow the 3-phase task model. In particular, the bus-contention analysis is derived by considering two memory access models: (i) a dedicated memory access model, in which a core that is granted access to the main memory via the memory bus is permitted to execute more than one memory phase, and (ii) a fair memory access model, which restricts each core to executing only one memory phase per allocated bus access. These two models represent different system and application requirements, and the resulting bus contention of tasks may vary depending on the model considered. To evaluate the effectiveness of the proposed bus contention analysis, we compare its performance against an existing state-of-the-art analysis by performing (i) case-study experiments, using benchmarks from the Mälardalen Benchmark suite, and (ii) an empirical evaluation using synthetic task sets. Results show that our proposed analysis can improve task set schedulability of 3-phase tasks by up to 88 percentage points. This work was partially supported by European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505. Project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER000020" financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement; also by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); by FCT and the Portuguese National Innovation Agency (ANI), under the CMU Portugal partnership, through the European Regional Development Fund (ERDF) of the Operational Competitiveness Programme and Internationalization (COMPETE 2020), under the PT2020 Partnership Agreement, within project FLOYD (POCI-01-0247-FEDER-045912); also by FCT under PhD grant 2020.09532.BD.
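
    For readers unfamiliar with the 3-phase (read/execute/write) structure, the sketch below loosely illustrates the task model and a deliberately coarse bus-delay bound under a fair-access policy; it is not the analysis derived in the paper, and all names are ours.

        from dataclasses import dataclass

        @dataclass
        class ThreePhaseTask:
            acquisition: float    # memory phase: load data/instructions into local memory
            execution: float      # compute phase: no main-memory traffic
            restitution: float    # memory phase: write results back to main memory
            period: float
            deadline: float

        def memory_demand(task):
            return task.acquisition + task.restitution

        def fair_access_bus_delay(task, longest_memory_phase_per_other_core):
            # Coarse fair-access illustration: each of this task's two memory phases
            # may have to wait for at most one memory phase of every competing core.
            return 2 * sum(longest_memory_phase_per_other_core)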

    Low-Overhead Online Assessment of Timely Progress as a System Commodity

    Get PDF

    220606

    Get PDF
    To reduce the latency of time-sensitive flows in Ethernet networks, the IEEE TSN Task Group introduced the IEEE 802.1Qbu Standard, which specifies a 1-level preemption scheme for IEEE 802.1 networks. Recently, serious limitations of this scheme w.r.t. flow responsiveness were exposed, and the so-called multi-level preemption approach was proposed to address these drawbacks. As with most, if not all, real-time and/or time-sensitive preemptive systems, an appropriate priority-to-flow assignment policy plays a central role in the resulting performance of both the 1-level and multi-level preemption schemes, and in avoiding over-provisioning and/or sub-optimal use of hardware resources. On another front, the multi-level preemption scheme raises new configuration challenges. Specifically, the right number of preemption levels to enable for swift transmission of flows, and the synthesis of the flow-to-preemption-class assignment, remain open problems. To the best of our knowledge, there is no prior work in the literature addressing these important challenges. In this work, we address all three challenges. We demonstrate the applicability of our proposed solution using both synthetic and real-life use-cases. Our experimental results show that multi-level preemption schemes improve the schedulability of flows by over 12% compared to a 1-level preemption scheme, and, at a higher abstraction level, the proposed configuration framework improves the schedulability of flows by up to 6% compared to the dominant Deadline Monotonic Priority Ordering. This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); also by FCT through the European Social Fund (ESF) and the Regional Operational Programme (ROP) Norte 2020, under grant 2020.09636.BD.
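
    As a hint of what a flow-to-preemption-class assignment might look like (a naive heuristic of ours, not the configuration framework proposed in the paper), one can order flows by deadline and split them into contiguous classes, with shorter-deadline classes allowed to preempt longer-deadline ones:

        def assign_preemption_classes(flows, num_classes):
            # Deadline-monotonic ordering, then contiguous grouping: flows in a
            # lower-indexed class may preempt flows in any higher-indexed class.
            ordered = sorted(flows, key=lambda f: f["deadline"])
            size = max(1, -(-len(ordered) // num_classes))   # ceiling division
            return [ordered[i:i + size] for i in range(0, len(ordered), size)]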

    Mixed Criticality on Multi-cores Accounting for Resource Stress and Resource Sensitivity

    Get PDF
    The most significant trend in real-time systems design in recent years has been the adoption of multi-core processors and the accompanying integration of functionality with different criticality levels onto the same hardware platform. This paper integrates mixed-criticality aspects and assurances within a multi-core system model. It bounds cross-core contention and interference by considering the impact on task execution times due to the stress on shared hardware resources caused by co-runners, and each task's sensitivity to that resource stress. Schedulability analysis is derived for four mixed-criticality scheduling schemes based on partitioned fixed-priority preemptive scheduling. Each scheme provides robust timing guarantees for high-criticality tasks, ensuring that their timing constraints cannot be jeopardized by the behavior or misbehavior of low-criticality tasks.
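
    The stress/sensitivity idea lends itself to a compact illustration: inflate each task's execution time by the resource stress its co-runners generate, scaled by the task's own sensitivity, and feed the inflated values into a standard fixed-priority response-time recurrence. The sketch below shows only that generic pattern, with made-up field names; the paper's four schemes and their analyses are more refined.

        import math

        def inflated_wcet(c_isolation, sensitivity, co_runner_stress):
            # Execution time grows with the stress placed on shared resources by
            # co-runners, weighted by this task's sensitivity to that stress.
            return c_isolation * (1.0 + sensitivity * co_runner_stress)

        def response_time(task, higher_prio, stress):
            # Classic response-time iteration, run on the inflated execution times.
            c = inflated_wcet(task["C"], task["S"], stress)
            r = c
            while True:
                interference = sum(math.ceil(r / hp["T"]) *
                                   inflated_wcet(hp["C"], hp["S"], stress)
                                   for hp in higher_prio)
                r_next = c + interference
                if r_next == r or r_next > task["D"]:
                    return r_next   # fixed point reached, or deadline exceeded
                r = r_next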

    Evaluation of the Age Latency of a Real-Time Communicating System Using the LET Paradigm

    Get PDF
    Automotive and avionics embedded systems are usually composed of several tasks that are subject to complex timing constraints. In this context, the LET paradigm was introduced to improve the determinism of a system of tasks that communicate data through shared variables. The age latency corresponds to the maximum time for the propagation of data in these systems. Its precise evaluation is an important and challenging question for the design of these systems. We consider in this paper a set of multi-periodic tasks that communicate data following the LET paradigm. Our main contribution is the development of mathematical and algorithmic tools that precisely model the dependencies between task executions, which we use to experiment with an original methodology for computing the age latency of the system. These tools allow us to handle the whole graph instead of particular chains and to automatically extract the critical parts of the graph. Experiments on randomly generated graphs indicate that systems with up to 90 periodic tasks and a hyperperiod bounded by 100 can be handled within a reasonable amount of time.
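
    To make the timing semantics concrete, the brute-force sketch below computes one end-to-end propagation delay for a two-task LET chain (the producer reads at release and publishes at the end of its LET interval; the consumer reads the freshest published value). It only illustrates how LET fixes data exchange instants; the paper's contribution is a graph-level age-latency computation, not this enumeration.

        from math import gcd

        def let_chain_delay(t_prod, t_cons):
            # Implicit LET intervals: read at release, publish at release + period.
            # Return the worst delay from a producer job's read instant to the
            # publication instant of the first consumer job that reads its output.
            hyper = t_prod * t_cons // gcd(t_prod, t_cons)
            worst = 0
            for k in range(2 * hyper // t_prod):
                read = k * t_prod
                publish = read + t_prod
                j = -(-publish // t_cons)          # first consumer release >= publish
                consumer_out = j * t_cons + t_cons
                worst = max(worst, consumer_out - read)
            return worst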

    Software Fault Tolerance in Real-Time Systems: Identifying the Future Research Questions

    Get PDF
    Tolerating hardware faults in modern architectures is becoming a prominent problem due to the miniaturization of hardware components, their increasing complexity, and the necessity to reduce costs. Software-Implemented Hardware Fault Tolerance approaches have been developed to improve system dependability against hardware faults without resorting to custom hardware solutions. However, they come at the expense of making it harder, from a scheduling standpoint, to satisfy the timing constraints of the applications/activities. This paper surveys the current state of the art of fault tolerance approaches when used in the context of real-time systems, identifying the main challenges and the cross-links between these two topics. We propose a joint scheduling-failure analysis model that highlights the formal interactions between software fault tolerance mechanisms and timing properties. This model allows us to present and discuss many open research questions, with the final aim of spurring future research activities.
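
    One classical point of contact between the two topics, given here only as background and not as the joint model proposed in the paper, is that recovery through re-execution appears as an extra term in fixed-priority response-time analysis. A minimal sketch, assuming at most a fixed number of faults can hit a task's response window:

        import math

        def fault_tolerant_response_time(task, higher_prio, max_faults):
            # Each fault is recovered by re-executing a recovery section; at most
            # max_faults faults are assumed within the response window, each costing
            # the longest recovery section among the task and its higher-priority tasks.
            r = task["C"]
            while True:
                interference = sum(math.ceil(r / hp["T"]) * hp["C"] for hp in higher_prio)
                recovery = max_faults * max([task["R"]] + [hp["R"] for hp in higher_prio])
                r_next = task["C"] + interference + recovery
                if r_next == r or r_next > task["D"]:
                    return r_next
                r = r_next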

    The SRP Resource Sharing Protocol for Self-Suspending Tasks

    Get PDF
    Motivated by the increasingly wide adoption of real-time workloads with self-suspending behavior, and the relevance of mechanisms to handle mutually exclusive shared resources, this paper takes a new look at locking protocols for self-suspending tasks under uniprocessor fixed-priority scheduling. Pitfalls in integrating the widely adopted Stack Resource Policy (SRP) with self-suspending tasks are first illustrated, and then a new fine-grained SRP analysis is presented. Next, a new locking protocol, named SRP-SS, is proposed to overcome the limitations of the original SRP. The SRP-SS is a generalization of the SRP that copes with the specificities of self-suspending tasks. It reduces to the SRP under some configurations and hence theoretically dominates the SRP. It also ensures backward compatibility for applications developed specifically for the SRP. The SRP-SS comes with its own schedulability analysis and configuration algorithm. The performance of the SRP and SRP-SS is finally studied by means of large-scale schedulability experiments.
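
    For context, the classic SRP blocking bound and a suspension-oblivious response-time check can be sketched as below (our simplification, not the paper's fine-grained analysis or the SRP-SS rules); the pessimism of folding suspension time into computation is precisely what motivates a more careful treatment.

        import math

        def srp_blocking(task, lower_prio_tasks):
            # Classic SRP bound: blocked at most once, by the longest outermost
            # critical section of a lower-priority task locking a resource whose
            # ceiling is at least this task's preemption level.
            candidates = [cs["length"]
                          for lp in lower_prio_tasks
                          for cs in lp["critical_sections"]
                          if cs["ceiling"] >= task["preemption_level"]]
            return max(candidates, default=0)

        def suspension_oblivious_response_time(task, higher_prio, blocking):
            # Suspension-oblivious: treat every task's self-suspension time as if it
            # were extra computation (safe, but pessimistic).
            c = task["C"] + task["suspension"]
            r = c + blocking
            while True:
                r_next = c + blocking + sum(math.ceil(r / hp["T"]) *
                                            (hp["C"] + hp["suspension"])
                                            for hp in higher_prio)
                if r_next == r or r_next > task["D"]:
                    return r_next
                r = r_next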

    Interference-Aware Schedulability Analysis and Task Allocation for Multicore Hard Real-Time Systems

    Full text link
    There has been a trend towards using multicore platforms for real-time embedded systems due to their high computing performance. In the scheduling of a multicore hard real-time system, there are interference delays due to contention for shared hardware resources. The main sources of interference are memory, cache memory, and the shared memory bus. These interferences are a great source of unpredictability and are not always taken into account. Recent papers have proposed task models and schedulability algorithms to account for this interference delay. The aim of this paper is to provide a schedulability analysis for a task model that incorporates interference delay, for both fixed and dynamic priorities. We assume an implicit-deadline task model. We rely on a task model in which this interference is integrated in a general way, without depending on a specific type of hardware resource. There are similar approaches, but they consider only fixed priorities. An allocation algorithm that minimises this interference (Imin) is also proposed and compared with existing allocators. The results show that Imin achieves the best results in terms of schedulability percentage and increased utilisation. In addition, Imin presents good results in terms of solution times. This work was supported under Grant PLEC2021-007609 funded by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR". Aceituno-Peinado, JM.; Guasque Ortega, A.; Balbastre, P.; Simó Ten, JE.; Crespo, A. (2022). Interference-Aware Schedulability Analysis and Task Allocation for Multicore Hard Real-Time Systems. Electronics. 11(9):1-21. https://doi.org/10.3390/electronics11091313
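
    As a rough idea of what an interference-aware allocator does (a naive greedy heuristic of ours, not the Imin algorithm evaluated in the paper), each task can be placed on the core that minimises the interference it would exchange with tasks already assigned to other cores, subject to a utilisation bound:

        def interference_aware_allocation(tasks, num_cores, interference):
            # interference[a][b] is an assumed pairwise estimate of the delay tasks
            # a and b inflict on each other when placed on different cores.
            cores = [{"tasks": [], "util": 0.0} for _ in range(num_cores)]
            for task in sorted(tasks, key=lambda t: t["C"] / t["T"], reverse=True):
                util = task["C"] / task["T"]
                best, best_cost = None, None
                for idx, core in enumerate(cores):
                    if core["util"] + util > 1.0:
                        continue
                    cost = sum(interference[task["id"]][other["id"]]
                               for j, c in enumerate(cores) if j != idx
                               for other in c["tasks"])
                    if best is None or cost < best_cost:
                        best, best_cost = core, cost
                if best is None:
                    return None   # this simple heuristic found no feasible placement
                best["tasks"].append(task)
                best["util"] += util
            return cores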