51 research outputs found

    Response-Time Analysis of Conditional DAG Tasks in Multiprocessor Systems

    Different task models have been proposed to represent the parallel structure of real-time tasks executing on manycore platforms: fork/join, synchronous parallel, DAG-based, etc. Although different schedulability tests and resource augmentation bounds are available for these task systems, such results are difficult to apply to real application scenarios, where the execution flow of parallel tasks is characterized by multiple (and nested) conditional structures. When a conditional branch drives the number and size of the sub-jobs to spawn, it is hard to decide which execution path to select when modeling the worst-case scenario. To circumvent this problem, we integrate control-flow information into the task model, considering conditional parallel tasks (cp-tasks) represented by DAGs composed of both precedence and conditional edges. For this task model, we identify meaningful parameters that characterize the schedulability of the system and derive efficient algorithms to compute them. A response-time analysis based on these parameters is then presented for different scheduling policies. A set of simulations shows that the proposed approach allows the schedulability of the addressed systems to be checked efficiently, and that it significantly tightens the schedulability analysis of non-conditional (e.g., classic DAG) tasks compared with existing approaches.
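    To make these parameters concrete, the following is a minimal sketch (not the paper's algorithm) of two quantities such analyses typically rest on, the worst-case workload and the critical-path length, for a cp-task written as a nested structure of sequential, parallel, and conditional blocks; the block classes and WCET values are illustrative assumptions, and a general DAG with conditional edges is richer than this nesting.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Job:          # a sub-job with a worst-case execution time
    wcet: float

@dataclass
class Seq:          # children execute one after another
    children: List["Node"]

@dataclass
class Par:          # children may execute in parallel
    children: List["Node"]

@dataclass
class Cond:         # exactly one branch executes at run time
    branches: List["Node"]

Node = Union[Job, Seq, Par, Cond]

def volume(n: Node) -> float:
    """Worst-case workload: work adds up, but mutually exclusive
    conditional branches contribute only their heaviest alternative."""
    if isinstance(n, Job):
        return n.wcet
    if isinstance(n, (Seq, Par)):
        return sum(volume(c) for c in n.children)
    return max(volume(b) for b in n.branches)

def length(n: Node) -> float:
    """Worst-case critical-path length: sequences add up, while parallel
    and conditional blocks contribute their longest child or branch."""
    if isinstance(n, Job):
        return n.wcet
    if isinstance(n, Seq):
        return sum(length(c) for c in n.children)
    children = n.children if isinstance(n, Par) else n.branches
    return max(length(c) for c in children)

# Example: an initial job, then either a light branch or a heavy parallel one.
task = Seq([Job(2), Cond([Job(3), Par([Job(4), Job(5)])]), Job(1)])
print(volume(task), length(task))   # 12 8 with these numbers
```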

    Memory-processor co-scheduling in fixed priority systems

    A major obstacle to the adoption of multi-core platforms for real-time systems is the difficulty of characterizing the interference due to memory contention. The simple fact that multiple cores may simultaneously access shared memory and communication resources introduces significant pessimism into the timing and schedulability analysis. To counter this problem, predictable execution models have been proposed that split task execution into two consecutive phases: a memory phase in which the required instructions and data are pre-fetched to local memory (M-phase), and an execution phase in which the task is executed with no memory contention (C-phase). Decoupling memory and execution phases not only simplifies the timing analysis, but also allows a more efficient (and predictable) pipelining of memory and execution phases through proper co-scheduling algorithms. In this paper, we take a further step towards the design of smart co-scheduling algorithms for sporadic real-time tasks complying with the M/C (memory-computation) model. We provide a theoretical framework that aims at tightly characterizing the schedulability improvement obtainable with the adopted M/C task model on single-core systems. We identify a tight critical instant for M/C tasks scheduled with fixed priority, providing an exact response-time analysis with pseudo-polynomial complexity. Our experiments show that a significant schedulability improvement may be obtained with respect to classic execution models, placing an important building block towards the design of more efficient partitioned multi-core systems.
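    As a point of reference for the classic models the paper compares against, the sketch below implements the standard iterative fixed-priority response-time recurrence on one core, simply charging each task the sum of its memory and computation phases. This coarse baseline does not capture the paper's tight critical instant or the pipelining of M- and C-phases; the task parameters are illustrative assumptions.

```python
import math

# Each task: (M, C, T) with implicit deadline D = T, listed in priority
# order (index 0 = highest priority). Values are made up for illustration.
tasks = [(1.0, 2.0, 10.0), (2.0, 3.0, 20.0), (1.0, 6.0, 50.0)]

def response_time(i, tasks, max_iters=10_000):
    """Fixed-point iteration R = M_i + C_i + sum_hp ceil(R / T_j) * (M_j + C_j)."""
    m_i, c_i, t_i = tasks[i]
    r = m_i + c_i
    for _ in range(max_iters):
        interference = sum(math.ceil(r / t_j) * (m_j + c_j)
                           for m_j, c_j, t_j in tasks[:i])
        r_new = m_i + c_i + interference
        if r_new == r:
            return r        # converged: worst-case response time under this model
        if r_new > t_i:
            return None     # deadline miss under this coarse baseline
        r = r_new
    return None

for i in range(len(tasks)):
    print(f"task {i}: R = {response_time(i, tasks)}")
```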

    Compensating Adaptive Mixed Criticality Scheduling

    The majority of prior academic research into mixed criticality systems assumes that if high-criticality tasks continue to execute beyond the execution-time limits at which they would normally finish, then further workload due to low-criticality tasks may be dropped in order to ensure that the high-criticality tasks can still meet their deadlines. Industry, however, takes a different view of the importance of low-criticality tasks, with many practical systems unable to tolerate the abandonment of such tasks. In this paper, we address the challenge of supporting genuinely graceful degradation in mixed criticality systems, thus avoiding the abandonment problem. We explore the Compensating Adaptive Mixed Criticality (C-AMC) scheduling scheme. C-AMC ensures that both high- and low-criticality tasks meet their deadlines in both normal and degraded modes. Under C-AMC, jobs of low-criticality tasks released in degraded mode execute imprecise versions that provide essential functionality and outputs of sufficient quality, while also reducing the overall workload. This compensates, at least in part, for the overload due to the abnormal behavior of high-criticality tasks. C-AMC is based on fixed-priority preemptive scheduling and hence provides a viable migration path along which industry can make an evolutionary transition from current practice.
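    The mode logic behind such a scheme can be conveyed with a toy sketch that contrasts it with classic AMC: when a high-criticality job overruns its low-criticality budget, the system enters a degraded mode, but instead of abandoning low-criticality work, subsequently released low-criticality jobs run an imprecise, cheaper version. The structures, task names, and version names below are illustrative assumptions, not the paper's implementation.

```python
MODE = "NORMAL"

def on_budget_overrun(job):
    """Called when a high-criticality job exceeds its low-criticality budget."""
    global MODE
    if job["crit"] == "HI":
        MODE = "DEGRADED"            # the job may continue up to its HI budget

def select_version(task):
    """Choose which implementation of a low-criticality task to release."""
    if task["crit"] == "LO" and MODE == "DEGRADED":
        return task["imprecise"]     # reduced-quality, reduced-WCET version
    return task["full"]

# Illustrative task set: the LO task carries both a full and an imprecise version.
video = {"crit": "LO", "full": "decode_hd", "imprecise": "decode_sd"}

print(select_version(video))         # decode_hd while in normal mode
on_budget_overrun({"crit": "HI"})    # a HI job overruns its LO budget
print(select_version(video))         # decode_sd in degraded mode, not dropped
```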

    Mixed Criticality on Multi-cores Accounting for Resource Stress and Resource Sensitivity

    The most significant trend in real-time systems design in recent years has been the adoption of multi-core processors and the accompanying integration of functionality with different criticality levels onto the same hardware platform. This paper integrates mixed criticality aspects and assurances within a multi-core system model. It bounds cross-core contention and interference by considering the impact on task execution times due to the stress placed on shared hardware resources by co-runners, and each task’s sensitivity to that resource stress. Schedulability analysis is derived for four mixed criticality scheduling schemes based on partitioned fixed-priority preemptive scheduling. Each scheme provides robust timing guarantees for high-criticality tasks, ensuring that their timing constraints cannot be jeopardized by the behavior or misbehavior of low-criticality tasks.
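    The kind of execution-time inflation such a model implies can be sketched, assuming purely for illustration a linear relation between a task's sensitivity and the aggregate stress its co-runners place on the shared resources; the paper's actual bounds and the four scheduling schemes are more involved.

```python
def inflated_wcet(solo_wcet, sensitivity, co_runner_stress):
    """Illustrative linear model: execution time in isolation plus a contention
    term scaled by this task's sensitivity and its co-runners' resource stress."""
    return solo_wcet * (1.0 + sensitivity * sum(co_runner_stress))

# Hypothetical numbers: a high-criticality task co-running with two others.
print(inflated_wcet(solo_wcet=5.0, sensitivity=0.4, co_runner_stress=[0.3, 0.5]))
# -> 6.6, the value a schedulability test would use in place of 5.0
```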

    Interconnection optimization for multi-cluster avionics networks

    The increasing complexity and heterogeneity of avionics networks make resource optimization a challenging task. In contrast to many previous approaches pursuing the optimization of traffic-source mapping and backbone network analysis, the work presented herein focuses mainly on the optimization of interconnection devices for multi-cluster avionics networks. In this paper, we introduce an optimized interconnection device integrating novel frame packing strategies and schedulability analysis to enhance the communication between an AFDX-like backbone network and various peripheral sensor/actuator networks in terms of resource savings. The performance analysis conducted on a representative avionics communication architecture highlights the efficiency of our proposal in saving resources, particularly consumed bandwidth. The latter is considered an important feature for avionics applications to guarantee easy incremental design during the long lifetime of an aircraft.
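    To illustrate why frame packing saves backbone bandwidth, here is a minimal first-fit-decreasing sketch that packs small sensor messages into frames with a bounded payload and compares the per-frame overhead against sending one frame per message; the sizes, overhead, and packing policy are assumptions, not the strategies proposed in the paper.

```python
FRAME_OVERHEAD = 67    # assumed per-frame header/trailer cost in bytes
MAX_PAYLOAD = 200      # assumed maximum payload per backbone frame in bytes

def pack_first_fit_decreasing(message_sizes):
    """Pack message sizes into as few frames as a greedy first-fit allows."""
    frames = []                              # each frame is a list of message sizes
    for size in sorted(message_sizes, reverse=True):
        for frame in frames:
            if sum(frame) + size <= MAX_PAYLOAD:
                frame.append(size)
                break
        else:
            frames.append([size])
    return frames

messages = [30, 60, 90, 20, 45, 70, 15]      # bytes per sensor message (made up)
frames = pack_first_fit_decreasing(messages)
packed = sum(FRAME_OVERHEAD + sum(f) for f in frames)
unpacked = sum(FRAME_OVERHEAD + m for m in messages)
print(len(frames), "frames:", packed, "bytes on the backbone vs", unpacked, "unpacked")
```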

    A Design That Incorporates Adaptive Reservation into Mixed-Criticality Systems


    Optimal Dataflow Scheduling on a Heterogeneous Multiprocessor With Reduced Response Time Bounds

    Heterogeneous computing platforms with multiple types of computing resources have been widely used in many industrial systems to process dataflow tasks with pre-defined affinities of tasks to subgroups of resources. For many dataflow workloads with soft real-time requirements, guaranteeing fast and bounded response times is often the objective. This paper presents a new set of analysis techniques showing that a classical real-time scheduler, namely earliest-deadline-first (EDF), is able to support dataflow tasks scheduled on such heterogeneous platforms with provably bounded response times while incurring no resource capacity loss, thus proving EDF to be an optimal solution for this scheduling problem. Experiments using synthetic workloads with widely varied parameters also demonstrate that the magnitude of the response-time bounds yielded by the proposed analysis is reasonably small under all scenarios. Compared to state-of-the-art soft real-time analysis techniques, our test yields a 68% reduction in response-time bounds on average. This work demonstrates the potential of applying EDF in practical industrial systems containing dataflow-based workloads that require guaranteed bounded response times.
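    The dispatching rule under analysis can be illustrated with a small sketch that, for each free resource instance, picks the pending job with the earliest deadline among those whose affinity allows it to run there; the response-time bounds themselves come from the paper's analysis and are not reproduced here, and the job parameters and resource names are assumptions.

```python
# Pending jobs: (absolute_deadline, job_id, affinity), where affinity is the
# set of resource types the job may run on (e.g., CPU, GPU, DSP).
pending = [
    (12.0, "filter_a", {"CPU"}),
    (8.0,  "infer_b",  {"GPU"}),
    (9.0,  "mix_c",    {"CPU", "DSP"}),
    (15.0, "log_d",    {"CPU"}),
]

def edf_dispatch(pending, free_resources):
    """Assign each free resource instance the earliest-deadline pending job
    whose affinity includes that resource's type (EDF within each subgroup)."""
    assignment = {}
    remaining = sorted(pending)                  # EDF order by absolute deadline
    for res in free_resources:
        res_type = res.split("#")[0]             # e.g., "CPU#1" -> "CPU"
        for job in remaining:
            if res_type in job[2]:
                assignment[res] = job[1]
                remaining.remove(job)
                break
    return assignment

print(edf_dispatch(pending, ["CPU#0", "CPU#1", "GPU#0", "DSP#0"]))
# earliest deadlines are served first on every compatible resource type
```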

    Improving Responsiveness of Time-Sensitive Applications by Exploiting Dynamic Task Dependencies

    In this paper, a mechanism is presented for reducing priority inversion in multi-programmed computing systems. Contrary to well-known approaches from the literature, this paper tackles cases where the dependency relationships among tasks cannot be known in advance by the operating system (OS). The presented mechanism allows tasks to explicitly declare these relationships, enabling the OS scheduler to take advantage of such information and trigger priority inheritance, resulting in reduced priority inversion. We present a prototype implementation of the concept within the Linux kernel, in the form of modifications to the standard POSIX condition variables code, along with an extensive evaluation including a quantitative assessment of the benefits for applications making use of the technique, as well as comprehensive overhead measurements. We also present an associated technique for the theoretical schedulability analysis of a system using the new mechanism, which is useful for determining whether all tasks can meet their deadlines, in the specific scenario of tasks interacting only through remote procedure calls and under partitioned scheduling.
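    To convey the idea only (this is neither the kernel patch nor the modified POSIX condition-variable interface), here is a toy model of what declaring a dependency buys: once a waiting task tells the scheduler which task it is waiting for, that task inherits the waiter's priority until the dependency is cleared. All class and function names are invented for illustration.

```python
class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority   # higher number = higher priority
        self.inherited = []                  # boosts received from waiters

    @property
    def effective_priority(self):
        return max([self.base_priority] + self.inherited)

def declare_dependency(waiter, helper):
    """The waiter declares it is blocked until 'helper' signals it; the helper
    inherits the waiter's effective priority (priority inheritance)."""
    boost = waiter.effective_priority
    helper.inherited.append(boost)
    return boost

def clear_dependency(helper, boost):
    """Called when the helper signals the waiter; the boost is withdrawn."""
    helper.inherited.remove(boost)

server = Task("server", base_priority=1)     # low-priority worker
client = Task("client", base_priority=10)    # time-sensitive task

boost = declare_dependency(client, server)
print(server.effective_priority)             # 10: a medium-priority task can no
                                             # longer keep the client waiting
clear_dependency(server, boost)
print(server.effective_priority)             # back to 1
```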

    Tracking coherence-related contention delays in real-time multicore systems

    The prevailing use of multicores in Embedded Critical Systems (ECS) is for multi-application workloads in which independent applications run on different cores, with data sharing restricted to the communication between applications and the real-time operating system. However, thread-level parallelism (e.g., OpenMP) is increasingly used in ECS to improve the performance of individual applications. At the hardware level, we are witnessing increased research efforts to master and improve multicore cache coherence, which plays a key role in enabling efficient data sharing among threads. Despite these efforts, the limited information provided by performance monitoring counters on cache coherence limits the understanding of coherence's impact on task execution times and hence poses severe constraints on estimating tight worst-case execution time bounds. Along these lines, this work contributes an analysis of the impact that cache coherence can have on application timing behavior, and a new set of low-overhead performance monitoring counters that can be used to track the coherence-related contention that different threads can cause on each other when sharing data. Our results show that the proposed performance monitoring counters effectively capture all coherence-related contention that tasks can suffer and hence are key for parallel software timing validation and verification in ECS. Furthermore, they help application optimization by providing key information about data sharing among the application threads. The research leading to these results has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772773). This work has also been partially supported by Grant PID2019-107255GB-C21 funded by MCIN/AEI/10.13039/501100011033.
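    A sketch of how such counters could feed a contention bound, assuming purely for illustration that each counter reports, per pair of threads, how many coherence events one thread caused on the other, and that every event costs at most a known worst-case latency; the counter layout and latency are hypothetical, not the hardware support proposed in the paper.

```python
# Hypothetical per-pair counters: events[i][j] is how many coherence-related
# stalls thread j caused on thread i while sharing data. Neither the layout
# nor the numbers come from real hardware.
events = [
    [0, 120, 30],
    [80,  0, 10],
    [25, 15,  0],
]
WORST_CASE_EVENT_LATENCY = 60                  # assumed cycles per event

solo_wcet_cycles = [100_000, 80_000, 120_000]  # measured in isolation (made up)

def contention_bound(thread, events, latency):
    """Bound the coherence-related delay a thread can suffer as the events
    caused on it by every other thread times the per-event worst-case cost."""
    return latency * sum(events[thread][other]
                         for other in range(len(events)) if other != thread)

for i, solo in enumerate(solo_wcet_cycles):
    bound = solo + contention_bound(i, events, WORST_CASE_EVENT_LATENCY)
    print(f"thread {i}: contention-aware bound = {bound} cycles")
```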