
    Semi-partitioned mixed-criticality scheduling

    Paper presented at the 30th International Conference on Architecture of Computing Systems (ARCS 2017), 3-6 April 2017, Session 6: Scheduling, Vienna, Austria. Scheduling isolation in mixed-criticality systems is challenging to achieve without sacrificing performance. In response, we propose a scheduling approach that combines server-based semi-partitioning and deadline scaling. Semi-partitioning (whereby only some tasks migrate, in a carefully managed manner), hitherto used in single-criticality systems, offers good performance with low overheads. Deadline scaling selectively prioritizes high-criticality tasks in parts of the schedule to ensure their deadlines are met even in rare cases of execution time overrun. Our new algorithm, NPS-F-MC, brings semi-partitioning to mixed-criticality scheduling and uses Ekberg and Yi's state-of-the-art deadline-scaling approach. It ensures scheduling isolation among tasks of different criticalities and only allows low-criticality tasks to migrate. We also explore variants that disallow migration entirely or relax the isolation between criticalities (SP-EKB) in order to evaluate the performance trade-offs associated with more flexible or rigid safety and isolation requirements.
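
    To illustrate the general idea behind deadline scaling, the sketch below computes uniform EDF-VD-style virtual deadlines for high-criticality tasks. This is only an illustration of the concept: Ekberg and Yi's algorithm, which NPS-F-MC builds on, tunes each task's virtual deadline individually via demand-bound functions and is not reproduced here. The function name and task dictionary keys are illustrative assumptions.

```python
# Minimal sketch of uniform deadline scaling (EDF-VD style), shown only to
# illustrate the deadline-scaling idea; not Ekberg and Yi's per-task tuning.

def virtual_deadlines(tasks):
    """tasks: list of dicts with keys 'name', 'C_lo', 'T', 'crit' ('LO' or 'HI')."""
    u_lo_lo = sum(t['C_lo'] / t['T'] for t in tasks if t['crit'] == 'LO')
    u_hi_lo = sum(t['C_lo'] / t['T'] for t in tasks if t['crit'] == 'HI')
    # Uniform scaling factor: HI tasks run with deadline x*T before any overrun,
    # leaving slack to absorb a switch to their larger HI-mode budgets.
    x = u_hi_lo / (1.0 - u_lo_lo)
    return {t['name']: (x * t['T'] if t['crit'] == 'HI' else t['T'])
            for t in tasks}
```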

    Software parametrization of feasible reconfigurable real-time systems under energy and dependency constraints

    Enforcing temporal constraints is necessary to maintain the correctness of a real-time system. However, a real-time system may be subject to many factors and constraints that lead to different challenges to overcome. In other words, to achieve their real-time requirements, these systems face various challenges, particularly in terms of architecture, reconfiguration, energy consumption, and dependency constraints. Unfortunately, the characterization of real-time task deadlines is a relatively unexplored problem in the real-time community. Most of the literature seems to consider that deadlines are somehow provided as hard assumptions, which can generate high costs relative to development time if these deadlines are violated at runtime. In this context, the main aim of this thesis is to determine effective temporal properties that will certainly be met at runtime under well-defined constraints. We set out to overcome these challenges in a step-wise manner, each time selecting a well-defined subset of challenges to solve. This thesis deals with reconfigurable real-time systems on mono-core and multi-core architectures. First, we propose a new scheduling strategy based on configuring feasible schedules of software tasks of various types (periodic, sporadic, and aperiodic) and constraints (hard and soft) on a mono-core architecture. The second contribution then deals with reconfigurable real-time systems on mono-core architectures under energy and resource-sharing constraints. Finally, a third contribution extends the approach to multi-core architectures.

    Mixed Criticality Systems - A Review (13th Edition, February 2022)

    This review covers research on the topic of mixed-criticality systems that has been published since Vestal's 2007 paper. It covers the period up to the end of 2021. The review is organised into the following topics: introduction and motivation, models, single-processor analysis (including job-based scheduling, hard and soft tasks, fixed-priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, systems issues, industrial practice, and research beyond mixed-criticality. A list of PhDs awarded for research relating to mixed-criticality systems is also included.

    HSM: A Hybrid Slowdown Model for Multitasking GPUs

    Graphics Processing Units (GPUs) are increasingly widely used in the cloud to accelerate compute-heavy tasks. However, GPU-compute applications stress the GPU architecture in different ways, leading to suboptimal resource utilization when a single GPU is used to run a single application. One solution is to use the GPU in a multitasking fashion to improve utilization. Unfortunately, multitasking leads to destructive interference between co-running applications, which causes fairness issues and Quality-of-Service (QoS) violations. We propose the Hybrid Slowdown Model (HSM) to dynamically and accurately predict application slowdown due to interference. HSM overcomes the low accuracy of prior white-box models, and the training and implementation overheads of pure black-box models, with a hybrid approach. More specifically, the white-box component of HSM builds upon the fundamental insight that effective bandwidth utilization is proportional to DRAM row buffer hit rate, and the black-box component of HSM uses linear regression to relate row buffer hit rate to performance. HSM accurately predicts application slowdown with an average error of 6.8%, a significant improvement over the current state of the art. In addition, we use HSM to guide various resource management schemes in multitasking GPUs: HSM-Fair significantly improves fairness (by 1.59x on average) compared to even partitioning, whereas HSM-QoS improves system throughput (by 18.9% on average) compared to proportional SM partitioning while maintaining the QoS target for the high-priority application in challenging mixed memory/compute-bound multi-program workloads.
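
    As a rough illustration of the black-box component described above, the sketch below fits a linear model relating DRAM row buffer hit rate to normalized performance and uses it to estimate slowdown. The function names and calling convention are assumptions; the real HSM additionally contains a white-box bandwidth model and distinguishes memory-bound from compute-bound behaviour.

```python
import numpy as np

# Minimal sketch: an offline least-squares fit relating row buffer hit rate to
# normalized performance, then a slowdown estimate as the ratio of predicted
# isolated to predicted shared performance. Illustrative only.

def fit_hit_rate_model(hit_rates, normalized_perf):
    """Least-squares fit: perf ~ a * row_buffer_hit_rate + b."""
    a, b = np.polyfit(hit_rates, normalized_perf, deg=1)
    return a, b

def predict_slowdown(a, b, shared_hit_rate, isolated_hit_rate):
    """Predicted slowdown = estimated isolated perf / estimated shared perf."""
    perf_shared = a * shared_hit_rate + b
    perf_isolated = a * isolated_hit_rate + b
    return perf_isolated / perf_shared
```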

    Mixed-criticality real-time task scheduling with graceful degradation

    Mixed-criticality real-time systems implement functionalities of different degrees of importance (or criticalities) upon a shared platform. In traditional mixed-criticality systems, after a switch to hi mode, no guaranteed service is provided to lo-criticality tasks: only hi-criticality tasks are considered for execution, while no guarantee is made to the lo-criticality tasks. However, with a careful, optimistic design, a certain degree of service guarantee can be provided to lo-criticality tasks upon a mode switch. This concept is broadly known as graceful degradation. Guaranteed graceful degradation provides a better quality of service and utilizes system resources more efficiently. In this thesis, we study two efficient techniques for graceful degradation. First, we study a mixed-criticality scheduling technique where graceful degradation is provided in the form of minimum cumulative completion rates. We present two easy-to-implement admission-control algorithms to determine which lo-criticality jobs to complete in hi mode. The scheduling is done using deadline virtualization, and two heuristics are shown for virtual deadline settings. We further study the schedulability analysis and the backward mode-switch conditions, which are proposed and proved in (Guo et al., 2018). Next, we present a probabilistic scheduling technique for mixed-criticality tasks on multiprocessor systems where a system-wide permitted failure probability is known. The schedulability conditions are derived along with the processor allocation scheme. The work extends (Guo et al., 2015), where the probabilistic model was first introduced for independent task scheduling on a uniprocessor platform. We further consider the failure dependency between tasks while scheduling on multiprocessor platforms. We provide related theoretical analysis to show the correctness of our work. To show the effectiveness of our proposed techniques, we conduct a detailed experimental evaluation under different circumstances.
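
    A minimal sketch of how a minimum cumulative completion rate might be enforced by admission control is given below. The greedy rule, class name, and fields are illustrative assumptions, not the specific admission-control algorithms or virtual-deadline heuristics studied in the thesis.

```python
# Minimal sketch of an admission rule for lo-criticality jobs in hi mode,
# assuming each lo task guarantees a minimum fraction of its released jobs
# will still complete after the mode switch. Illustrative greedy rule only.

class LoTaskState:
    def __init__(self, min_rate):
        self.min_rate = min_rate   # required completed/released ratio
        self.released = 0
        self.completed = 0

    def admit(self):
        """Admit a newly released job only if skipping it would drop the
        cumulative completion rate below the guaranteed minimum."""
        self.released += 1
        if self.completed < self.min_rate * self.released:
            self.completed += 1    # this job will be executed to completion
            return True
        return False               # this job may be dropped in hi mode
```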

    Modular Avionics Software Integration on Multi-Core COTS: Certification-Compliant Methodology and Timing Analysis Metrics for Legacy Software Reuse in Modern Aerospace Systems

    Interference in multi-cores is undesirable for hard real-time systems, especially in the aerospace industry, for which it is mandatory to ensure timing predictability and deadline enforcement in a system's runtime behavior beforehand in order to be granted acceptance by certification authorities. The goal of this thesis is to propose an approach for multi-core integration of legacy IMA software, without any hardware or software modification, that complies as far as possible with current incremental certification and key IMA concepts such as robust time and space partitioning. The motivation of this thesis is to stay as close as possible to the current IMA software integration process in order to maximize the chances that the avionics industry accepts its contributions, but also because the current process has long been proven effective on aerospace systems currently in use. Another motivation is to minimize the extra effort needed to provide certification authorities with the timing-related verification information required when seeking approval. As a secondary goal, depending on the possibilities, the contributions should offer design optimization features and help reduce time-to-market by automating some steps of the design and verification process. This thesis proposes two complete methodologies for IMA integration on multi-core COTS. Each of them offers different advantages and drawbacks, and therefore each may suit its own, complementary situations. One fits all avionics and certification requirements of incremental verification and robust partitioning and therefore suits applications up to DAL A, while the other offers maximum Size, Weight and Power (SWaP) optimization and suits applications up to DAL C, multi-partition applications, or non-IMA applications. The methodologies are said to be "complete" because this thesis provides all the metrics necessary to go through all steps of the software integration process. More specifically, this includes, for each strategy:
    - a static timing analysis for safely upper-bounding inter-core interference and deriving the corresponding WCET upper bounds for each task;
    - a Constraint Programming (CP) formulation for automated software/hardware allocation; the resulting allocation is correct by construction, since the CP process embraces the proposed timing analysis;
    - a CP formulation for automated schedule generation; the resulting schedule is likewise correct by construction.
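
    As a rough illustration of the interference-bounding step listed above, the sketch below inflates a task's isolated WCET by a safe per-access contention delay from co-running cores. The function and parameter names are assumptions, and the thesis's actual timing-analysis metrics are considerably more detailed.

```python
# Minimal sketch of the common interference-inflation idea behind such static
# timing analyses: every shared-memory access of the task may be delayed once
# by each contending core, by at most a known worst-case per-access delay.

def interference_aware_wcet(wcet_isolated_cycles, n_mem_accesses,
                            n_cores, worst_delay_per_access_cycles):
    """Safe upper bound on WCET when co-running with (n_cores - 1) other cores."""
    contention = n_mem_accesses * (n_cores - 1) * worst_delay_per_access_cycles
    return wcet_isolated_cycles + contention
```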

    Automated Compilation Framework for Scratchpad-based Real-Time Systems

    ScratchPad Memory (SPM) is widely adopted in real-time systems as it exhibits predictable behaviour. SPM is software-managed by explicitly inserting instructions that move code and data between the SPM and the main memory. However, deciding how to manage the SPM and manually modifying the code to insert memory transfers is a tedious job. Hence, an automated compilation tool is essential to utilize the SPM efficiently. Another key problem with SPM is the latency the system suffers due to memory transfers; hiding this latency is important for high-performance systems. In this thesis, we address the problems of managing SPM and reducing the impact of memory latency. To automate our work, we develop a compilation framework based on the LLVM compiler to analyze and transform program code. We exploit our framework to improve the performance of single-task and multi-task execution in real-time systems. For single-task execution, the Worst-Case Execution Time (WCET) is of great importance to assure correct and safe behaviour of the system, so we propose a WCET-driven allocation technique for data SPM that employs software prefetching to manage the SPM efficiently and to overlap memory transfers and task execution in a predictable way. Multi-tasking, on the other hand, requires the system to be schedulable such that all tasks can meet their timing requirements. However, executing multiple tasks on a multi-processor platform suffers from contention for accesses to the shared main memory. To avoid the contention, several scheduling techniques adopt the 3-phase execution model, which executes a task as a sequence of memory and computation phases. This provides the means to avoid contention as well as to hide memory latency by using a Direct Memory Access (DMA) engine: executing memory transfers with the DMA allows them to overlap with computation on the processor. Using the 3-phase model in systems with limited local SPM sizes may necessitate segmenting the task. Automating the segmentation process is necessary, especially for systems with large task sets. Hence, we propose a set of efficient segmentation algorithms that follow the 3-phase execution model; applying these algorithms shows a significant improvement in system schedulability. To make our segmentation algorithms more widely applicable, we extend the 3-phase model to allow programs with multiple paths represented as conditional Directed Acyclic Graphs (DAGs), unlike previous works that targeted sequential programs. We also introduce a multi-streaming model to exploit the benefits of prefetching by overlapping the memory and computation phases of the same task, which was not allowed in previous approaches. By combining the automated compilation with the proposed algorithms, we are able to achieve our goal of efficiently managing data SPM in real-time systems.
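
    A minimal sketch of the 3-phase execution model with DMA-based prefetching is given below: while the processor computes the current segment, the DMA engine loads the next one into the SPM. The dma and segment interfaces are hypothetical placeholders, not the API of the proposed compilation framework.

```python
# Minimal sketch of a 3-phase (load / compute / write-back) segment pipeline.
# dma.load/dma.wait/dma.store and seg.compute are hypothetical placeholders.

def run_segments(segments, dma):
    pending = dma.load(segments[0])            # load phase of the first segment
    for i, seg in enumerate(segments):
        dma.wait(pending)                      # segment is now fully in SPM
        if i + 1 < len(segments):
            pending = dma.load(segments[i + 1])  # overlap: prefetch next segment
        seg.compute()                          # computation phase on the SPM copy
        dma.store(seg)                         # write-back (unload) phase
```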

    Control techniques for thermal-aware energy-efficient real time multiprocessor scheduling

    The use of multi-core microprocessors is not only attractive to industry; in many domains it is the only option. Real-time scheduling on these platforms is much more complex than on uniprocessors and in general worsens the over-design problem, leading to the use of many more processors/cores than necessary. Algorithms based on fluid scheduling have been proposed that optimize processor utilization, but so far they generally exhibit drawbacks that keep them from practical application, not the least of which is the high number of context switches and migrations. This thesis starts from the hypothesis that it is possible to design algorithms based on fluid scheduling that optimize processor utilization while meeting timing, thermal, and energy constraints, with a low number of context switches and migrations, and that are compatible both with the offline generation of cyclic executives attractive to industry and with schedulers that integrate run-time control techniques enabling the efficient management of aperiodic tasks as well as parametric deviations or small perturbations. In this respect, this thesis contributes several solutions. First, it improves a modelling methodology that represents all the dimensions of the problem under a single formalism (Timed Continuous Petri Nets). Second, it proposes a method for generating a cyclic executive, computed in processor cycles, for a set of hard real-time tasks on multiprocessors, which optimizes the utilization of the processing cores while also respecting thermal and energy constraints, on the basis of a fluid schedule. Accounting for the overhead caused by context switches and migrations in a cyclic executive poses a causality dilemma: the number of context switches (and hence their overhead) is not known until the cyclic executive has been generated, but that number cannot be minimized until it has been computed. The thesis proposes a solution to this dilemma through an iterative method of proven convergence that minimizes the aforementioned overhead. In short, the thesis exploits the idea of fluid scheduling to maximize utilization (maximizing utilization being a major problem in industry) while generating a simple cyclic executive with minimal overhead (since overhead is a major problem of schedulers based on fluid scheduling). Finally, a method is proposed to use the references of the offline schedule established in the cyclic executive for tracking by an online frequency controller, so that small perturbations and parametric variations can be handled, integrating the management of aperiodic (soft real-time) tasks while ensuring the integrity of the execution of the hard real-time task set. These contributions constitute a novelty in the field, endorsed by the publications derived from this thesis work.
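
    As a rough illustration of the fluid-scheduling idea the thesis builds on, the sketch below gives each task a share of a cyclic-executive frame proportional to its utilization. The function name and task tuple format are assumptions, and the thesis's generator additionally enforces thermal and energy constraints and minimizes context switches, which this sketch does not attempt.

```python
# Minimal sketch of fluid (proportional-rate) allocation discretized into a
# fixed frame of a cyclic executive: each task gets frame_cycles * utilization
# cycles per frame. Illustrative only.

def fluid_frame_allocation(tasks, frame_cycles):
    """tasks: list of (name, wcet_cycles, period_cycles); returns cycles per frame."""
    return {name: int(frame_cycles * wcet / period)
            for name, wcet, period in tasks}
```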