548 research outputs found

    Overhead Based Cluster Scheduling of Mixed Criticality Systems on Multicore Platform

    The cluster-based technique is gaining attention for scheduling tasks of mixed-criticality (MC) real-time multicore systems. In this technique, the cores of the MC system are grouped into clusters. Once all cores have been distributed among clusters, the tasks are partitioned across the clusters and scheduled on the cores within each cluster using a global approach. In this study, a cluster-based technique is adopted for scheduling tasks of real-time mixed-criticality systems (MCS). The Decreasing Criticality Decreasing Utilization with worst-fit (DCDU-WF) technique is used for partitioning tasks to clusters, whereas a novel mixed-criticality cluster-based boundary fair (MC-Bfair) scheduling approach is used for scheduling tasks on the cores within each cluster. The MC-Bfair algorithm reduces the number of context switches and task migrations, which minimizes the overhead of mixed-criticality tasks. The migration and context-switch overhead time is added to a task at each migration and context switch, respectively: in low-criticality mode, the low-mode context-switch and migration overhead time is added to the task's execution time, while in high-criticality mode the high-mode migration and context-switch overhead time is added. The experimental results show better schedulability performance for the proposed cluster-based technique compared to cluster-based fixed priority (CB-FP), MC-EKG-VD-1, global, and partitioned scheduling techniques; e.g., for target utilization U=0.6, the proposed technique schedules 66.7% of task sets, while MC-EKG-VD-1, CB-FP, partitioned, and global techniques schedule 50%, 33.3%, 16.7%, and 0% of task sets, respectively.
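    The partitioning and overhead-accounting steps summarized above can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical illustration of DCDU-WF-style partitioning (sort tasks by decreasing criticality, then decreasing utilization, and assign each task worst-fit to the least-loaded cluster) and of inflating a task's execution time with mode-specific context-switch and migration overheads; the field names, overhead values, and the simple utilization-based capacity test are assumptions for illustration, not the paper's exact model.

    # Minimal sketch (assumed model): DCDU-WF-style partitioning of mixed-criticality
    # tasks to clusters, plus mode-dependent overhead inflation of execution times.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        criticality: int      # higher value = more critical (e.g., 1 = LO, 2 = HI)
        wcet: float           # worst-case execution time without overheads
        period: float

        @property
        def utilization(self) -> float:
            return self.wcet / self.period

    def dcdu_wf_partition(tasks, num_clusters, cores_per_cluster):
        """Sort by Decreasing Criticality, then Decreasing Utilization, and
        assign each task worst-fit (to the cluster with the most spare capacity)."""
        clusters = [{"tasks": [], "util": 0.0} for _ in range(num_clusters)]
        ordered = sorted(tasks, key=lambda t: (-t.criticality, -t.utilization))
        for t in ordered:
            target = min(clusters, key=lambda c: c["util"])   # least-loaded cluster
            if target["util"] + t.utilization > cores_per_cluster:
                raise RuntimeError(f"{t.name} does not fit in any cluster")
            target["tasks"].append(t)
            target["util"] += t.utilization
        return clusters

    def inflated_wcet(wcet, mode, n_ctx_switches, n_migrations,
                      cs_overhead={"LO": 0.01, "HI": 0.02},
                      mig_overhead={"LO": 0.05, "HI": 0.10}):
        """Add mode-specific context-switch and migration overheads (assumed values)."""
        return wcet + n_ctx_switches * cs_overhead[mode] + n_migrations * mig_overhead[mode]

    if __name__ == "__main__":
        taskset = [Task("t1", 2, 2.0, 10), Task("t2", 1, 3.0, 15), Task("t3", 2, 1.0, 5)]
        print(dcdu_wf_partition(taskset, num_clusters=2, cores_per_cluster=2))
        print(inflated_wcet(2.0, "HI", n_ctx_switches=3, n_migrations=1))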

    Predictable migration and communication in the Quest-V multikernel

    Quest-V is a system we have been developing from the ground up, with objectives focusing on safety, predictability and efficiency. It is designed to work on emerging multicore processors with hardware virtualization support. Quest-V is implemented as a "distributed system on a chip" and comprises multiple sandbox kernels. Sandbox kernels are isolated from one another in separate regions of physical memory, having access to a subset of processing cores and I/O devices. This partitioning prevents system failures in one sandbox from affecting the operation of other sandboxes. Shared memory channels managed by system monitors enable inter-sandbox communication. The distributed nature of Quest-V means each sandbox has a separate physical clock, with all event timings being managed by per-core local timers. Each sandbox is responsible for its own scheduling and I/O management, without requiring the intervention of a hypervisor. In this paper, we formulate bounds on inter-sandbox communication in the absence of a global scheduler or global system clock. We also describe how address space migration between sandboxes can be guaranteed without violating service constraints. Experimental results on a working system show the conditions under which Quest-V performs real-time communication and migration. National Science Foundation (1117025)
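    As a rough, hypothetical illustration of how an end-to-end bound can be composed without a global clock, the sketch below simply adds up worst-case sender and receiver delays for a one-way shared-memory channel serviced by periodically scheduled, polling VCPUs; the parameters and the additive form are assumptions for illustration only, not the bound derived in the paper.

    # Minimal sketch (assumed model): worst-case delivery latency for a one-way
    # shared-memory channel between two sandboxes, each side serviced by a
    # periodically scheduled VCPU (period T, budget C) that polls the channel.

    def worst_case_delivery(T_send, C_send, T_recv, C_recv, copy_time):
        """Pessimistic additive bound: the sender may have just exhausted its
        budget (wait up to T_send - C_send) before spending copy_time to publish
        the message; the receiver may likewise have just finished polling and
        only notices the message in its next replenishment window."""
        sender_delay = (T_send - C_send) + copy_time
        receiver_delay = (T_recv - C_recv) + copy_time
        return sender_delay + receiver_delay

    if __name__ == "__main__":
        # Example with assumed numbers (milliseconds).
        print(worst_case_delivery(T_send=10, C_send=2, T_recv=5, C_recv=1, copy_time=0.2))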

    Real-time operating system support for multicore applications

    Thesis (doctoral) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2014. Abstract: Modern multicore platforms feature multiple levels of cache memory placed between the processor and main memory to hide the latency of ordinary memory systems. The primary goal of this cache hierarchy is to improve average execution time (at the cost of predictability). The uncontrolled use of the cache hierarchy by real-time tasks may impact the estimation of their worst-case execution times (WCET), especially when real-time tasks access a shared cache level, causing contention for shared cache lines and increasing the application execution time. This contention in the shared cache may lead to deadline misses, which is intolerable particularly for hard real-time (HRT) systems. Shared cache partitioning is a well-known technique used in multicore real-time systems to isolate task workloads and to improve system predictability. Presently, the state-of-the-art studies that evaluate shared cache partitioning on multicore processors lack two key issues. First, the cache partitioning mechanism is typically implemented either in a simulated environment or in a general-purpose OS (GPOS), and so the impact of kernel activities, such as interrupt handlers and context switching, on the task partitions tends to be overlooked. Second, the evaluation is typically restricted to either a global or a partitioned scheduler, thereby failing to compare the performance of cache partitioning when tasks are scheduled by different schedulers. Furthermore, recent works have confirmed that OS implementation aspects, such as the choice of scheduling data structures and interrupt handling mechanisms, impact real-time schedulability as much as scheduling-theoretic aspects. However, these studies also used real-time patches applied to GPOSes, which affects the run-time overhead observed in these works and consequently the schedulability of real-time tasks.
    Additionally, current multicore scheduling algorithms do not consider scenarios where real-time tasks access the same cache lines due to true or false sharing, which also impacts the WCET. This thesis addresses the aforementioned problems with cache partitioning techniques and multicore real-time scheduling algorithms as follows. First, real-time multicore support is designed and implemented on top of an embedded operating system designed from scratch. This support consists of several multicore real-time scheduling algorithms, such as global and partitioned EDF, and a cache partitioning mechanism based on page coloring. Second, a comparison is presented, in terms of schedulability ratio, between the run-time overhead of the implemented RTOS and that of a GPOS patched with real-time extensions. In some cases, Global-EDF considering the overhead of the RTOS is superior to Partitioned-EDF considering the overhead of the patched GPOS, which clearly shows how different OSs impact hard real-time schedulers. Third, an evaluation of the cache partitioning impact on partitioned, clustered, and global real-time schedulers is performed. The results indicate that a lightweight RTOS does not compromise real-time guarantees, and that shared cache partitioning behaves differently depending on the scheduler and the tasks' working set size. Fourth, a task partitioning algorithm that assigns tasks to cores respecting their usage of cache partitions is proposed. The results show that by simply assigning tasks that share cache partitions to the same processor, it is possible to reduce the contention for shared cache lines and to provide HRT guarantees. Finally, a two-phase multicore scheduler that provides HRT and soft real-time (SRT) guarantees is proposed. It is shown that by using information from hardware performance counters at run time, the RTOS can detect when best-effort tasks interfere with real-time tasks in the shared cache and can then prevent best-effort tasks from doing so. The results also show that the assignment of exclusive cache partitions to HRT tasks, together with the two-phase multicore scheduler, provides HRT and SRT guarantees even when best-effort tasks share partitions with real-time tasks.
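    The page-coloring mechanism and the partition-aware task assignment summarized above can be sketched as follows. This is a simplified Python illustration under assumed cache parameters (the color of a physical page is taken from the set-index bits that lie above the page offset), with a greedy co-location heuristic; it is not the thesis implementation.

    # Minimal sketch (assumed parameters): page coloring for a physically indexed
    # shared cache, and a partition-aware assignment that co-locates tasks sharing
    # a cache partition (color set) on the same core.

    PAGE_SIZE  = 4096          # bytes
    CACHE_SIZE = 2 * 1024**2   # 2 MiB shared last-level cache (assumed)
    LINE_SIZE  = 64            # bytes
    WAYS       = 16            # associativity (assumed)

    SETS       = CACHE_SIZE // (LINE_SIZE * WAYS)
    NUM_COLORS = (SETS * LINE_SIZE) // PAGE_SIZE   # bytes in one way / page size

    def page_color(phys_addr: int) -> int:
        """Color = set-index bits above the page-offset bits of the physical address."""
        return (phys_addr // PAGE_SIZE) % NUM_COLORS

    def assign_sharing_tasks(tasks, num_cores):
        """Greedy sketch: a task preferably goes to a core that already holds one of
        its cache colors; otherwise it goes to the least-utilized core."""
        cores = [{"tasks": [], "util": 0.0, "colors": set()} for _ in range(num_cores)]
        for name, util, colors in tasks:   # tasks: (name, utilization, frozenset of colors)
            candidates = [c for c in cores if c["colors"] & colors]
            core = min(candidates or cores, key=lambda c: c["util"])
            core["tasks"].append(name)
            core["util"] += util
            core["colors"] |= colors
        return cores

    if __name__ == "__main__":
        print(NUM_COLORS, page_color(0x12345000))
        ts = [("t1", 0.3, frozenset({0, 1})), ("t2", 0.4, frozenset({1})),
              ("t3", 0.2, frozenset({5}))]
        print(assign_sharing_tasks(ts, num_cores=2))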

    Study, analysis and new scheduling proposals in partitioned real-time systems

    In our everyday lives, more and more computers control our environment: mobile phones, industrial processes, driving assistance, etc. All these systems present strict requirements to ensure proper behaviour. In many of these systems, the time at which an action is delivered is as important as the logical result of the computation. Real-time systems began to attract attention in the computing field about 40 years ago and are nowadays applied in wide-ranging areas such as industrial applications, aerospace, telecommunications, consumer electronics, etc. Some real-time challenges that must be addressed are determinism and predictability of the temporal behaviour of the system. In this sense, guaranteeing program execution and system response times are essential requirements that must be strictly met through appropriate task scheduling strategies. Furthermore, multiprocessor architectures are becoming more popular because the processing capabilities and computational resources of systems are increasing. A recent study estimates that there is an increasing tendency among multiprocessor architectures to combine different levels of criticality in the same system; in this sense, providing isolation between applications is essential, and partitioned technology is able to meet this need. In addition, energy management is a relevant problem in real-time systems. Many real-time embedded systems, such as wearable devices or battery-powered mobile robots, seek techniques that reduce energy consumption and, as a consequence, increase the lifetime of their batteries. Clear operational, financial and environmental gains are also achieved by minimizing energy consumption. With all this in mind, this work addresses the schedulability problem and contributes to the study of new scheduling techniques in partitioned real-time systems. These techniques provide the minimum time needed to feasibly schedule task sets.
    Moreover, allocation techniques for multicore systems whose main objective is to reduce the energy consumption of the overall system are also proposed. Finally, some of the obtained results are discussed as conclusions and future work is introduced. Guasque Ortega, A. (2019). Study, analysis and new scheduling proposals in partitioned real-time systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/135279
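    As a hypothetical illustration of the kind of energy-aware allocation discussed above, the sketch below balances utilization across cores (worst-fit), picks each core's lowest feasible DVFS frequency under EDF, and estimates relative energy with a simple cubic power model; the power model, frequency levels, and feasibility test are assumptions for illustration, not the thesis's proposal.

    # Minimal sketch (assumed model): worst-fit allocation to balance load, then the
    # lowest normalized frequency per core that keeps EDF feasible (U <= f), with an
    # assumed dynamic-power model P proportional to f^3.

    def worst_fit_allocate(utilizations, num_cores):
        cores = [[] for _ in range(num_cores)]
        loads = [0.0] * num_cores
        for u in sorted(utilizations, reverse=True):
            i = loads.index(min(loads))            # least-loaded core
            if loads[i] + u > 1.0:
                raise RuntimeError("task set not schedulable at full speed")
            cores[i].append(u)
            loads[i] += u
        return cores, loads

    def pick_frequencies(loads, freqs=(0.25, 0.5, 0.75, 1.0)):
        """Lowest normalized frequency f such that EDF remains feasible: U <= f."""
        return [min(f for f in freqs if load <= f) for load in loads]

    def relative_energy(freqs_chosen):
        """Assumed dynamic-power model: P proportional to f^3, cores always on."""
        return sum(f ** 3 for f in freqs_chosen)

    if __name__ == "__main__":
        cores, loads = worst_fit_allocate([0.4, 0.3, 0.25, 0.2, 0.15], num_cores=2)
        f = pick_frequencies(loads)
        print(cores, loads, f, relative_energy(f))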

    Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems

    A mixed-criticality real-time system is a real-time system having multiple tasks classified according to their criticality. Research on mixed-criticality systems started in order to provide an effective and cost-efficient a priori verification process for safety-critical systems. The higher the criticality of a task within a system, the more the system should guarantee the required level of service for it. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Currently, mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones. The current research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks. This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor systems. The short-term strategy consists of a protocol named Lazy Bailout Protocol (LBP) to schedule mixed-criticality task sets on single-core architectures. Scheduling decisions are made about tasks that are active in the ready queue and have to be dispatched to the CPU. LBP minimises the service degradation for lower-criticality tasks by providing them with background execution during system idle time. Afterwards, I refined LBP with variants that aim to further increase the service level provided to lower-criticality tasks; however, this is achieved at the increased cost of either offline system analysis or runtime complexity. The second approach, named Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks are allocated to the active cores according to the available resources. ATMP makes it possible to optimise the overall system utility by tuning the system workload in case of a shortage of computing capacity at runtime. Unlike the majority of current mixed-criticality approaches, ATMP also allows higher-criticality tasks to be smoothly degraded in order to keep lower-criticality ones allocated.
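    The general idea of degrading lower-criticality work when a budget overrun is detected, while still letting it continue in the background during idle time, can be illustrated with the simplified dispatcher below. This is not the published LBP or ATMP; it is a hedged sketch of the basic mixed-criticality mode-switch behaviour, with all names, budgets, and the fixed dispatch order assumed for illustration.

    # Minimal sketch (assumed model, not the published LBP/ATMP): run in LO mode until
    # a high-criticality job exceeds its LO-mode budget, then switch to HI mode and
    # demote low-criticality jobs to background execution (they only run when no
    # high-criticality job is ready).

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        criticality: str      # "HI" or "LO"
        budget_lo: float      # execution budget assumed in LO mode
        demand: float         # actual execution demand (may exceed budget_lo)
        done: float = 0.0

    def dispatch(jobs, time_slice=0.5, horizon=100.0):
        mode, t = "LO", 0.0
        while t < horizon and any(j.done < j.demand for j in jobs):
            ready = [j for j in jobs if j.done < j.demand]
            if mode == "HI":
                hi = [j for j in ready if j.criticality == "HI"]
                job = hi[0] if hi else ready[0]   # LO jobs only run in idle time
            else:
                job = ready[0]
            job.done += time_slice
            t += time_slice
            # A HI job overrunning its LO-mode budget triggers the mode switch.
            if mode == "LO" and job.criticality == "HI" and job.done > job.budget_lo:
                mode = "HI"
        return mode, {j.name: round(j.done, 2) for j in jobs}

    if __name__ == "__main__":
        js = [Job("hi1", "HI", budget_lo=2.0, demand=4.0),
              Job("lo1", "LO", budget_lo=3.0, demand=3.0)]
        print(dispatch(js))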