
    On the Pitfalls of Resource Augmentation Factors and Utilization Bounds in Real-Time Scheduling

    In this paper, we take a careful look at speedup factors, utilization bounds, and capacity augmentation bounds. These three metrics have been widely adopted in real-time scheduling research as the de facto standard theoretical tools for assessing scheduling algorithms and schedulability tests. Despite that, it is not always clear how researchers and designers should interpret or use these metrics. In studying this area, we found a number of surprising results, and related to them, ways in which the metrics may be misinterpreted or misunderstood. In this paper, we provide a perspective on the use of these metrics, guiding researchers on their meaning and interpretation, and helping to avoid pitfalls in their use. Finally, we propose and demonstrate the use of parametric augmentation functions as a means of providing nuanced information that may be more relevant in practical settings
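
    As background for the three metrics discussed above (textbook definitions, not results from this paper): a utilization bound states that any task set whose total utilization stays below the bound is schedulable, the classical example being the Liu and Layland bound for rate-monotonic scheduling of implicit-deadline periodic tasks on one processor,

    \[ U = \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\,\bigl(2^{1/n} - 1\bigr) \;\longrightarrow\; \ln 2 \approx 0.693 \quad (n \to \infty), \]

    whereas EDF schedules any such task set with U <= 1. A speedup (resource augmentation) factor S >= 1 for an algorithm or test A means that any task set feasible on a unit-speed processor is schedulable by A, or accepted by the test, on a processor of speed S.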

    On the Optimality of RM and EDF for Non-Preemptive Real-Time Harmonic Tasks

    In this paper, we study non-preemptive uniprocessor real-time scheduling using the non-preemptive RM (npRM) and EDF (npEDF) scheduling algorithms. We discuss the limitations of existing studies, identifying pessimism in current schedulability analyses and inefficiencies in existing processor speedup results. Focusing on harmonic task sets, we show that even with restrictions placed on the execution times of the tasks, npRM and npEDF are not able to schedule all feasible task sets. We obtain necessary conditions for the feasibility of harmonic tasks with arbitrary integer period ratios, and then derive sufficient conditions for the schedulability of npRM and npEDF on harmonic task sets. Based on these conditions, we derive a superior speedup factor that guarantees schedulability in cases where fewer restrictions are placed on the execution times. Results from simulation experiments show an average speedup factor three times smaller than that of the only existing feasible method for obtaining speedup factors
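
    To illustrate why low utilization alone does not guarantee non-preemptive schedulability (a generic worked example, not one taken from the paper), consider the harmonic pair τ1 with C1 = 1, T1 = D1 = 4 and τ2 with C2 = 5, T2 = D2 = 20, whose total utilization is only 0.5. If a job of τ2 begins executing just before a job of τ1 is released, τ1 must wait out the full non-preemptible C2 = 5 > D1 - C1 = 3 and misses its deadline; with sporadic releases no work-conserving non-preemptive scheduler, npRM and npEDF included, can avoid this, even though the set is trivially schedulable preemptively. This is the kind of gap that the necessary feasibility conditions and sufficient schedulability conditions above aim to characterize.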

    Overhead-Aware Compositional Analysis of Real-Time Systems

    Over the past decade, interface-based compositional schedulability analysis has emerged as an effective method for guaranteeing real-time properties in complex systems. Several interfaces and interface computation methods have been developed, and they offer a range of tradeoffs between the complexity and the accuracy of the analysis. However, none of the existing methods consider platform overheads in the component interfaces. As a result, although the analysis results are sound in theory, the systems may violate their timing constraints when running on realistic platforms. This is due to various overheads, such as task release delays, interrupts, cache effects, and context switches. Simple solutions, such as increasing the interface budget or the tasks’ worst-case execution times by a fixed amount, are either unsafe (because of the overhead accumulation problem) or they waste a lot of resources. In this paper, we present an overhead-aware compositional analysis technique that can account for platform overheads in the representation and computation of component interfaces. Our technique extends previous overhead accounting methods, but it additionally addresses the new challenges that are specific to the compositional scheduling setting. To demonstrate that our technique is practical, we report results from an extensive evaluation on a realistic platform
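
    As a point of reference for the overhead problem described above (a textbook-style sketch, not this paper's technique), a common non-compositional way to account for overheads is to inflate each task's WCET with the costs its jobs can trigger, for example charging every job two context switches and one release overhead,

    \[ C_i' = C_i + 2\,\Delta_{\mathrm{cxs}} + \Delta_{\mathrm{rel}}, \]

    and then running the usual schedulability test with the inflated values. The point made above is that in a compositional setting such fixed inflation is either unsafe, because overheads accumulate across component interfaces, or grossly over-provisions the interface budget.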

    Mixed-Criticality Scheduling on Multiprocessors using Task Grouping

    Real-time systems are increasingly running a mix of tasks with different criticality levels: for instance, an unmanned aerial vehicle has multiple software functions with different safety criticality levels but runs them on a single, shared computational platform. In addition, these systems are increasingly deployed on multiprocessor platforms because this can help to reduce their cost, space, weight, and power consumption. To assure the safety of such systems, several mixed-criticality scheduling algorithms have been developed that can provide mixed-criticality timing guarantees. However, most existing algorithms have two important limitations: they do not guarantee strong isolation among the high-criticality tasks, and they offer poor real-time performance for the low-criticality tasks
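
    For context (standard background on mixed-criticality scheduling, not taken from this abstract), most such algorithms build on a dual-criticality task model along the following lines: each task has a criticality level in {LO, HI} and two WCET estimates C(LO) <= C(HI); the system starts in LO mode, in which every task must meet its deadline assuming LO-level WCETs, and switches to HI mode as soon as some job overruns its LO estimate, after which only the HI-criticality tasks are guaranteed. The two limitations noted above correspond, in this model, to HI tasks still interfering with one another after the mode switch and to LO tasks losing most or all service once it happens.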

    Real-Time Virtualization and Cloud Computing

    In recent years, we have observed three major trends in the development of complex real-time embedded systems. First, to reduce cost and enhance flexibility, multiple systems are sharing common computing platforms via virtualization technology, instead of being deployed separately on physically isolated hosts. Second, multi-core processors are increasingly being used in real-time systems. Third, developers are exploring the possibilities of deploying real-time applications as virtual machines in a public cloud. The integration of real-time systems as virtual machines (VMs) atop common multi-core platforms in a public cloud raises significant new research challenges in meeting the real-time latency requirements of applications. In order to address the challenges of running real-time VMs in the cloud, we first present RT-Xen, a novel real-time scheduling framework within the popular Xen hypervisor. We start with single-core scheduling in RT-Xen, and present the first work that empirically studies and compares different real-time scheduling schemes on the same platform. We then introduce RT-Xen 2.0, which focuses on multi-core scheduling and spans multiple design spaces, including priority schemes, server schemes, and scheduling policies. Experimental results demonstrate that when combined with compositional scheduling theory, RT-Xen can deliver real-time performance to an application running in a VM, while the default credit scheduler cannot. After that, we present RT-OpenStack, a cloud management system designed to support co-hosting real-time and non-real-time VMs in a cloud. RT-OpenStack studies the problem of running real-time VMs together with non-real-time VMs in a public cloud. Leveraging the resource interface and real-time scheduling provided by RT-Xen, RT-OpenStack provides real-time performance guarantees to real-time VMs, while achieving high resource utilization by allowing non-real-time VMs to share the remaining CPU resources through a novel VM-to-host mapping scheme. Finally, we present RTCA, a real-time communication architecture for VMs sharing the same host, which maintains low latency for high-priority inter-domain communication (IDC) traffic in the face of low-priority IDC traffic
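
    To make the server-based VCPU scheduling mentioned above concrete, the following is a minimal, hedged Python sketch of budget-and-period accounting in the style of a deferrable server; the names and structure are illustrative, not RT-Xen's actual code, which additionally explores periodic servers, different priority schemes, and global versus partitioned run queues.

from dataclasses import dataclass

@dataclass
class VCPU:
    budget: float      # CPU time allowed per period (ms)
    period: float      # replenishment period (ms)
    priority: int      # smaller value = higher priority
    remaining: float = 0.0
    next_replenish: float = 0.0

def pick_vcpu(vcpus, now):
    """Replenish budgets whose periods have elapsed, then return the
    highest-priority VCPU that still has budget left, or None."""
    for v in vcpus:
        if now >= v.next_replenish:
            v.remaining = v.budget            # deferrable: unused budget is not carried over
            v.next_replenish = now + v.period
    runnable = [v for v in vcpus if v.remaining > 0]
    return min(runnable, key=lambda v: v.priority) if runnable else None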

    The Design, Analysis, & Application Of Multi-Modal Real-Time Embedded Systems

    For many hand-held computing devices (e.g., smartphones), multiple operational modes are preferred because of their flexibility. In addition to their designated purposes, some of these devices provide a platform for different types of services, which include rendering of high-quality multimedia. Upon such devices, temporal isolation among co-executing applications is very important to ensure that each application receives an acceptable level of quality-of-service. In order to provide strong guarantees on services, multimedia applications and real-time control systems maintain timing constraints in the form of deadlines for recurring tasks. A flexible real-time multi-modal system will ideally provide system designers the option to change both resource-level modes and application-level modes. Existing schedulability analyses for a real-time multi-modal system (MMS) with software/hardware modes are computationally intractable. In addition, a fast schedulability analysis is desirable in a design-space exploration that determines the best parameters of a multi-modal system. The thesis of this dissertation is: The determination of resource parameters with guaranteed schedulability for real-time systems that may change computational requirements over time is expensive in terms of runtime. However, decoupling schedulability analysis from determining the minimum processing resource parameters of a real-time multi-modal system results in pseudo-polynomial complexity for the combined goals of determining MMS schedulability and optimal resource parameters. Effective schedulability analysis and optimized resource usage are essential for an MMS that may co-execute with other applications to reduce the size and cost of an embedded system. Traditional real-time systems research has addressed the issues of schedulability under mode-changes and temporal isolation separately and independently. For instance, schedulability analysis of real-time multi-mode systems has commonly assumed that the system is executing upon a dedicated platform. On the other hand, research on temporal isolation in real-time scheduling has often assumed that the application and resource requirements of each subsystem are fixed during runtime. Only recently have researchers started to address the problem of guaranteeing hard deadlines of temporally-isolated subsystems for multi-modal systems. However, most of this research suffers from two fundamental drawbacks: 1) full support for resource- and application-level mode-changes does not exist, and/or 2) determining schedulability for such systems has exponential-time complexity. As a result, current literature cannot guarantee optimal resource usage for multi-modal systems. In this dissertation, we address these two fundamental drawbacks by providing a theoretical framework and an associated tractable schedulability analysis for hard-real-time multi-modal subsystems. Then, by leveraging the schedulability analysis, we address the problem of optimizing a multi-modal system with respect to resource usage. To accelerate the schedulability analysis, we develop a parallel algorithm using the message passing interface (MPI) to check the invariants of a schedulable real-time MMS. This parallel algorithm significantly improves the execution time for checking schedulability (e.g., our parallel algorithm requires only approximately 45 minutes to analyze a 16-mode system on 8 cores, whereas the analysis takes 9 hours when executed on a single core). 
However, even this reduction is still expensive for techniques such as design-space exploration (DSE) that repeatedly apply schedulability analysis to determine the optimal system resource parameters. Today's massively parallel GPU platforms can be a cost-effective alternative for scaling the number of compute nodes and further reducing the computation time. An efficient GPU-based schedulability analysis can also be used online to reconfigure the system by re-evaluating schedulability if parameters change dynamically. In this dissertation, we also extend our parallel schedulability analysis algorithm to GPUs. Finally, we perform a case study of a radar-assisted cruise control system to show the usability of a multi-modal system consisting of fixed-priority non-preemptive tasks
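
    The MPI-based parallelization described above amounts to distributing independent invariant checks (one per mode or mode-change scenario) across ranks and combining the verdicts. Below is a minimal, hedged sketch using mpi4py; check_scenario is a placeholder for the dissertation's actual invariant check, which is not reproduced here.

from mpi4py import MPI

def check_scenario(scenario):
    # Placeholder: evaluate one schedulability invariant for one
    # mode or mode-change scenario; return True if it holds.
    return True

def schedulable(scenarios):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    # Each rank checks a round-robin slice of the scenarios.
    local_ok = all(check_scenario(s) for s in scenarios[rank::size])
    # The system is schedulable only if every rank's slice passed.
    return all(comm.allgather(local_ok))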

    The multiprocessor real-time scheduling of general task systems

    The recent emergence of multicore and related technologies in many commercial systems has increased the prevalence of multiprocessor architectures. Contemporaneously, real-time applications have become more complex and sophisticated in their behavior and interaction. Inevitably, these complex real-time applications will be deployed upon these multiprocessor platforms and require temporal analysis techniques to verify their correctness. However, most prior research in multiprocessor real-time scheduling has addressed the temporal analysis only of Liu and Layland task systems. The goal of this dissertation is to extend real-time scheduling theory for multiprocessor systems by developing temporal analysis techniques for more general task models such as the sporadic task model, the generalized multiframe task model, and the recurring real-time task model. The thesis of this dissertation is: Optimal online multiprocessor real-time scheduling algorithms for sporadic and more general task systems are impossible; however, efficient, online scheduling algorithms and associated feasibility and schedulability tests, with provably bounded deviation from any optimal test, exist. To support our thesis, this dissertation develops feasibility and schedulability tests for various multiprocessor scheduling paradigms. We consider three classes of multiprocessor scheduling based on whether a real-time job may migrate between processors: full-migration, restricted-migration, and partitioned. For all general task systems, we obtain feasibility tests for arbitrary real-time instances under the full- and restricted-migration paradigms. Despite the existence of tests for feasibility, we show that optimal online scheduling of sporadic and more general systems is impossible. Therefore, we focus on scheduling algorithms that have constant-factor approximation ratios in terms of an analysis technique known as resource augmentation. We develop schedulability tests for the earliest-deadline-first (EDF) and deadline-monotonic (DM) scheduling algorithms under the full-migration and partitioned scheduling paradigms. The feasibility and schedulability tests presented in this dissertation use the workload metrics of demand-based load and maximum job density and have provably bounded deviation from optimal in terms of resource augmentation. We show that the demand-based load and maximum job density metrics may be exactly computed in pseudo-polynomial time for general task systems and approximated in polynomial time for sporadic task systems
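
    For reference, the workload metrics named above have standard definitions in the sporadic task model (stated here as general background, not quoted from the dissertation): the demand bound function of a task and the demand-based load and maximum job density of a task system are

    \[ \mathrm{dbf}(\tau_i, t) = \max\!\left(0,\; \left\lfloor \frac{t - D_i}{T_i} \right\rfloor + 1 \right) C_i, \qquad \mathrm{load}(\tau) = \max_{t>0} \frac{\sum_i \mathrm{dbf}(\tau_i, t)}{t}, \qquad \delta_{\max}(\tau) = \max_i \frac{C_i}{\min(D_i, T_i)}, \]

    where C_i, D_i, and T_i denote the worst-case execution time, relative deadline, and minimum inter-arrival time of task τ_i.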

    Real-time operating system support for multicore applications

    Thesis (Doctorate) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2014.
    Modern multicore platforms feature multiple levels of cache memory placed between the processor and main memory to hide the latency of ordinary memory systems. The primary goal of this cache hierarchy is to improve average execution time (at the cost of predictability). The uncontrolled use of the cache hierarchy by real-time tasks may impact the estimation of their worst-case execution times (WCETs), especially when real-time tasks access a shared cache level, causing contention for shared cache lines and increasing the application execution time. This contention in the shared cache may lead to deadline misses, which is intolerable, particularly for hard real-time (HRT) systems. Shared cache partitioning is a well-known technique used in multicore real-time systems to isolate task workloads and to improve system predictability. Presently, the state-of-the-art studies that evaluate shared cache partitioning on multicore processors fall short on two key issues. First, the cache partitioning mechanism is typically implemented either in a simulated environment or in a general-purpose OS (GPOS), and so the impact of kernel activities, such as interrupt handling and context switching, on the task partitions tends to be overlooked. Second, the evaluation is typically restricted to either a global or a partitioned scheduler, thereby failing to compare the performance of cache partitioning when tasks are scheduled by different schedulers. Furthermore, recent works have confirmed that OS implementation aspects, such as the choice of scheduling data structures and interrupt handling mechanisms, impact real-time schedulability as much as scheduling-theoretic aspects. However, these studies also used real-time patches applied to GPOSes, which affects the run-time overhead observed in these works and consequently the schedulability of real-time tasks.
Additionally, current multicore scheduling algorithms do not consider scenarios where real-time tasks access the same cache lines due to true or false sharing, which also impacts the WCET. This thesis addresses the aforementioned problems with cache partitioning techniques and multicore real-time scheduling algorithms as follows. First, real-time multicore support is designed and implemented on top of an embedded operating system designed from scratch. This support consists of several multicore real-time scheduling algorithms, such as global and partitioned EDF, and a cache partitioning mechanism based on page coloring. Second, a comparison in terms of schedulability ratio is presented, considering the run-time overheads of the implemented RTOS and of a GPOS patched with real-time extensions. In some cases, Global-EDF considering the overhead of the RTOS is superior to Partitioned-EDF considering the overhead of the patched GPOS, which clearly shows how different OSs impact hard real-time schedulers. Third, an evaluation of the cache partitioning impact on partitioned, clustered, and global real-time schedulers is performed. The results indicate that a lightweight RTOS does not compromise real-time guarantees, and that shared cache partitioning has different behavior depending on the scheduler and the task's working set size. Fourth, a task partitioning algorithm that assigns tasks to cores respecting their usage of cache partitions is proposed. The results show that by simply assigning tasks that share cache partitions to the same processor, it is possible to reduce contention for shared cache lines and to provide HRT guarantees. Finally, a two-phase multicore scheduler that provides HRT and soft real-time (SRT) guarantees is proposed. It is shown that by using information from hardware performance counters at run-time, the RTOS can detect when best-effort tasks interfere with real-time tasks in the shared cache. Then, the RTOS can prevent best-effort tasks from interfering with real-time tasks. The results also show that the assignment of exclusive partitions to HRT tasks, together with the two-phase multicore scheduler, provides HRT and SRT guarantees, even when best-effort tasks share partitions with real-time tasks
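
    Page coloring, the cache-partitioning mechanism used above, assigns physical pages to cache partitions according to the cache-set bits of their physical addresses, so that tasks given disjoint sets of colors cannot evict each other's lines in a physically indexed shared cache. A minimal, hedged Python sketch of the arithmetic follows; the parameters are illustrative, not those of the thesis's platform.

def page_color(phys_addr, cache_size, associativity, page_size=4096):
    """Return the cache color of the physical page containing phys_addr.

    Pages with the same color map to the same group of cache sets, so an
    OS allocator can hand out disjoint colors to different tasks or cores."""
    num_colors = cache_size // (associativity * page_size)
    return (phys_addr // page_size) % num_colors

# Example: a 2 MiB, 16-way set-associative shared L2 with 4 KiB pages
# yields 2*1024*1024 // (16*4096) = 32 colors.
print(page_color(0x12345000, cache_size=2 * 1024 * 1024, associativity=16))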

    Strategic and operational services for workload management in the cloud

    In hosting environments such as Infrastructure as a Service (IaaS) clouds, desirable application performance is typically guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated by a service provider for unencumbered use by customers to ensure proper operation of their workloads. Most IaaS offerings are presented to customers as fixed-size, fixed-price SLAs that do not match the needs of specific applications well. Furthermore, arbitrary colocation of applications with different SLAs may result in inefficient utilization of hosts' resources, resulting in economically undesirable customer behavior. In this thesis, we propose the design and architecture of a Colocation as a Service (CaaS) framework: a set of strategic and operational services that allow the efficient colocation of customer workloads. CaaS strategic services provide customers the means to specify their application workload using an SLA language that provides them the opportunity and incentive to take advantage of any tolerances they may have regarding the scheduling of their workloads. CaaS operational services provide the information necessary for, and carry out the reconfigurations mandated by, strategic services. We recognize that there may be multiple, yet functionally equivalent, ways to express an SLA. Towards that end, we present a service that allows the provably-safe transformation of SLAs from one form to another for the purpose of achieving more efficient colocation. Our CaaS framework could be incorporated into an IaaS offering by providers, or it could be implemented as a value-added proposition by IaaS resellers. To establish the practicality of such offerings, we present a prototype implementation of our proposed CaaS framework
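
    As a hedged illustration of what a provably-safe SLA transformation can look like (our example, not one taken from the thesis), write an SLA as a pair (C, T) meaning "at least C units of CPU time in every aligned window of length T". Refining (4 ms, 20 ms) into (1 ms, 5 ms) is safe: a supply meeting the finer SLA delivers at least 1 ms in each of the four 5 ms sub-windows of every 20 ms window, hence at least 4 ms per 20 ms, so every workload satisfied before remains satisfied. The reverse coarsening of (1 ms, 5 ms) into (4 ms, 20 ms) is not safe, since the 4 ms could now arrive as one burst at the end of the window and starve a task that needed service every 5 ms. Transformations with this kind of one-directional safety are what allow customer SLAs to be reshaped into forms that colocate more efficiently.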

    WCET and Priority Assignment Analysis of Real-Time Systems using Search and Machine Learning

    Real-time systems have become indispensable for human life as they are used in numerous domains, such as vehicles, medical devices, and satellite systems. These systems are very sensitive to violations of their time constraints (deadlines), which can have catastrophic consequences. To verify whether systems meet their time constraints, engineers perform schedulability analysis from early stages and throughout development. However, there are challenges in obtaining precise results from schedulability analysis due to the need to estimate worst-case execution times (WCETs) and to assign optimal priorities to tasks. Estimating WCET is an important activity at early design stages of real-time systems. Based on such WCET estimates, engineers make design and implementation decisions to ensure that task executions always complete before their specified deadlines. However, in practice, engineers often cannot provide precise point WCET estimates and prefer to provide plausible WCET ranges. Task priority assignment is an important decision, as it determines the order of task executions and has a substantial impact on schedulability results. It thus requires finding optimal priority assignments so that tasks not only complete their execution but also maximize their safety margins from deadlines. Optimal priority values increase the tolerance of real-time systems to unexpected overheads in task executions so that they can still meet their deadlines. However, finding optimal priority assignments is hard because their evaluation relies on uncertain WCET values and must account for complex engineering constraints. This dissertation proposes three approaches to estimate WCETs and assign optimal priorities at design stages. Combining a genetic algorithm and logistic regression, we first suggest an automatic approach to infer safe WCET ranges with a probabilistic guarantee based on the worst-case scheduling scenarios. We then introduce an extended approach to account for weakly hard real-time systems with an industrial schedule simulator. We evaluate our approaches by applying them to industrial systems from different domains and to several synthetic systems. The results suggest that they can estimate probabilistically safe WCET ranges efficiently and accurately, so that the deadline constraints are likely to be satisfied with a high degree of confidence. Moreover, we propose an automated technique that aims to identify the best possible priority assignments in real-time systems. The approach deals with multiple objectives regarding safety margins and engineering constraints using a coevolutionary algorithm. Evaluation with synthetic and industrial systems shows that the approach significantly outperforms both a baseline approach and solutions defined by practitioners. All the solutions in this dissertation scale to complex industrial systems for offline analysis within an acceptable time, i.e., at most 27 hours
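
    The combination of search and machine learning described above can be pictured as follows: sample candidate WCET vectors within the engineer-provided ranges, label each sample by whether a schedule simulation reports a deadline miss, and fit a classifier whose decision boundary approximates the border between safe and unsafe WCETs; the genetic algorithm then searches for the widest ranges that stay on the safe side. Below is a minimal, hedged Python sketch of the regression step only; simulate_deadline_miss stands in for the industrial schedule simulator and is not a real API.

import numpy as np
from sklearn.linear_model import LogisticRegression

def simulate_deadline_miss(wcets):
    # Placeholder for a schedule simulation under the given WCET vector;
    # returns 1 if some task misses its deadline, else 0.
    return int(wcets.sum() > 9.0)   # toy stand-in, not a real analysis

rng = np.random.default_rng(0)
low = np.array([1.0, 2.0, 3.0])    # lower ends of the per-task WCET ranges
high = np.array([2.0, 4.0, 5.0])   # upper ends of the per-task WCET ranges
X = rng.uniform(low, high, size=(500, 3))             # sampled WCET vectors
y = np.array([simulate_deadline_miss(w) for w in X])  # 1 = deadline miss

model = LogisticRegression().fit(X, y)
# WCET vectors whose predicted miss probability is below a small threshold
# are treated as probabilistically safe.
safe = X[model.predict_proba(X)[:, 1] < 0.05]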