
    Dynamic Interference-Sensitive Run-time Adaptation of Time-Triggered Schedules

    Over-approximated Worst-Case Execution Time (WCET) estimations for multi-cores lead to safe, but over-provisioned, systems and underutilized cores. To reduce WCET pessimism, interference-sensitive WCET (isWCET) estimations are used. Although they provide tighter WCET bounds, they are valid only for a specific schedule solution. To remain safe, existing approaches have to maintain this isWCET schedule solution at run-time via time-triggered execution; hence, tasks cannot be executed earlier than scheduled, even when adapting the isWCET schedule would allow it. In this paper, we present a dynamic approach that safely adapts isWCET schedules during execution by relaxing or completely removing isWCET schedule dependencies, depending on the progress of each core. In this way, earlier task execution is enabled, creating time slack that safety-critical and mixed-criticality systems can use to provide a higher Quality-of-Service or to execute other best-effort applications. The Response-Time Analysis (RTA) of the proposed approach is presented, showing that although the approach is dynamic, it is fully predictable with a bounded WCET. To support our contribution, we evaluate the behavior and the scalability of the proposed approach for different application types and execution configurations on the 8-core Texas Instruments TMS320C6678 platform, obtaining significant performance improvements compared to static approaches.
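
    To illustrate the kind of run-time dependency relaxation the abstract describes, the sketch below models a task whose time-triggered release can be moved earlier once every task whose interference its isWCET assumed has completed. It is only an illustrative Python model under assumed names (Task, earliest_start); it is not the paper's algorithm or its RTA.

    # Illustrative sketch (not the paper's algorithm): a task may start earlier
    # than its time-triggered release once every task it shares an isWCET
    # interference dependency with has finished on its core.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Task:
        name: str
        core: int
        release: float                      # static time-triggered release (isWCET schedule)
        iswcet: float                       # interference-sensitive WCET
        depends_on: List["Task"] = field(default_factory=list)
        finished_at: Optional[float] = None

    def earliest_start(task: Task, now: float) -> float:
        """Relax the static release if all interfering tasks have already finished."""
        if all(d.finished_at is not None and d.finished_at <= now for d in task.depends_on):
            return now                      # dependency removed: start immediately
        return max(now, task.release)       # otherwise keep the time-triggered release

    # Example: t2 is statically released at 10 to avoid interference from t1,
    # but t1 finished early, so t2 may start at time 4, creating slack.
    t1 = Task("t1", core=0, release=0.0, iswcet=10.0, finished_at=3.5)
    t2 = Task("t2", core=1, release=10.0, iswcet=6.0, depends_on=[t1])
    print(earliest_start(t2, now=4.0))      # -> 4.0 instead of 10.0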

    A time-predictable many-core processor design for critical real-time embedded systems

    Critical Real-Time Embedded Systems (CRTES) are in charge of controlling fundamental parts of embedded systems, e.g., energy-harvesting solar panels in satellites, steering and braking in cars, or flight management systems in airplanes. To do so, CRTES require strong evidence of correct functional and timing behavior. The former guarantees that the system operates correctly in response to its inputs; the latter ensures that its operations are performed within a predefined time budget. The number and complexity of functions in CRTES keep increasing. Examples include the incorporation of "smarter" Advanced Driver Assistance System (ADAS) functionality in modern cars or advanced collision avoidance systems in Unmanned Aerial Vehicles (UAVs). All these new features, implemented in software, lead to an exponential growth in both performance requirements and software development complexity. Furthermore, there is a strong need to integrate multiple functions into the same computing platform to reduce the number of processing units, mass and space requirements, etc. Overall, there is a clear need to increase the computing power of current CRTES in order to support new sophisticated and complex functionality, and to integrate multiple systems into a single platform. The use of multi- and many-core processor architectures is increasingly seen in the CRTES industry as the solution to cope with the performance demand and cost constraints of future CRTES. Many-cores supply higher performance by exploiting the parallelism of applications while providing better performance per watt, as their cores are kept simpler than those of complex single-core processors. Moreover, their parallelization capabilities allow scheduling multiple functions onto the same processor, maximizing hardware utilization. However, the use of multi- and many-cores in CRTES also brings a number of challenges related to providing evidence about the correct operation of the system, especially in the timing domain. Hence, despite the advantages of many-cores and the fact that they are nowadays a reality in the embedded domain (e.g. Kalray MPPA, Freescale NXP P4080, TI Keystone II), their use in CRTES still requires finding efficient ways of providing reliable evidence about the correct operation of the system. This thesis investigates the use of many-core processors in CRTES as a means to satisfy the performance demands of future complex applications while providing the necessary timing guarantees. To do so, this thesis advances the state of the art towards the exploitation of the parallel capabilities of many-cores in CRTES, contributing in two different domains. In the hardware domain, this thesis proposes new many-core designs that enable deriving reliable and tight timing guarantees. In the software domain, we present efficient scheduling and timing analysis techniques to exploit the parallelization capabilities of many-core architectures and to derive tight and trustworthy Worst-Case Execution Time (WCET) estimates for CRTES.

    Bundle: Taming The Cache And Improving Schedulability Of Multi-Threaded Hard Real-Time Systems

    For hard real-time systems, schedulability of a task set is paramount. If a task set is not deemed schedulable under all conditions, the system may fail during operation and cannot be deployed in a high-risk environment. Schedulability testing has typically been separated from worst-case execution time (WCET) analysis. Each task's WCET value is calculated independently and provided as input to a schedulability test. However, a task's WCET value is influenced by scheduling decisions and the impact of cache memory. Thus, schedulability tests have been augmented to include cache-related preemption delay (CRPD). From this classical perspective, the effect of cache memory on WCET and schedulability is always negative: it increases execution times and demand. In this work we propose a new positive perspective, in which cache memory benefits multi-threaded tasks when their threads are scheduled in a manner that shares cached values predictably. This positive perspective is reached by integrating, rather than separating, the disciplines of schedulability analysis and worst-case execution time analysis. These integrated techniques are referred to as the BUNDLE family of worst-case execution time and cache overhead (WCETO) analysis and scheduling algorithms. The WCETO calculation divides the task's structure into conflict-free regions and computes a bound using explicit knowledge of the thread-level scheduling algorithm. Conflict-free regions are utilized by the scheduling algorithm, which associates with each region a thread container called a bundle. At any time only one bundle may be active, and only threads of the active bundle may execute on the processor. The BUNDLE family of scheduling algorithms developed in this work increases in scope from BUNDLE through ITCB-DAG. As the fundamental contribution, BUNDLE and BUNDLEP apply to a single multi-threaded task running on a uniprocessor architecture with a single-level direct-mapped instruction cache. NPM-BUNDLE expands the positive perspective to multiple tasks on a uniprocessor system, and ITCB-DAG brings BUNDLE's analysis and scheduling techniques to multi-processor systems. Each of the scheduling algorithms requires a novel hardware mechanism to anticipate execution and make scheduling decisions. To support anticipation of execution, a novel XFLICT interrupt is proposed: a simple mechanism that emulates the behavior of hardware breakpoints. An implementation of the BUNDLEP analytical techniques, scheduling algorithm, and XFLICT interrupt is available as a simulated platform for further research and extension. Future work is planned to expand BUNDLE's positive perspective and increase adoption. The most significant barrier to adoption is the ability to deploy BUNDLE's scheduling algorithm, which mandates a viable and available hardware or software mechanism to anticipate execution. NPM-BUNDLE is limited to non-preemptive multi-task scheduling and analysis; support for preemptive scheduling would increase the positive impact of BUNDLE's integrated perspective.
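
    The bundle idea sketched in the abstract, grouping threads by the conflict-free region they are about to execute and letting only one bundle run at a time so that region's cache contents are loaded once and reused, can be pictured as below. This is a simplified, hypothetical model (regions and bundle_schedule are illustrative names), not the BUNDLE/BUNDLEP analysis, its WCETO bound, or the XFLICT mechanism.

    # Minimal sketch of bundle-style serialization (illustrative only): threads of
    # one task are grouped by the conflict-free region they will execute next, and
    # only the threads of the active bundle run before the next bundle is chosen.
    from collections import defaultdict

    regions = ["R0", "R1", "R2"]              # conflict-free regions of the task

    def bundle_schedule(thread_positions):
        """thread_positions: {thread_id: index of next region to execute}."""
        trace = []
        while any(pos < len(regions) for pos in thread_positions.values()):
            # group threads into bundles keyed by their next region
            bundles = defaultdict(list)
            for tid, pos in thread_positions.items():
                if pos < len(regions):
                    bundles[pos].append(tid)
            active = min(bundles)              # activate one bundle (earliest region here)
            for tid in bundles[active]:        # every thread of the active bundle runs the region
                trace.append((regions[active], tid))
                thread_positions[tid] += 1
        return trace

    print(bundle_schedule({"T1": 0, "T2": 0, "T3": 1}))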

    Memory Efficient Scheduling for Multicore Real-time Systems

    Modern real-time systems are becoming increasingly complex and require significant computational power to meet their demands. Since the increase in uniprocessor speed has slowed down in the last decade, multicore processors are now the preferred way to supply the increased performance demand of real-time systems. A significant amount of work in the real-time community has focused on scheduling solutions for multicore processors, for both sequential and parallel real-time tasks. Even though such solutions are able to provide strict timing guarantees on the overall response time of real-time tasks, they rely on the assumption that the worst-case execution time (WCET) of each individual task is known. However, physical shared resources such as main memory and I/O are heavily employed in multicore processors. These resources are limited and therefore subject to contention. In fact, the execution time of a task when run in parallel with other tasks can be significantly larger than the execution time of the same task when run in isolation. In addition, the presence of shared resources increases timing unpredictability due to the conflicts generated by multiple cores. As a result, the adoption of multicore processors for real-time systems depends on resolving such sources of unpredictability. In this dissertation, we investigate memory bus contention. In particular, two main problems are associated with memory contention: (1) unpredictable behavior and (2) hindrance of performance. We show how to mitigate these two problems through scheduling. Scheduling is an attractive tool that can be easily integrated into the system without the need for hardware modifications. We adopt an execution model that exposes memory as a resource to the scheduling algorithm. Thus, the theory of real-time multiprocessor scheduling, which has seen significant advances in recent years, can be utilized to schedule both processor cores and memory. Since the real-time workload on multicore processors can be modeled as sequential or parallel tasks, we also study parallel task scheduling by taking memory access time into account.
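
    As one way to picture "exposing memory as a resource to the scheduling algorithm", the sketch below splits each job into a memory phase and a computation phase and lets at most one core hold the memory bus at a time. It is a minimal, assumed model (co_schedule is a hypothetical helper), not the dissertation's execution model or its analysis.

    # Minimal sketch of co-scheduling cores and the memory bus (illustrative only):
    # each job has a memory phase followed by a computation phase; the bus is
    # treated as one more schedulable resource, so memory phases never overlap.
    def co_schedule(jobs, n_cores=2):
        """jobs: list of (name, mem_time, comp_time). Returns finish times."""
        finish = {}
        core_free = [0.0] * n_cores                   # when each core becomes idle
        mem_free = 0.0                                # when the memory bus becomes idle
        for name, mem, comp in jobs:
            c = core_free.index(min(core_free))       # earliest-available core
            start_mem = max(core_free[c], mem_free)   # wait for both core and bus
            mem_free = start_mem + mem                # bus is held for the memory phase
            finish[name] = mem_free + comp            # compute phase needs only the core
            core_free[c] = finish[name]
        return finish

    print(co_schedule([("J1", 2, 5), ("J2", 3, 4), ("J3", 1, 6)]))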

    Mixed Criticality Systems with Weakly-Hard Constraints

    Mixed criticality systems contain components of at least two criticality levels which execute on a common hardware platform in order to utilise resources more efficiently. Due to multiple worst-case execution time estimates, current adaptive mixed criticality scheduling policies assume the notion of a low criticality mode, whereby a taskset executes under a set of more realistic temporal assumptions, and a high criticality mode, in which all low criticality tasks in the taskset are descheduled to ensure that high criticality tasks can meet more conservative timing constraints derived from certification-approved methods. This issue is known as the service abrupt problem and comprises the topic of this work. The principles of real-time schedulability analysis are first reviewed, providing relevant background and theory on which mixed criticality systems analysis is based. The current state of the art of mixed criticality scheduling policies on uniprocessor systems is then discussed, along with the major challenges facing the adoption of such approaches in practice. To address the service abrupt issue, this work presents a new policy, Adaptive Mixed Criticality - Weakly Hard, which provides a guaranteed minimum quality of service for low criticality tasks in the event of a criticality mode change. Two offline response-time-based schedulability tests are derived for this model, and a dominance relationship between them is proved. Empirical evaluations are then used to assess the relative performance against previously published policies and their schedulability tests, where the new policy is shown to offer a scalable performance trade-off between existing fixed-priority preemptive and adaptive mixed criticality policies. The work concludes with possible directions for future research.
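
    A "guaranteed minimum quality of service for low criticality tasks" is typically expressed as a weakly-hard (m, k) constraint: in any window of k consecutive jobs, at least m must meet their deadlines. The monitor below is a hedged illustration of such a constraint check only; it is not the Adaptive Mixed Criticality - Weakly Hard policy or its response-time tests.

    # Illustrative (m, k) weakly-hard monitor (an assumption about how such a
    # guarantee can be checked online): a job of a low criticality task may only
    # be skipped after a mode change if m hits remain in the trailing k-job window.
    from collections import deque

    class WeaklyHardMonitor:
        def __init__(self, m: int, k: int):
            self.m, self.k = m, k
            self.window = deque(maxlen=k)      # 1 = deadline met, 0 = job skipped/missed

        def may_skip(self) -> bool:
            """True if skipping the next job cannot violate the (m, k) constraint."""
            window = list(self.window) + [0]           # the next job would contribute 0
            return sum(window[-self.k:]) >= self.m     # only the last k jobs matter

        def record(self, met_deadline: bool):
            self.window.append(1 if met_deadline else 0)

    mon = WeaklyHardMonitor(m=2, k=4)
    for met in [True, True, False]:
        mon.record(met)
    print(mon.may_skip())   # True: even after skipping, 2 of the last 4 jobs meet deadlines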

    Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems

    This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task- and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
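
    To make the idea of "cache partitioning based on task-specific WCET sensitivity" concrete, the sketch below greedily hands each additional cache partition to the task whose WCET would shrink the most from receiving it. The WCET curves and the greedy rule are assumptions for illustration only; TCPS itself may allocate partitions differently.

    # Illustrative greedy, sensitivity-driven cache partitioning (hypothetical;
    # not the TCPS algorithm): give each extra partition to the task that gains most.
    def partition_cache(wcet_curves, total_partitions):
        """wcet_curves: {task: [WCET with 0, 1, 2, ... partitions]}."""
        alloc = {t: 0 for t in wcet_curves}
        for _ in range(total_partitions):
            # WCET reduction each task would gain from one more partition
            gains = {
                t: wcet_curves[t][alloc[t]] - wcet_curves[t][alloc[t] + 1]
                for t in wcet_curves
                if alloc[t] + 1 < len(wcet_curves[t])
            }
            if not gains:
                break
            best = max(gains, key=gains.get)
            alloc[best] += 1
        return alloc

    # Hypothetical WCET curves (ms) as a function of assigned cache partitions.
    curves = {"A": [10, 7, 6, 5.8], "B": [12, 11.5, 11.4], "C": [8, 5, 4.5]}
    print(partition_cache(curves, total_partitions=4))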

    Increasing the reliability and applicability of measurement-based probabilistic timing analysis

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2019. As the complexity of computer architectures grows in order to improve performance and/or reduce costs, the use of modern processors in the design of Real-Time Systems (RTSs) is increasingly hampered by the emergence of timing effects that challenge determining reliable and tight bounds for tasks' Worst-Case Execution Times (WCETs). The Measurement-Based Probabilistic Timing Analysis (MBPTA) technique aims at determining probabilistic WCET bounds (i.e. pWCETs) by applying Extreme Value Theory (EVT) to tasks' execution time measurements, and is hence promising in handling hardware complexity issues within RTSs' design. Hardware-level time-randomized processors were recently proposed as a means to make computing systems' timing behaviour more easily analysable through probabilistic tools, and are designed by replacing deterministic or speculative internal information with (pseudo-)random numbers. The research whose outcomes are presented in this thesis produced contributions on two distinct fronts.
    First, we proposed and applied methods for evaluating the reliability of pWCET estimates produced using MBPTA, based on collecting large execution-time samples and then comparing (1) the pWCETs against the largest observed execution times, and (2) the pWCETs' exceedance densities against their expected values. These evaluations led us to conclude that EVT probabilistic models intended to yield more precise bounds may often lead to pWCET underestimations, and we hence recommended that upper-bounding models be used instead for deriving pWCETs with increased reliability. Second, we evaluated the hypothesis that randomized scheduling techniques can benefit the timing analysis of tasks executed on multithreaded pipelines through MBPTA, by causing the yielded execution times to meet the technique's basic application requirements. For that, we considered both (A) a scheduler that employs a purely random policy, and (B) a randomized scheduler capable of limiting the timing effects of inter-thread interference, without compromising analysability, by using a credit-based eligibility regulation mechanism.
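
    A minimal sketch of the general MBPTA workflow the abstract refers to, under the assumption of a block-maxima fit with a Generalized Extreme Value distribution (SciPy's genextreme): fit the distribution to execution-time maxima and read the pWCET at a target exceedance probability. The synthetic data, block size, and the 1e-9 target are illustrative assumptions; this is not the thesis' evaluation procedure or its recommended model.

    # Illustrative MBPTA-style pWCET estimate: collect execution-time measurements,
    # take block maxima, fit a GEV distribution, and invert its survival function.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    measurements = rng.gamma(shape=9.0, scale=10.0, size=20_000)   # synthetic times (us)

    block = 50
    maxima = measurements[: len(measurements) // block * block].reshape(-1, block).max(axis=1)

    shape, loc, scale = genextreme.fit(maxima)         # fit GEV to block maxima
    pwcet = genextreme.isf(1e-9, shape, loc, scale)    # value exceeded with prob 1e-9 (per block)

    print(f"max observed: {measurements.max():.1f} us, pWCET(1e-9): {pwcet:.1f} us")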