869 research outputs found

    Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems

    This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task- and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
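
    The cache-aware partitioning idea can be illustrated with a small sketch: a greedy loop hands out cache ways to whichever task's WCET shrinks the most, followed by first-fit EDF partitioning onto cores. This is only an illustration of the general principle, not the TCPS algorithm itself; the task set, the wcet(w) sensitivity curves, and the first-fit choice are hypothetical.

```python
# Illustrative sketch (not the TCPS algorithm): greedy cache-way allocation
# driven by per-task WCET sensitivity, then first-fit partitioning onto cores.
# Task parameters and the wcet() curves are hypothetical placeholders.

def allocate_cache_ways(tasks, total_ways):
    """Repeatedly give one more cache way to the task whose WCET drops the
    most from it, until all ways are handed out."""
    ways = {t["name"]: 1 for t in tasks}            # every task gets one way
    remaining = total_ways - len(tasks)
    while remaining > 0:
        best = max(tasks, key=lambda t: t["wcet"](ways[t["name"]]) -
                                        t["wcet"](ways[t["name"]] + 1))
        ways[best["name"]] += 1
        remaining -= 1
    return ways

def partition_first_fit(tasks, ways, num_cores):
    """First-fit decreasing partitioning under the EDF utilization bound."""
    cores = [0.0] * num_cores                       # accumulated utilization
    placement = {}
    for t in sorted(tasks, key=lambda t: t["wcet"](ways[t["name"]]) / t["period"],
                    reverse=True):
        u = t["wcet"](ways[t["name"]]) / t["period"]
        for c in range(num_cores):
            if cores[c] + u <= 1.0:                 # EDF: total utilization <= 1
                cores[c] += u
                placement[t["name"]] = c
                break
        else:
            return None                             # unschedulable with this allocation
    return placement

# Example: WCET shrinks with more cache ways (diminishing returns).
tasks = [
    {"name": "t1", "period": 100, "wcet": lambda w: 40 / (1 + 0.3 * w)},
    {"name": "t2", "period": 50,  "wcet": lambda w: 20 / (1 + 0.1 * w)},
]
ways = allocate_cache_ways(tasks, total_ways=8)
print(partition_first_fit(tasks, ways, num_cores=2))
```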

    Towards a centralized multicore automotive system

    Today’s automotive systems are inundated with embedded electronics to host chassis, powertrain, infotainment, advanced driver assistance systems, and other modern vehicle functions. As many as 100 embedded microcontrollers execute hundreds of millions of lines of code in a single vehicle. To control the increasing complexity in vehicle electronics and services, automakers are planning to consolidate different on-board automotive functions as software tasks on centralized multicore hardware platforms. However, these vehicle software services have different and contrasting timing, safety, and security requirements. Existing vehicle operating systems are ill-equipped to provide all the required service guarantees on a single machine. A centralized automotive system aims to tackle this by assigning software tasks to multiple criticality domains or levels according to their consequences of failure, or to international safety standards like ISO 26262. This research investigates several emerging challenges in time-critical systems for a centralized multicore automotive platform and proposes a novel vehicle operating system framework to address them. This thesis first introduces an integrated vehicle management system (VMS), called DriveOS™, for a PC-class multicore hardware platform. Its separation kernel design enables temporal and spatial isolation among critical and non-critical vehicle services in different domains on the same machine. Time- and safety-critical vehicle functions are implemented in a sandboxed Real-Time Operating System (RTOS) domain, and non-critical software is developed in a sandboxed general-purpose OS (e.g., Linux, Android) domain. To leverage the advantages of model-driven vehicle function development, DriveOS provides a multi-domain application framework in Simulink. This thesis also presents a real-time task pipeline scheduling algorithm for multiprocessors that supports communication between connected vehicle services with end-to-end guarantees. The benefits and performance of the overall automotive system framework are demonstrated with hardware-in-the-loop testing using real-world applications, car datasets and simulated benchmarks, and with an early-stage deployment in a production-grade luxury electric vehicle.
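
    One standard building block for giving a task pipeline an end-to-end guarantee is proportional deadline splitting, sketched below. This is a generic baseline for intuition only, not DriveOS's actual pipeline scheduling algorithm; the stage names and parameters are hypothetical.

```python
# A minimal sketch of proportional end-to-end deadline splitting for a
# pipeline of connected services; not the DriveOS algorithm.

def split_end_to_end_deadline(stages, e2e_deadline):
    """Assign each stage a local deadline proportional to its WCET, so the
    local deadlines sum to the end-to-end deadline."""
    total = sum(wcet for _, wcet in stages)
    return {name: e2e_deadline * wcet / total for name, wcet in stages}

def pipeline_schedulable(stages, e2e_deadline, period):
    """Each stage runs on its own processor here (a simplification); a stage
    is locally feasible if its WCET fits within its local deadline."""
    local = split_end_to_end_deadline(stages, e2e_deadline)
    return all(wcet <= local[name] <= period for name, wcet in stages)

# Hypothetical four-stage vehicle service pipeline: (stage name, WCET).
stages = [("sense", 2), ("fuse", 5), ("plan", 8), ("act", 1)]
print(split_end_to_end_deadline(stages, e2e_deadline=40))
print(pipeline_schedulable(stages, e2e_deadline=40, period=50))
```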

    Towards Efficient Explainability of Schedulability Properties in Real-Time Systems

    The notion of efficient explainability was recently introduced in the context of hard real-time scheduling: a claim that a real-time system is schedulable (i.e., that it will always meet all deadlines during run-time) is defined to be efficiently explainable if there is a proof of such schedulability that can be verified by a polynomial-time algorithm. We further explore this notion by (i) classifying a variety of common schedulability analysis problems according to whether they are efficiently explainable or not; and (ii) developing strategies for dealing with those determined not to be efficiently explainable, primarily by identifying practically meaningful sub-problems that are efficiently explainable.
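
    A small sketch of the kind of polynomial-time certificate check this definition allows: for fixed-priority scheduling, a claimed per-task response-time bound can be verified directly against the standard response-time recurrence, without re-running the fixed-point iteration that produced it. The task set and claimed bounds below are hypothetical, not examples from the paper.

```python
# Verifying a claimed fixed-priority schedulability certificate in polynomial
# time: each claimed response-time bound R_i must satisfy
#   C_i + sum_{j in hp(i)} ceil(R_i / T_j) * C_j <= R_i <= D_i.

from math import ceil

def verify_response_time_certificate(tasks, claimed):
    """tasks: list of (C, T, D) sorted from highest to lowest priority.
    claimed: list of asserted response-time bounds, one per task.
    Returns True iff every claim is a valid witness of schedulability."""
    for i, (C_i, T_i, D_i) in enumerate(tasks):
        R_i = claimed[i]
        # Interference from all higher-priority tasks over a window of length R_i.
        interference = sum(ceil(R_i / T_j) * C_j for C_j, T_j, _ in tasks[:i])
        if not (C_i + interference <= R_i <= D_i):
            return False        # the claim does not hold up; reject the explanation
    return True

# Hypothetical (C, T, D) per task, highest priority first.
tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
print(verify_response_time_certificate(tasks, claimed=[1, 3, 10]))   # True
print(verify_response_time_certificate(tasks, claimed=[1, 3, 9]))    # False
```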

    Worst-Case Communication Time Analysis for On-Chip Networks with Finite Buffers

    Network-on-Chip (NoC) is the ideal interconnection architecture for many-core systems due to its superior scalability and performance. An NoC must deliver critical messages from a real-time application within specific deadlines. A violation of this requirement may compromise the entire system operation. Therefore, a series of experiments considering worst-case scenarios must be conducted to verify whether deadlines can be satisfied. However, simulation-based experiments are time-consuming, and one alternative is schedulability analysis. In this context, this work proposes a schedulability analysis to accelerate design space exploration for real-time applications on NoC-based systems. The proposed worst-case analysis estimates the maximum latency of traffic flows, accounting for direct and indirect blocking. In addition, we consider the size of buffers to reduce the analysis' pessimism. We also present an extension of the analysis that includes self-blocking. We conduct a series of experiments to evaluate the proposed analysis against a cycle-accurate simulator. The experimental results show that the proposed solution provides tighter bounds and runs four orders of magnitude faster than the simulation.
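
    The flavor of such an analysis can be sketched with the classic fixed-point latency recurrence over a flow's direct interference set; the paper's buffer-aware and self-blocking refinements are not reproduced here, and the flow parameters below are hypothetical.

```python
# A minimal sketch of the classic fixed-point NoC latency analysis that
# buffer-aware analyses refine. Each flow has a basic network latency C, a
# period T, a deadline D, and a set of higher-priority flows sharing at least
# one link with it (its direct interference set).

from math import ceil

def worst_case_latency(flow, flows, max_iters=1000):
    """Iterate R = C + sum over direct interferers of ceil(R / T_j) * C_j
    until it converges or exceeds the deadline."""
    R = flow["C"]
    for _ in range(max_iters):
        interference = sum(ceil(R / flows[j]["T"]) * flows[j]["C"]
                           for j in flow["direct"])
        R_new = flow["C"] + interference
        if R_new == R:
            return R                       # converged: worst-case latency bound
        if R_new > flow["D"]:
            return None                    # deadline cannot be guaranteed
        R = R_new
    return None

flows = {
    "f1": {"C": 5,  "T": 40,  "D": 40,  "direct": []},
    "f2": {"C": 8,  "T": 60,  "D": 60,  "direct": ["f1"]},
    "f3": {"C": 12, "T": 120, "D": 120, "direct": ["f1", "f2"]},
}
for name, f in flows.items():
    print(name, worst_case_latency(f, flows))
```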

    Technical means for diagnostics and monitoring of onboard information exchange systems on an aircraft

    The work is published in accordance with the rector's order No. 311/od of 27.05.2021, "On placing higher education qualification works in the NAU repository". Thesis supervisor: Oleksandr Petrovych Slobodian, Associate Professor of the Avionics Department. Technical progress in aviation, as in any other industry, is closely tied to the automation of technological processes. Today, automation of technological processes is used to improve reliability, durability, environmental friendliness, resource efficiency and, most importantly, cost-effectiveness and ease of operation. Thanks to the rapid development of computer technology and microprocessors, it is possible to use more advanced and sophisticated methods for monitoring and controlling systems in the aviation industry and beyond. Microprocessor-based and electronic computing devices, connected through computing and control networks with shared databases, follow standards that allow new devices to be modified and integrated, which in turn makes it possible to integrate, improve, and manage production processes. Designing a Distributed Integrated Modular Avionics (DIMA) system using distributed integration technology, mixed scheduling of critical tasks, redundant real-time scheduling, and a time-triggered communication mechanism significantly improves the reliability, safety, and performance of an integrated real-time electronic system. DIMA represents the development trend of future avionics systems. This work studies and discusses the architectural characteristics of DIMA, then examines and analyzes in detail the development of the key technologies in a DIMA system, and finally considers the development trend of DIMA technology.

    Precise Scheduling of DAG Tasks with Dynamic Power Management


    Multi-Criteria Optimization of Real-Time DAGs on Heterogeneous Platforms under P-EDF

    This paper tackles the problem of optimal placement of complex real-time embedded applications on heterogeneous platforms. Applications are composed of directed acyclic graphs (DAGs) of tasks, where each DAG has a minimum inter-arrival period for its activation requests and an end-to-end deadline within which all of its computations need to complete after each activation. The platforms of interest are heterogeneous power-aware multi-core platforms with DVFS capabilities, including big.LITTLE Arm architectures, and platforms with GPU or FPGA hardware accelerators with Dynamic Partial Reconfiguration capabilities. Tasks can be deployed on CPUs using partitioned EDF-based scheduling. Additionally, some of the tasks may have an alternate implementation available for one of the accelerators on the target platform, which are assumed to serve requests in non-preemptive FIFO order. The system can be optimized by minimizing power consumption while respecting precise timing constraints; by maximizing the applications’ slack while respecting given power consumption constraints; or by a combination of these in a multi-objective formulation. We propose an off-line optimization of this problem based on mixed-integer quadratic constraint programming (MIQCP). The optimization provides the DVFS configuration of all the CPUs (or accelerators) capable of frequency switching and the placement to be followed by each task in the DAGs, including the software-vs-hardware implementation choice for tasks that can be hardware-accelerated. For larger problem instances, we developed heuristic solvers capable of providing suboptimal solutions in significantly less time than the MIQCP strategy, thus widening the applicability of the proposed framework. We validate the approach by running a set of randomly generated DAGs on Linux under SCHED_DEADLINE, deployed onto two real boards, one with an Arm big.LITTLE architecture and the other with FPGA acceleration, verifying that the experimental runs meet the theoretical expectations in terms of timing and power optimization goals.
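
    A greedy placement heuristic in the spirit of the paper's heuristic solvers (not the MIQCP formulation itself) might look like the sketch below; the cubic power model, the single-FIFO accelerator abstraction, and all task parameters are illustrative assumptions.

```python
# Illustrative greedy placement sketch: each task goes to the CPU/frequency
# pair, or to its accelerator implementation if it has one, that minimizes an
# estimated power cost while keeping every core's EDF utilization <= 1.

def place(tasks, cores, freqs):
    """tasks: (name, wcet_at_fmax, period, accel_wcet or None).
    cores: number of identical DVFS-capable CPUs.
    freqs: available normalized frequencies, e.g. [0.5, 0.75, 1.0]."""
    load = [0.0] * cores
    accel_busy = 0.0                       # accelerator modeled as one FIFO server
    plan = []
    for name, wcet, period, accel_wcet in tasks:
        options = []
        for c in range(cores):
            for f in freqs:
                u = (wcet / f) / period    # execution time scales with 1/f
                if load[c] + u <= 1.0:
                    options.append((f ** 3 * u, ("cpu", c, f), u))   # ~f^3 power proxy
        if accel_wcet is not None and accel_busy + accel_wcet / period <= 1.0:
            options.append((0.1 * accel_wcet / period, ("accel",), accel_wcet / period))
        if not options:
            return None                    # heuristic failed; fall back to exact solver
        cost, choice, u = min(options)
        if choice[0] == "cpu":
            load[choice[1]] += u
        else:
            accel_busy += u
        plan.append((name, choice))
    return plan

# Hypothetical tasks: t1 also has a hardware-accelerated implementation.
tasks = [("t1", 10, 50, 2), ("t2", 20, 100, None), ("t3", 5, 25, None)]
print(place(tasks, cores=2, freqs=[0.5, 0.75, 1.0]))
```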

    Using Simultaneous Multithreading to Support Real-Time Scheduling

    The goal of real-time scheduling is to find a way to schedule every program in a specified system without unacceptable deadline misses. If doing so on a given hardware platform is not possible, then the question to ask is "What can be changed?" Simultaneous multithreading (SMT) is a technology that allows a single computer core to execute multiple programs at once, at the cost of increasing the time required to execute individual programs. SMT has been shown to improve performance in many areas of computing, but it has seen little application to the real-time domain. Reasons for not using SMT in real-time systems include the difficulty of knowing how much execution time a program will require when SMT is in use, concerns that longer execution times could cause unacceptable deadline misses, and the difficulty of deciding which programs should and should not use SMT to share a core. This dissertation shows how SMT can be used to support real-time scheduling in both the hard real-time (HRT) case, where deadline misses are never acceptable, and the soft real-time (SRT) case, where deadline misses are undesirable but tolerable. Contributions can be divided into three categories. First, the effects of SMT on execution times are measured and parameters for modeling the effects of SMT are given. Second, scheduling algorithms for the SRT case that take advantage of SMT are given and evaluated. Third, scheduling algorithms for the HRT case are given and evaluated. In both the SRT and HRT cases, using the proposed algorithms does not lead to unacceptable deadline misses and can have effects similar to increasing a platform's core count by a third or more.
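
    A toy sketch of the core decision such schedulers face: whether pairing two tasks on a core's SMT threads, with execution times inflated by a measured factor, still leaves both schedulable, and how that affects the number of cores needed. The inflation factor, the one-task-or-one-pair-per-core simplification, and the task parameters are hypothetical, not the dissertation's measured models.

```python
# Deciding whether two tasks may share a core via SMT, given an assumed
# multiplicative inflation of their execution times when co-run.

def smt_pair_ok(task_a, task_b, inflation):
    """Accept the SMT pairing only if both tasks stay within their (implicit)
    deadlines after inflating execution times; the two hardware threads run in
    parallel, so each task only needs its own inflated utilization <= 1."""
    Ca, Ta = task_a
    Cb, Tb = task_b
    return (Ca * inflation) / Ta <= 1.0 and (Cb * inflation) / Tb <= 1.0

def cores_needed(tasks, inflation):
    """Compare cores needed without SMT (one task per core, a deliberate
    simplification) against cores needed when adjacent tasks are paired."""
    no_smt = len(tasks)
    paired, i = 0, 0
    while i < len(tasks):
        if i + 1 < len(tasks) and smt_pair_ok(tasks[i], tasks[i + 1], inflation):
            paired += 1
            i += 2
        else:
            paired += 1
            i += 1
    return no_smt, paired

tasks = [(3, 10), (4, 10), (2, 8), (5, 12)]          # hypothetical (C, T) pairs
print(cores_needed(tasks, inflation=1.4))            # e.g. (4, 2)
```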

    Software Fault Tolerance in Real-Time Systems: Identifying the Future Research Questions

    Tolerating hardware faults in modern architectures is becoming a prominent problem due to the miniaturization of hardware components, their increasing complexity, and the need to reduce costs. Software-Implemented Hardware Fault Tolerance approaches have been developed to improve system dependability against hardware faults without resorting to custom hardware solutions. However, they come at the expense of making the timing constraints of the applications/activities harder to satisfy from a scheduling standpoint. This paper surveys the current state of the art of fault tolerance approaches used in the context of real-time systems, identifying the main challenges and the cross-links between these two topics. We propose a joint scheduling-failure analysis model that highlights the formal interactions between software fault tolerance mechanisms and timing properties. This model allows us to present and discuss many open research questions, with the final aim of spurring future research activities.
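
    One classic way the two topics interact, which such a joint model generalizes, is fixed-priority response-time analysis extended with a re-execution recovery term, assuming transient faults arrive at least T_F time units apart. The sketch below follows that classic style with hypothetical parameters; it is not the model proposed in the paper.

```python
# Fault-tolerant response-time analysis with re-execution recovery: every
# fault within the response window forces the costliest re-execution among
# this task and its higher-priority tasks.

from math import ceil

def ft_response_time(i, tasks, T_F, max_iters=1000):
    """tasks: list of (C, T, D), highest priority first; recovery of task k
    re-executes the affected job, so its recovery cost is C_k."""
    C_i, T_i, D_i = tasks[i]
    R = C_i
    for _ in range(max_iters):
        interference = sum(ceil(R / T_j) * C_j for C_j, T_j, _ in tasks[:i])
        recovery = ceil(R / T_F) * max(C for C, _, _ in tasks[:i + 1])
        R_new = C_i + interference + recovery
        if R_new == R:
            return R if R <= D_i else None
        if R_new > D_i:
            return None                    # deadline miss possible under faults
        R = R_new
    return None

# Hypothetical task set and minimum fault inter-arrival time.
tasks = [(1, 5, 5), (2, 10, 10), (3, 20, 20)]
print([ft_response_time(i, tasks, T_F=50) for i in range(len(tasks))])
```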