292 research outputs found


    To reduce the latency of time-sensitive flows in Ethernet networks, the IEEE TSN Task Group introduced the IEEE 802.1Qbu standard, which specifies a 1-level preemption scheme for IEEE 802.1 networks. Recently, serious limitations of this scheme with respect to flow responsiveness were exposed, and the so-called multi-level preemption approach was proposed to address these drawbacks. As in most, if not all, preemptive real-time and time-sensitive systems, an appropriate priority-to-flow assignment policy plays a central role in the performance of both the 1-level and the multi-level preemption schemes, as a poor assignment leads to over-provisioning or sub-optimal use of hardware resources. The multi-level preemption scheme also raises new configuration challenges: determining the right number of preemption levels to enable for swift transmission of flows, and synthesizing the flow-to-preemption-class assignment, both remain open problems. To the best of our knowledge, no prior work in the literature addresses these challenges. In this work, we address all three challenges and demonstrate the applicability of the proposed solution on both synthetic and real-life use cases. Our experimental results show that multi-level preemption schemes improve the schedulability of flows by over 12% compared to a 1-level preemption scheme and that, at a higher abstraction level, the proposed configuration framework improves the schedulability of flows by up to 6% compared to the dominant Deadline Monotonic Priority Ordering. This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020), and by FCT through the European Social Fund (ESF) and the Regional Operational Programme (ROP) Norte 2020, under grant 2020.09636.BD.
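Since the configuration framework is benchmarked against Deadline Monotonic Priority Ordering, the following is a minimal sketch of how a deadline-monotonic priority-to-flow assignment can be computed. The `Flow` structure and the example deadlines are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    deadline_us: int  # relative deadline in microseconds (assumed attribute)

def deadline_monotonic_priorities(flows):
    """Assign priorities by non-decreasing relative deadline (0 = highest priority)."""
    ordered = sorted(flows, key=lambda f: f.deadline_us)
    return {f.name: prio for prio, f in enumerate(ordered)}

# Hypothetical flow set: the shorter the deadline, the higher the priority.
flows = [Flow("camera", 500), Flow("control", 250), Flow("diagnostics", 2000)]
print(deadline_monotonic_priorities(flows))  # {'control': 0, 'camera': 1, 'diagnostics': 2}
```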

    Optimization and Mining Methods for Effective Real-Time Embedded Systems

    The Internet of Things (IoT) is the network of interrelated devices or objects, such as self-driving cars, home appliances, smartphones, and other embedded computing systems. It combines hardware, software, and network connectivity, enabling data processing in powerful cloud data centers. However, the exponential rise of IoT applications has reshaped our beliefs about cloud computing, and long-standing certainties about its capabilities have had to be updated. Classical centralized cloud computing now faces several challenges, such as traffic latency, response time, and data privacy. Thus, the processing of the data generated by interconnected embedded IoT devices has shifted towards doing more computation closer to the device, at the edge of the network. This ability to do on-device processing helps reduce latency for critical real-time applications and enables better processing of the massive amounts of data these devices generate. Succeeding in this transition towards edge computing requires designing high-performance embedded systems by efficiently exploring design alternatives (i.e. efficient Design Space Exploration), optimizing the deployment topology of multi-processor-based real-time embedded systems (i.e. the way the software utilizes the hardware), and devising lightweight mining techniques that enable smarter functioning of these devices. Recent research efforts on embedded systems have led to various automated approaches that facilitate their design and improve their functioning. However, existing methods and techniques present several major challenges, which are especially relevant for real-time embedded systems. Four of the main challenges are: (1) the lack of online data-mining techniques that can enhance the functioning of embedded computing systems on the fly; (2) the inefficient usage of the computing resources of multi-processor systems when deploying software on them; (3) the pseudo-random exploration of the design space; and (4) the selection of a suitable implementation after performing the optimization process.
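Challenge (4) above, selecting a configuration from the set of optimal solutions, typically starts from the Pareto front of the explored design points. The sketch below is a minimal, hypothetical illustration assuming two minimized objectives (latency and energy); the candidate values are invented.

```python
def pareto_front(points):
    """Keep the design points that no other point dominates (both objectives minimized)."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

# (latency_ms, energy_mJ) of candidate deployments -- illustrative values only.
candidates = [(12, 40), (10, 55), (15, 30), (13, 45)]
print(pareto_front(candidates))  # [(12, 40), (10, 55), (15, 30)]
```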

    Memory-Aware Genetic Algorithms for Task Mapping on Hard Real-Time Networks-on-Chip

    The problem of mapping hard real-time tasks onto networks-on-chip has previously been successfully addressed by genetic algorithms. However, none of the existing problem formulations consider memory constraints. State-of-the-art genetic mappers are therefore able to find fully schedulable mappings that are incompatible with the memory limitations of realistic platforms. In this paper, we extend the problem formulation and devise a memory architecture in the form of private local memories. We then propose three memory models of increasing complexity and realism, and evaluate the impact these additional constraints have on the genetic search. We conduct extensive experiments using tasks and communications from a realistic benchmark application, and compare the proposed approach against a state-of-the-art baseline mapper.
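As a rough illustration of the kind of search involved (not the paper's actual encoding), the sketch below maps tasks to cores with a genetic algorithm whose fitness combines a schedulability proxy (maximum core utilization) with a penalty for exceeding each core's private local memory. All task parameters, the memory budget, and the penalty weight are assumptions.

```python
import random

# Hypothetical task set: (CPU utilization, memory footprint in KB).
TASKS = [(0.20, 64), (0.35, 128), (0.15, 32), (0.25, 96), (0.10, 48)]
N_CORES = 2
LOCAL_MEM_KB = 192  # assumed private local memory per core

def fitness(mapping):
    """Lower is better: max core utilization plus a penalty per KB of memory overflow."""
    util = [0.0] * N_CORES
    mem = [0] * N_CORES
    for (u, m), core in zip(TASKS, mapping):
        util[core] += u
        mem[core] += m
    overflow = sum(max(0, m - LOCAL_MEM_KB) for m in mem)
    return max(util) + 0.01 * overflow

def evolve(pop_size=20, generations=50, mutation_rate=0.2):
    pop = [[random.randrange(N_CORES) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(TASKS))] = random.randrange(N_CORES)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(evolve())  # e.g. [0, 1, 0, 1, 0] -- a memory-feasible, load-balanced mapping
```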

    A Probabilistic Approach for the System-Level Design of Multi-ASIP Platforms


    Software development of reconfigurable real-time systems : from specification to implementation

    This thesis deals with reconfigurable real-time systems, addressing real-time task scheduling problems on mono-core and multi-core architectures. Real-time systems run under hard constraints on their execution time, and meeting these timing constraints determines the reliability and accuracy of such systems; reconfigurable real-time systems must additionally satisfy reconfiguration constraints. The main focus of this thesis is on providing guidelines, methods, and tools for the synthesis of feasible reconfigurable real-time systems on mono-processor and multi-processor architectures. The development of these systems faces various challenges, particularly in terms of stability, energy consumption, response time, and blocking time, and its cost is strongly affected by wrong design decisions taken in the early phases of development. To address this problem, we propose a new strategy comprising i) the allocation of functions and the placement and scheduling of tasks to execute real-time applications on mono-core and multi-core architectures, ii) an optimization step based on Mixed Integer Linear Programming (MILP), and iii) a guidance tool that assists designers in implementing a feasible reconfigurable multi-core real-time system from the specification level down to the implementation level. We apply and simulate the contribution on a case study and compare the results with related work in order to show the originality of this methodology.
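To give a flavour of the MILP-based optimization step (this is not the thesis's actual model), the sketch below places tasks on cores while minimizing the utilization of the most loaded core, using the PuLP modelling library. The task utilizations and core names are illustrative.

```python
import pulp  # pip install pulp (ships with the CBC solver)

tasks = {"t1": 0.20, "t2": 0.35, "t3": 0.15, "t4": 0.25}  # task -> CPU utilization (assumed)
cores = ["c0", "c1"]

prob = pulp.LpProblem("task_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(tasks), cores), cat="Binary")  # x[t][c] = 1 if t runs on c
u_max = pulp.LpVariable("u_max", lowBound=0)

prob += u_max  # objective: minimize the load of the most utilized core
for t in tasks:
    prob += pulp.lpSum(x[t][c] for c in cores) == 1       # each task placed exactly once
for c in cores:
    load = pulp.lpSum(tasks[t] * x[t][c] for t in tasks)
    prob += load <= u_max                                  # u_max bounds every core's load
    prob += load <= 1.0                                    # never overload a core

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = {t: next(c for c in cores if x[t][c].value() > 0.5) for t in tasks}
print(placement, pulp.value(u_max))
```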

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, which makes codifying this knowledge laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle-routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem and show that it generates solutions substantially superior to those produced by human domain experts, at a rate up to 9.5 times faster than an optimization approach, and that it can be applied to optimally solve problems twice as complex as those solved by a human demonstrator. Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
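The pairwise-ranking formulation can be illustrated with a small, hypothetical sketch: at each recorded scheduling decision, the expert's chosen candidate is ranked above the rejected ones, and a classifier is trained on the feature differences. The features and data below are invented, and scikit-learn logistic regression stands in for whatever model the authors actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each decision: the feature vector of the task the expert scheduled next,
# plus the feature vectors of the candidates that were passed over.
# Features here are invented, e.g. [slack, travel_cost].
decisions = [
    (np.array([1.0, 0.2]), [np.array([3.0, 0.9]), np.array([2.5, 0.7])]),
    (np.array([0.8, 0.1]), [np.array([2.0, 0.6])]),
]

# Pairwise examples: (chosen - rejected) labelled 1, the reverse labelled 0.
X, y = [], []
for chosen, rejected in decisions:
    for r in rejected:
        X.append(chosen - r); y.append(1)
        X.append(r - chosen); y.append(0)

model = LogisticRegression().fit(np.array(X), np.array(y))

def pick(candidates):
    """Rank candidates by how strongly each is preferred over the others."""
    scores = [sum(model.decision_function([c - o])[0] for o in candidates if o is not c)
              for c in candidates]
    return candidates[int(np.argmax(scores))]

print(pick([np.array([1.2, 0.3]), np.array([2.8, 0.8])]))  # expected to prefer the low-slack candidate
```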

    Model-based optimization of ARINC-653 partition scheduling


    Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems

    This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and of the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task- and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
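As a loose illustration of cache partitioning driven by task-specific WCET sensitivity (a simplified stand-in for, not a reproduction of, the TCPS approach), the sketch below greedily grants cache ways to the task whose WCET would drop the most. The WCET-versus-ways tables are invented.

```python
# Hypothetical WCET (microseconds) of each task as a function of allocated cache ways,
# indexed from 0 to TOTAL_WAYS; values are invented for illustration.
WCET = {
    "video":   [900, 700, 560, 470, 420, 400, 395, 392, 390],
    "control": [300, 260, 250, 248, 247, 246, 246, 246, 246],
    "logging": [150, 148, 147, 147, 147, 147, 147, 147, 147],
}
TOTAL_WAYS = 8

def allocate_ways():
    """Greedy: repeatedly grant one way to the task with the largest WCET reduction."""
    alloc = {t: 0 for t in WCET}
    for _ in range(TOTAL_WAYS):
        gains = {t: WCET[t][alloc[t]] - WCET[t][alloc[t] + 1] for t in WCET}
        alloc[max(gains, key=gains.get)] += 1
    return alloc

print(allocate_ways())  # most ways should go to the cache-sensitive 'video' task
```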