524 research outputs found

    A Survey and Comparative Study of Hard and Soft Real-time Dynamic Resource Allocation Strategies for Multi/Many-core Systems

    Get PDF
    Multi-/many-core systems are envisioned to satisfy the ever-increasing performance requirements of complex applications in various domains such as embedded and high-performance computing. Such systems need to cater to increasingly dynamic workloads, requiring efficient dynamic resource allocation strategies to satisfy hard or soft real-time constraints. This article provides an extensive survey of hard and soft real-time dynamic resource allocation strategies proposed since the mid-1990s and highlights the emerging trends for multi-/many-core systems. The survey covers a taxonomy of the resource allocation strategies and considers their various optimization objectives, which are used to provide a comprehensive comparison. The strategies employ various principles, such as market and biological concepts, to perform the optimizations. The trends followed by the resource allocation strategies, open research challenges, and likely emerging research directions are also discussed.
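    To make the notion of run-time resource allocation concrete, the sketch below shows one very simple utilization-based heuristic: an arriving task is admitted onto the least-loaded core if its utilization still fits. This is only an illustrative sketch, not a strategy from the survey; the Core/Task classes, parameters, and the admission test are hypothetical.

```python
# Minimal sketch of a run-time resource allocator for a multi-/many-core
# system: each arriving (soft) real-time task is placed on the core with the
# most remaining utilization slack, and rejected if no core can fit it.
# All task and core parameters are hypothetical.

from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    utilization: float = 0.0          # fraction of capacity already committed

@dataclass
class Task:
    name: str
    wcet: float                       # worst-case execution time (ms)
    period: float                     # period, used here as the deadline (ms)

def allocate(task: Task, cores: list[Core]) -> Core | None:
    """Place the task on the least-loaded core if it still fits (U <= 1)."""
    u = task.wcet / task.period
    best = min(cores, key=lambda c: c.utilization)
    if best.utilization + u <= 1.0:
        best.utilization += u
        return best
    return None                       # admission control: reject the task

cores = [Core(i) for i in range(4)]
for t in (Task("sensor", 2, 10), Task("codec", 8, 20), Task("ctrl", 1, 5)):
    target = allocate(t, cores)
    print(t.name, "->", f"core {target.core_id}" if target else "rejected")
```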

    A Survey of Fault-Tolerance Techniques for Embedded Systems from the Perspective of Power, Energy, and Thermal Issues

    Get PDF
    The relentless technology scaling has provided a significant increase in processor performance, but on the other hand, it has led to adverse impacts on system reliability. In particular, technology scaling increases the processor susceptibility to radiation-induced transient faults. Moreover, technology scaling with the discontinuation of Dennard scaling increases the power densities, and thereby the temperatures, on the chip. High temperature, in turn, accelerates transistor aging mechanisms, which may ultimately lead to permanent faults on the chip. To ensure reliable system operation despite these potential reliability concerns, fault-tolerance techniques have emerged. Specifically, fault-tolerance techniques employ some kind of redundancy to satisfy specific reliability requirements. However, the integration of fault-tolerance techniques into real-time embedded systems complicates preserving timing constraints. As a remedy, many task mapping/scheduling policies have been proposed to consider the integration of fault-tolerance techniques and enforce both timing and reliability guarantees for real-time embedded systems. More advanced techniques additionally aim at minimizing power and energy while satisfying timing and reliability constraints. Recently, some scheduling techniques have started to tackle a new challenge: the temperature increase induced by employing fault-tolerance techniques. These emerging techniques aim at satisfying temperature constraints besides timing and reliability constraints. This paper provides an in-depth survey of the emerging research efforts that exploit fault-tolerance techniques while considering timing, power/energy, and temperature from the real-time embedded systems’ design perspective. In particular, the task mapping/scheduling policies for fault-tolerant real-time embedded systems are reviewed and classified according to their considered goals and constraints. Moreover, the employed fault-tolerance techniques, application models, and hardware models are considered as additional dimensions of the presented classification. Lastly, this survey gives deep insights into the main achievements and shortcomings of the existing approaches and highlights the most promising ones.
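    One common way such scheduling work quantifies the time/reliability trade-off is time redundancy (re-execution) under an exponential transient-fault model whose rate grows as the voltage/frequency is lowered. The sketch below is a minimal illustration of that trade-off; the model form is a commonly used one in this literature, but lambda0, d, f_min, and the task parameters here are hypothetical values, not taken from the survey.

```python
# Minimal sketch: reliability of one task with and without a re-execution
# (time redundancy), under an exponential transient-fault model whose rate
# increases as the normalized frequency f is lowered.  All constants are
# hypothetical.

import math

def fault_rate(f, lambda0=1e-6, d=3.0, f_min=0.4):
    """Average transient-fault rate at normalized frequency f (f_min <= f <= 1)."""
    return lambda0 * 10 ** (d * (1.0 - f) / (1.0 - f_min))

def reliability(wcet, f, recoveries=0):
    """Probability that at least one of (1 + recoveries) executions is fault-free."""
    r_single = math.exp(-fault_rate(f) * wcet / f)   # Poisson fault arrivals
    return 1.0 - (1.0 - r_single) ** (1 + recoveries)

wcet = 50.0  # execution time (ms) at the maximum frequency
for f in (1.0, 0.6):
    print(f"f={f:.1f}  no recovery: {reliability(wcet, f):.9f}"
          f"  one recovery: {reliability(wcet, f, recoveries=1):.9f}")
```

    The numbers make the tension visible: lowering the frequency raises the fault rate, while a single re-execution slot restores reliability at the cost of schedulable time.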

    A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems

    Full text link
    Recent technological advances have greatly improved the performance and features of embedded systems. With the number of mobile devices alone now approaching the population of the Earth, embedded systems have truly become ubiquitous. These trends, however, have also made the task of managing their power consumption extremely challenging. In recent years, several techniques have been proposed to address this issue. In this paper, we survey the techniques for managing the power consumption of embedded systems. We discuss the need for power management and provide a classification of the techniques on several important parameters to highlight their similarities and differences. This paper is intended to help researchers and application developers gain insight into the working of power management techniques and design even more efficient high-performance embedded systems of tomorrow.
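    A recurring building block in this class of techniques is dynamic voltage and frequency scaling (DVFS). The sketch below contrasts, for a single job with a deadline, a race-to-idle policy with running slower at reduced voltage, using the first-order dynamic-power model P = C·V²·f and a linear V–f relation. It is only a back-of-the-envelope illustration; all constants are hypothetical and no specific surveyed technique is implemented.

```python
# Minimal sketch: energy of one job under "race-to-idle" (full speed, then
# sleep) versus DVFS (stretch the job toward its deadline at lower V/f),
# using P_dyn = C * V^2 * f and a linear V-f relation.  All constants are
# hypothetical.

def energy(work, f, deadline, c=1.0, v_max=1.0, f_max=1.0, p_idle=0.05):
    v = v_max * (f / f_max)            # simplistic linear V-f scaling
    busy = work / f                    # seconds spent computing at frequency f
    assert busy <= deadline, "frequency too low to meet the deadline"
    return c * v * v * f * busy + p_idle * (deadline - busy)

work, deadline = 0.5, 1.0              # job needs 0.5 s at f_max; 1 s deadline
print("race-to-idle (f=1.0):", round(energy(work, 1.0, deadline), 3), "J")
print("DVFS        (f=0.5):", round(energy(work, 0.5, deadline), 3), "J")
```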

    Learning-based run-time power and energy management of multi/many-core systems: current and future trends

    Get PDF
    Multi/many-core systems are prevalent in several application domains targeting different scales of computing, such as embedded and cloud computing. These systems are able to fulfil the ever-increasing performance requirements by exploiting their parallel processing capabilities. However, effective power/energy management is required during system operation for several reasons, such as increasing the operational time of battery-operated systems, reducing the energy cost of data centers, and improving thermal efficiency and reliability. This article provides an extensive survey of learning-based run-time power/energy management approaches. The survey includes a taxonomy of the learning-based approaches. These approaches perform design-time and/or run-time power/energy management by employing learning principles such as reinforcement learning. The survey also highlights the trends followed by learning-based run-time power management approaches, their upcoming directions, and open research challenges.
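    As a toy illustration of the reinforcement-learning flavour mentioned above, the sketch below trains a tabular Q-learning agent that picks a DVFS level for a binned utilization state, trading an energy proxy against a deadline-miss penalty. The states, actions, reward weights, and synthetic workload are all hypothetical; it demonstrates the learning loop only, not any approach classified in the survey.

```python
# Minimal sketch: tabular Q-learning power manager that selects a DVFS level
# (action) for the current utilization bin (state).  Reward = -(energy proxy)
# minus a penalty when the chosen frequency cannot cover the utilization.
# All numbers are hypothetical.

import random

levels = [0.5, 0.75, 1.0]                      # normalized V/f levels (actions)
bins = 4                                       # utilization bins (states)
Q = [[0.0] * len(levels) for _ in range(bins)]
alpha, gamma, eps = 0.3, 0.9, 0.1

def reward(util, f):
    missed = util > f                          # crude deadline-miss proxy
    return -(f ** 3) - (5.0 if missed else 0.0)

state = random.randrange(bins)
for _ in range(20_000):
    a = (random.randrange(len(levels)) if random.random() < eps
         else max(range(len(levels)), key=lambda i: Q[state][i]))
    util = (state + random.random()) / bins    # utilization drawn within this bin
    r = reward(util, levels[a])
    nxt = random.randrange(bins)               # next workload phase (synthetic)
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

for s in range(bins):
    best = max(range(len(levels)), key=lambda i: Q[s][i])
    print(f"utilization bin {s} -> V/f level {levels[best]}")
```

    After training, low-utilization bins should map to the lowest V/f level and high-utilization bins to the highest, which is the intuitive policy a run-time power manager is expected to learn.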

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    Get PDF
    The increasing demand for processing a growing number of applications and the associated data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms are also expected to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers in terms of state-of-the-art contributions and upcoming trends.

    Compiler-directed energy reduction using dynamic voltage scaling and voltage Islands for embedded systems

    Get PDF
    Addressing power and energy consumption related issues early in the system design flow ensures good design and minimizes iterations for a faster turnaround time. In particular, optimizations at the software level, e.g., those supported by compilers, are very important for minimizing the energy consumption of embedded applications. Recent research demonstrates that voltage islands provide the flexibility to reduce power by selectively shutting down different regions of the chip and/or running selected parts of the chip at different voltage/frequency levels. In contrast to most prior work on voltage islands, which mainly focused on architecture design and IP placement related issues, this paper studies the necessary software compiler support for voltage islands. Specifically, we focus on an embedded multiprocessor architecture that supports both voltage islands and control domains within these islands, and determine how an optimizing compiler can automatically map an embedded application onto this architecture. Such automated support is critical, since it is unrealistic to expect an application programmer to arrive at a good mapping that balances multiple factors such as performance and energy at the same time. Our experiments with the proposed compiler support show that our approach is very effective in reducing energy consumption. The experiments also show that the energy savings we achieve are consistent across a wide range of values of our major simulation parameters.
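    To give a feel for the kind of decision such compiler support automates, the sketch below picks, for each voltage island, the lowest shared voltage/frequency level at which the island's (serialized) workload still meets its deadline. This is a simplified illustration only: the island contents, level table, and deadline are hypothetical, and the paper's actual mapping algorithm is not reproduced here.

```python
# Minimal sketch: per-island voltage/frequency selection.  Every task on an
# island runs at the island's single shared (V, f) level; we pick the lowest
# level whose serialized execution time still fits the deadline.
# All values are hypothetical.

levels = [(0.8, 0.5), (0.9, 0.75), (1.0, 1.0)]    # (voltage, normalized freq), ascending

def island_setting(task_cycles, deadline):
    """Lowest (V, f) such that the island's total work finishes by the deadline."""
    total = sum(task_cycles)                      # normalized cycles on this island
    for v, f in levels:
        if total / f <= deadline:
            return (v, f)
    return None                                   # infeasible even at full speed

islands = {"island0": [0.20, 0.30], "island1": [0.60, 0.35]}
deadline = 1.2
for name, tasks in islands.items():
    print(name, "->", island_setting(tasks, deadline))
```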

    A Survey of Pipelined Workflow Scheduling: Models and Algorithms

    Get PDF
    A large class of applications needs to execute the same workflow on different data sets of identical size. Efficient execution of such applications necessitates intelligent distribution of the application components and tasks on a parallel machine, and the execution can be orchestrated by utilizing task-, data-, pipelined-, and/or replicated-parallelism. The scheduling problem that encompasses all of these techniques is called pipelined workflow scheduling, and it has been widely studied in the last decade. Multiple models and algorithms have flourished to tackle various programming paradigms, constraints, machine behaviors, or optimization goals. This paper surveys the field by summing up and structuring known results and approaches.
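    The central quantities in this problem are easy to illustrate: once consecutive stages are grouped onto processors (an interval mapping), the period is the heaviest processor load and the throughput is its inverse. The sketch below evaluates one such mapping; the stage costs and the mapping are hypothetical, and communication costs are ignored for simplicity.

```python
# Minimal sketch: evaluating an interval mapping for a linear pipeline.
# Consecutive stages are grouped onto processors; the period (time between
# consecutive outputs) is the maximum per-processor load.  Stage costs and
# the mapping are hypothetical; communication costs are ignored.

stage_cost = [4, 2, 7, 1, 3, 5]            # per-data-set work of each stage
mapping = [(0, 1), (2, 2), (3, 5)]         # (first_stage, last_stage) per processor

loads = [sum(stage_cost[a:b + 1]) for a, b in mapping]
period = max(loads)
latency = sum(loads)                       # time for one data set to traverse the pipeline

print("per-processor loads:", loads)       # -> [6, 7, 9]
print("period:", period, "| throughput:", round(1 / period, 3), "| latency:", latency)
```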

    DESIGN METHODOLOGIES FOR RELIABLE AND ENERGY-EFFICIENT MULTIPROCESSOR SYSTEM

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Optimization and Mining Methods for Effective Real-Time Embedded Systems

    Get PDF
    The Internet of Things (IoT) is the network of interrelated devices or objects, such as self-driving cars, home appliances, smartphones, and other embedded computing systems. It combines hardware, software, and network connectivity, enabling data processing using powerful cloud data centers. However, the exponential rise of IoT applications has reshaped our beliefs about cloud computing, and long-lasting certainties about its capabilities have had to be updated. Classical centralized cloud computing is encountering several challenges, such as traffic latency, response time, and data privacy. Thus, the trend in processing the data generated by IoT-interconnected embedded devices has shifted towards doing more computation closer to the device, at the edge of the network. This possibility of on-device processing helps to reduce latency for critical real-time applications and enables better processing of the massive amounts of data being generated by these devices. Succeeding in this transition towards edge computing requires the design of high-performance embedded systems by efficiently exploring design alternatives (i.e., efficient Design Space Exploration), optimizing the deployment topology of multi-processor based real-time embedded systems (i.e., the way the software utilizes the hardware), and lightweight mining techniques enabling smarter functioning of these devices. Recent research efforts on embedded systems have led to various automated approaches facilitating their design and the improvement of their functioning. However, existing methods and techniques present several major challenges, which are all the more relevant for real-time embedded systems. Four of the main challenges are: (1) the lack of online data mining techniques that can enhance the functioning of embedded computing systems on the fly; (2) the inefficient usage of the computing resources of multi-processor systems when deploying software on them; (3) the pseudo-random exploration of the design space; and (4) the selection of a suitable implementation after performing the optimization process.
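    Challenges (3) and (4) above can be illustrated with a tiny Design Space Exploration sketch: enumerate candidate configurations, keep the Pareto-optimal ones for two objectives (here energy and latency), and pick a knee point by normalized distance to the ideal point. The design points and the knee criterion are hypothetical placeholders, and a real flow would use a guided search rather than enumeration; this is not the thesis' algorithm.

```python
# Minimal sketch: Pareto filtering of hypothetical design points (energy,
# latency) followed by knee-point selection via normalized distance to the
# ideal point.  Illustrative only.

configs = {"A": (5.0, 9.0), "B": (6.5, 6.0), "C": (9.0, 5.5),
           "D": (7.0, 8.0), "E": (12.0, 4.0)}        # name: (energy, latency)

def pareto(points):
    """Keep points not dominated in both objectives by any other point."""
    return {n: p for n, p in points.items()
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points.values())}

front = pareto(configs)
e_lo = min(p[0] for p in front.values()); e_hi = max(p[0] for p in front.values())
l_lo = min(p[1] for p in front.values()); l_hi = max(p[1] for p in front.values())

def knee_dist(p):
    return ((p[0] - e_lo) / (e_hi - e_lo)) ** 2 + ((p[1] - l_lo) / (l_hi - l_lo)) ** 2

selected = min(front, key=lambda n: knee_dist(front[n]))
print("Pareto front:", sorted(front), "| selected configuration:", selected)
```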