    Fault Tolerant Task Mapping in Many-Core Systems

    The advent of many-core systems, networks on chip containing hundreds or thousands of homogeneous processor cores, presents new challenges in managing the cores effectively in response to processing demands, hardware faults and the need for heat management. Continually diminishing device feature sizes increase the probability of fabrication defects and the variability in performance of individual transistors. In many-core systems this can result in the failure of individual processing cores, routing nodes or communication links, which requires the use of fault-tolerant mechanisms. Diminishing feature size also increases the power density of devices, giving rise to the concept of dark silicon, where only a portion of the functionality available on a chip can be active at any one time. Core fault tolerance and management of dark silicon can both be achieved by allocating a percentage of cores to be idle at any one time: idle cores can be used as dark silicon to evenly distribute the heat generated by processing cores, and as spare cores to implement fault tolerance. Both can be achieved by the dynamic allocation of tasks to cores in response to changes in the status of hardware resources and the demands placed on the system, which in turn requires real-time task mapping. This research proposes the use of a continuous fault/recovery cycle, implementing graceful degradation and amelioration, to provide real-time fault tolerance. Objective measures for core fault tolerance, link fault tolerance, network power and excess traffic have been developed for use by a multi-objective evolutionary algorithm that uses knowledge of the processing demands and hardware status to identify optimal task mappings. The fault/recovery cycle is shown to be effective in maintaining a high level of performance of a many-core array when presented with a series of hardware faults.
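
    The mapping problem described above lends itself to a compact illustration. The following is a minimal, hypothetical Python sketch of multi-objective evaluation of task mappings on a NoC mesh; the mesh size, traffic volumes, and the two surrogate objectives (hop-weighted traffic as a proxy for network power, and distance to the nearest spare core as a proxy for core fault tolerance) are illustrative assumptions, not the thesis's actual objective measures, and a naive Pareto archive over random samples stands in for the evolutionary algorithm.

```python
# Hypothetical sketch: evaluate candidate task mappings on a NoC mesh against
# two surrogate objectives. Names and cost models are illustrative assumptions.
import random

MESH = 4                      # 4x4 mesh of cores
CORES = [(x, y) for x in range(MESH) for y in range(MESH)]
TASKS = list(range(10))       # 10 tasks; 6 cores stay idle as spares/dark silicon
# Assumed communication volume between each pair of tasks.
TRAFFIC = {(i, j): random.randint(0, 5) for i in TASKS for j in TASKS if i < j}

def hops(a, b):
    """XY-routing hop count between two mesh coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def network_power(mapping):
    """Proxy for network power: traffic volume weighted by hop distance."""
    return sum(v * hops(mapping[i], mapping[j]) for (i, j), v in TRAFFIC.items())

def spare_dispersion(mapping):
    """Fault-tolerance proxy: mean distance from each used core to its
    nearest idle (spare) core; lower is better."""
    used = set(mapping.values())
    spares = [c for c in CORES if c not in used]
    return sum(min(hops(c, s) for s in spares) for c in used) / len(used)

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def random_mapping():
    return dict(zip(TASKS, random.sample(CORES, len(TASKS))))

# Naive multi-objective search: keep the non-dominated front of sampled mappings.
front = []
for _ in range(500):
    m = random_mapping()
    score = (network_power(m), spare_dispersion(m))
    if not any(dominates(s, score) for s, _ in front):
        front = [(s, mm) for s, mm in front if not dominates(score, s)]
        front.append((score, m))

print(f"{len(front)} non-dominated mappings; objective vectors:")
for s, _ in sorted(front, key=lambda fs: fs[0]):
    print(s)
```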

    Adaptive Computing Systems for Aerospace

    Today's computer systems are growing more and more complex at a pace that requires the development of novel and more effective methodologies to automate their design. Space, in particular, represents a challenging environment: without protection from ionizing and particle radiation, CMOS-based electronics are subject to transient faults, performance degradation, accelerated wear and, ultimately, system failure. Traditional approaches adopted to guarantee reliability and extended lifetime are based on redundancy established at design time. These solutions are expensive and sometimes inefficient, as they increase the complexity and size of a system, exposing it to higher risks of overheating and radiation-induced errors. Moreover, critical systems---e.g., time-constrained ones and those where access is limited---must be able to cope with pivotal situations without relying on human intervention. Hence the emerging interest in computer systems with adaptive capabilities as the most suitable solution for novel high-performance embedded devices for aerospace.
Self-adaptive computing carries unmatched potential for the creation of a new generation of smart, more reliable computers, and it addresses the challenge of designing and programming modern and future computer systems that must meet conflicting goals. Drawing from the fields of artificial intelligence and reconfigurable systems, we aim to develop self-adaptive computer systems for aerospace. Our goal is to improve their efficiency, fault tolerance, and computational capabilities. The first step in this research is the experimental analysis of the most popular multi-objective design-space exploration algorithms for high-level design. These algorithms were collected from the recent literature and include heuristic, evolutionary, and statistical methods. Their comparison provides insights that we use to define guidelines for choosing the most appropriate optimization algorithm given the features of the design space. For the creation of a self-managing optimization framework---enabling the adaptive trade-off of multiple objectives---we leverage the tools of probabilistic graphical models. We introduce a mechanism based on dynamic hidden Markov models that balances the availability and lifetime of multiprocessor systems. This is achieved by estimating the occurrence of permanent faults amid transient faults, and by dynamically migrating the computation onto spare resources when failure occurs. The dynamic nature of the model makes it adjustable to different mission profiles and fault rates. The results show that we are able to extend system lifetimes while keeping availability close to ideal. On account of the stringent timing constraints imposed by aerospace systems, we then investigate the optimization of fault tolerance under real-time requirements. We propose a methodology to improve the reliability of computation in the presence of transient errors when mapping real-time tasks onto a homogeneous multiprocessor system with voltage and frequency scaling capabilities. In this framework, we use probability theory to define a novel trade-off between power consumption and fault tolerance. As we recognize that resilience is a pervasive property of interest (e.g., for the design and analysis of generic complex systems), we adapt a formal definition of it to a probabilistic framework, again derived from hidden Markov models. This allows us to realistically model the stochastic evolution and partial observability of complex real-world environments. Within this framework, we propose an efficient algorithm for the exact computation of the essential inference step required for generic property checking. To demonstrate the flexibility of this approach, we validate it, among other settings, in the context of a self-aware, reconfigurable computing system for aerospace. Finally, we move the scope of our research towards robotics and multi-agent systems, two topics of growing popularity in space exploration. We tackle the problem of connectivity assessment and maintenance in the distributed and self-adaptive context of swarm robotics. We review the limitations of existing solutions and propose a novel methodology to create connected complex geometries for multiple task coverage. Additional contributions in the areas of (i) CubeSat design, (ii) the modelling of space radiation for FPGA fault injection, and (iii) probabilistic timing analysis for real-time systems are summarized in the appendices.
In the author's opinion, this research provides a number of useful stepping stones for the creation of a new generation of computing systems that autonomously---and reliably---perform their tasks for longer periods of time, fostering simpler and cheaper space exploration.
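
The permanent-vs-transient estimation described above reduces, at its core, to HMM filtering. Below is a minimal sketch, assuming a two-state model (healthy with occasional transient upsets, versus permanently faulty) and invented transition probabilities, emission probabilities, and migration threshold; the thesis's dynamic HMMs are richer, but the forward predict-update step is the same basic machinery.

```python
# Minimal sketch of HMM filtering for fault discrimination: estimate whether a
# core has failed permanently (vs. suffering transient upsets) from per-interval
# error observations, and trigger migration once the posterior crosses a
# threshold. All probabilities below are illustrative assumptions.

# Hidden states: 0 = healthy (errors come from transients), 1 = permanently faulty.
P_PERM = 0.01            # assumed chance per interval of a permanent failure
EMIT_ERR = [0.05, 0.95]  # P(observe error | state): transients rare, permanent persistent
TRANS = [[1 - P_PERM, P_PERM],  # healthy -> {healthy, permanent}
         [0.0, 1.0]]            # permanent faults are absorbing

def filter_step(belief, observed_error):
    """One forward (predict + update) step of HMM filtering."""
    # Predict: propagate the belief through the transition model.
    pred = [sum(belief[s] * TRANS[s][t] for s in (0, 1)) for t in (0, 1)]
    # Update: weight each state by the likelihood of the observation.
    lik = [EMIT_ERR[t] if observed_error else 1 - EMIT_ERR[t] for t in (0, 1)]
    post = [pred[t] * lik[t] for t in (0, 1)]
    z = sum(post)
    return [p / z for p in post]

belief = [1.0, 0.0]                   # start by assuming the core is healthy
observations = [0, 1, 0, 1, 1, 1, 1]  # error indicators per monitoring interval
for t, err in enumerate(observations):
    belief = filter_step(belief, err)
    print(f"interval {t}: P(permanent fault) = {belief[1]:.3f}")
    if belief[1] > 0.9:               # assumed migration threshold
        print("-> migrate tasks to a spare core")
        break
```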

    Secure smart contract-enabled control of battery energy storage systems against cyber-attacks

    Battery Energy Storage Systems (BESSs) are an integral part of a sustainable and resilient smart grid, and the security of such critical cyber-physical infrastructure is a major priority for both industry and academia. In this paper, we propose a new distributed smart-contract-based control approach for BESSs to enable collaborative and secure operation among them. We present a comprehensive discussion of how control strategies can be implemented as smart contracts and deployed on a distributed network of BESS nodes in order to operate these storage systems according to secure consensus. To verify the effectiveness of the proposed method, we analyze the vulnerabilities of BESSs controlled according to traditional schemes versus smart-contract-enabled control. Simulation results show that if individual BESSs reach a certain maximum threshold of exploitability, the network of distributed BESSs is more robust to cyber-attacks under smart-contract-defined control. © 2019 Faculty of Engineering, Alexandria University. The publication of this article was funded by the Qatar National Library.
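
    The consensus intuition can be illustrated without any blockchain machinery. The following plain-Python sketch (not the paper's implementation, and not Solidity) mimics a contract rule under which a fleet dispatch setpoint takes effect only when a quorum of nodes propose agreeing values, so a single compromised controller cannot unilaterally command the fleet; the quorum size, tolerance band, and node names are assumptions invented for the example.

```python
# Illustrative quorum-validated dispatch: accept a setpoint only if enough
# independent BESS nodes agree on it. All parameters are assumed values.
from statistics import median

QUORUM = 3          # minimum number of agreeing nodes (assumed)
TOLERANCE_KW = 5.0  # proposals within this band count as agreeing (assumed)

def contract_dispatch(proposals):
    """Mimics the contract logic: accept the median proposal iff a quorum of
    nodes proposed a value within TOLERANCE_KW of it."""
    m = median(proposals.values())
    agreeing = [n for n, p in proposals.items() if abs(p - m) <= TOLERANCE_KW]
    if len(agreeing) >= QUORUM:
        return m, agreeing
    return None, agreeing

# Honest nodes propose ~100 kW discharge; one compromised node tries 500 kW.
proposals = {"bess_1": 100.0, "bess_2": 98.5, "bess_3": 101.2, "bess_4": 500.0}
setpoint, agreeing = contract_dispatch(proposals)
print(f"accepted setpoint: {setpoint} kW, backed by {agreeing}")
```

    Under this rule the outlier proposal is simply outvoted: the accepted setpoint stays near the honest consensus, which is the qualitative robustness property the simulations above compare against traditional single-controller schemes.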

    A model-based approach to System of Systems risk management

    The failure of many System of Systems (SoS) enterprises can be attributed to the inappropriate application of traditional Systems Engineering (SE) processes within the SoS domain, stemming from the mistaken belief that a SoS can be regarded as a single large, or complex, system. SoS Engineering (SoSE) is a sub-discipline of SE; Risk Management and Modelling and Simulation (M&S) are key areas within SoSE, both of which also lie within the traditional SE domain. Risk management of a SoS requires a different approach to that currently taken for individual systems; if risk is managed for each component system, it cannot be assumed that the aggregated effect will be to mitigate risk at the SoS level. A literature review was undertaken examining three themes: (1) SoSE, (2) M&S and (3) risk. The first theme provided insight into the activities comprising SoSE and its differences from traditional SE, with risk management identified as a key activity. The second theme discussed the application of M&S to SoS, providing an output that supported the identification of appropriate techniques and concluding that the inherent complexity of a SoS requires the use of M&S to support SoSE activities. Current risk management approaches were reviewed in the third theme, as well as the management of SoS risk. Although some specific examples of the management of SoS risk were found, no mature, general approach was identified, indicating a gap in current knowledge; however, most of these examples were underpinned by M&S approaches. It was therefore concluded that a general approach to SoS risk management utilising M&S methods would be of benefit. To fill this gap, this research proposed a model-based approach to risk management in which risk identification is supported by a framework combining SoS system-of-interest (SoI) dimensions with holistic risk types, with the resulting risks and contributing factors captured in a causal network. Analysis of the causal network using a model technique selection tool, developed as part of this research, allowed the network to be simplified by replacing groups of elements with appropriate supporting models. The Bayesian Belief Network (BBN) was identified as a suitable method to represent SoS risk. Supporting models run in Monte Carlo simulations generated data from which the risk BBNs could learn, providing a more quantitative approach to SoS risk management, and a method was developed to give context to the BBN risk output through comparison with worst- and best-case risk probabilities. The model-based approach was applied to two very different case studies, Close Air Support mission planning and UK national food security risks in the wheat supply chain, demonstrating its effectiveness and adaptability. The research established that the SoS SoI is essential for effective SoS risk identification and analysis of risk transfer; that effective SoS modelling requires a range of techniques, with suitability determined by the problem context; that responsibility for SoS risk management is related to the overall SoS classification; and that the model-based approach to SoS risk management was effective for both application case studies.
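
    The quantitative core of this approach, supporting models run in Monte Carlo simulation producing data from which a risk BBN learns, can be shown in miniature. The sketch below uses an invented two-node network (Disruption -> SupplyRisk) with made-up rates, learns the conditional probabilities by frequency counting, and reports the marginal risk alongside best- and worst-case probabilities for context, loosely mirroring the method described above rather than reproducing it.

```python
# Hedged sketch: a stand-in "supporting model" is sampled in Monte Carlo
# fashion, a tiny Bayesian network's CPT is learned from the samples by
# counting, and the queried risk is contextualized against best/worst cases.
# The network structure and all rates are invented for illustration.
import random

random.seed(0)

def simulate_once():
    """One Monte Carlo draw from the stand-in supporting model."""
    disruption = random.random() < 0.2    # assumed 20% disruption rate
    p_risk = 0.7 if disruption else 0.05  # assumed effect strength
    return disruption, random.random() < p_risk

samples = [simulate_once() for _ in range(10_000)]

# Learn the CPT P(SupplyRisk | Disruption) by frequency counting.
def cond_prob(cond):
    sub = [risk for d, risk in samples if d == cond]
    return sum(sub) / len(sub)

p_disruption = sum(d for d, _ in samples) / len(samples)
cpt = {True: cond_prob(True), False: cond_prob(False)}

# Marginal risk, reported with best/worst-case bounds for context.
p_risk = p_disruption * cpt[True] + (1 - p_disruption) * cpt[False]
print(f"P(SupplyRisk) = {p_risk:.3f}  (best case {cpt[False]:.3f}, "
      f"worst case {cpt[True]:.3f})")
```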

    3rd EGEE User Forum

    We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of its contents and a summary of the main conclusions from the Forum on that topic. The first chapter gathers all the plenary session keynote addresses, followed by a sequence of chapters covering the application-flavoured sessions. These are followed by chapters with a Computer Science and Grid Technology flavour. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.

    Implementation of Sensors and Artificial Intelligence for Environmental Hazards Assessment in Urban, Agriculture and Forestry Systems

    The implementation of artificial intelligence (AI), together with robotics, sensors, sensor networks, the Internet of Things (IoT), and machine/deep learning modeling, has reached the forefront of research activities, moving towards the goal of increasing efficiency in a multitude of applications and purposes related to environmental sciences. The development and deployment of AI tools require specific considerations, approaches, and methodologies for their effective and accurate application. This Special Issue focuses on applications of AI to environmental systems related to hazard assessment in urban, agricultural, and forestry areas.