
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks. Comment: 46 pages, 22 figures
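
    Of the four ML families this survey covers, reinforcement learning is the one most directly tied to the "interactive decision making" the abstract mentions. As a flavor of what such an algorithm looks like in a wireless setting, here is a minimal sketch of tabular Q-learning for a toy cognitive-radio channel-selection task (effectively a single-state case, i.e., a multi-armed bandit). The channels, reward model, and parameters are invented for illustration and are not taken from the article.

```python
import random

# Toy cognitive-radio channel selection via single-state Q-learning.
# Channels differ in an unknown probability of being free; the learner
# must discover which channel to use. All numbers are illustrative.

P_FREE = [0.2, 0.8, 0.5]          # hidden truth: channel 1 is usually free
N = len(P_FREE)
Q = [0.0] * N                      # one state, N actions (channels)
alpha, epsilon = 0.1, 0.1          # learning rate, exploration rate

for step in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        a = random.randrange(N)
    else:
        a = max(range(N), key=lambda i: Q[i])
    # Reward 1 if the chosen channel is free in this slot, else 0.
    reward = 1.0 if random.random() < P_FREE[a] else 0.0
    # Q-update; with a single state there is no next-state value term.
    Q[a] += alpha * (reward - Q[a])

print([round(q, 2) for q in Q])    # Q-values track P_FREE (roughly)
print("best channel:", max(range(N), key=lambda i: Q[i]))
```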

    Joint optimization for wireless sensor networks in critical infrastructures

    Energy optimization is one of the main goals in wireless sensor network design, since a typical sensor node usually operates on a battery of limited capacity. This thesis addresses the following main problems: first, the joint optimization of energy consumption and delay for conventional wireless sensor networks; second, the joint optimization of information quality and energy consumption in wireless sensor networks for structural health monitoring; and finally, the multi-objective optimization of the latter problem under several constraints.

    For the first problem, we introduce a joint multi-objective optimization formulation for both energy and delay that applies to most sensor nodes in various applications. We then present a Karush-Kuhn-Tucker analysis to derive the optimal solution for each formulation, and we introduce a method for determining the knee of the Pareto front curve, which meets the network designer's interest in focusing on the most practical solutions.

    Sensor node placement optimization plays a significant role in wireless sensor networks, especially in structural health monitoring. In the second problem, we note that existing work optimizes node placement and routing separately (performing routing after carrying out the node placement), an approach that does not guarantee the optimality of the overall solution. We therefore introduce a joint optimization of sensor placement, routing, and flow assignment, solved using mixed-integer programming.

    In the third problem, we revisit the placement problem in wireless sensor networks for structural health monitoring using multi-objective optimization, taking into account constraints not previously considered: the maximum capacity per link and node-disjoint routing. The maximum-capacity constraint is essential for studying data delivery over limited-capacity wireless links, while node-disjoint routing is necessary to achieve load balancing and a longer network lifetime. We present the results for each of these problems and evaluate them.
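
    The knee-detection step lends itself to a compact illustration. Below is a minimal Python sketch of one common way to pick the knee of a two-objective Pareto front: normalize both objectives, then select the point farthest from the chord joining the two extreme solutions. The abstract does not specify the thesis's actual knee criterion, so the max-distance-to-chord rule here is an assumption for illustration.

```python
import numpy as np

def pareto_knee(front):
    """Index of the knee point of a 2-D Pareto front (both objectives minimized).

    Illustrative rule: normalize the objectives, then pick the point with
    the greatest perpendicular distance from the chord joining the two
    extreme solutions. (The thesis's actual criterion is not given here.)
    """
    f = np.asarray(front, dtype=float)
    order = np.argsort(f[:, 0])
    f = f[order]
    # Normalize each objective to [0, 1] so units do not bias the choice.
    g = (f - f.min(axis=0)) / (np.ptp(f, axis=0) + 1e-12)
    a, b = g[0], g[-1]                      # extremes of the front
    chord = (b - a) / np.linalg.norm(b - a)
    # Perpendicular distance of each point from the chord (2-D cross product).
    d = np.abs((g[:, 0] - a[0]) * chord[1] - (g[:, 1] - a[1]) * chord[0])
    return int(order[np.argmax(d)])

# Example: a convex energy/delay trade-off curve.
front = [(1.0, 9.0), (2.0, 4.0), (3.0, 2.5), (5.0, 2.0), (8.0, 1.8)]
print("knee point:", front[pareto_knee(front)])   # -> (3.0, 2.5)
```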

    A Fog Computing Architecture for Disaster Response Networks

    In the aftermath of a disaster, the impacted communication infrastructure is unable to provide first responders with a reliable medium of communication. Delay-tolerant networks that leverage mobility in the area have been proposed as a scalable solution that can be deployed quickly. Such disaster response networks (DRNs) typically have limited capacity due to frequent disconnections, and under-perform when saturated with data. At the same time, a large amount of data is being produced and consumed owing to the recent popularity of smartphones and the cloud computing paradigm. Fog computing brings the cloud computing paradigm into the complex environments in which DRNs operate. The proposed architecture addresses the key challenges of ensuring high situational awareness and energy efficiency when such DRNs are saturated with large amounts of data. Situational awareness is increased by delivering data reliably and at high temporal and spatial resolution. A waypoint placement algorithm places hardware in the disaster-struck area such that the aggregate goodput is maximized. The Raven routing framework allows for risk-averse data delivery by letting the user control the variance of the packet delivery delay. The Pareto frontier between performance and energy consumption is discovered, and the DRN is made to operate at these Pareto-optimal points. The FuzLoc distributed protocol enables mobile self-localization in indoor environments. The architecture has been evaluated in realistic scenarios involving deployments of multiple vehicles and devices.
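
    The risk-averse delivery idea can be made concrete with a small sketch. The abstract does not describe Raven's internals, so the mean-variance route scoring below is an assumption: each candidate path is scored by its expected delivery delay plus a user-chosen penalty on delay variance, and the lowest-scoring path wins.

```python
import statistics

def pick_route(candidates, risk_aversion):
    """Choose a delivery path under a mean-variance criterion.

    candidates    : dict mapping route name -> list of observed delays (s).
    risk_aversion : weight on delay variance; 0 recovers pure
                    expected-delay routing, larger values trade mean
                    delay for predictability.
    (Illustrative stand-in for Raven's actual objective, which the
    abstract does not specify.)
    """
    def score(delays):
        return statistics.mean(delays) + risk_aversion * statistics.variance(delays)
    return min(candidates, key=lambda r: score(candidates[r]))

# A fast-but-erratic route vs. a slower-but-steady one.
routes = {
    "via_drone":  [5.0, 8.0, 6.0, 50.0],    # low mean, high variance
    "via_convoy": [20.0, 22.0, 21.0, 23.0], # higher mean, low variance
}
print(pick_route(routes, risk_aversion=0.0))  # -> via_drone
print(pick_route(routes, risk_aversion=1.0))  # -> via_convoy
```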

    Multi-objective function-based node-disjoint multipath routing for mobile ad hoc networks

    Funding Information: This work was supported by a Korea Environmental Industry & Technology Institute (KEITI) grant funded by the Korea government (Ministry of Environment), Project No. RE202101551, the development of IoT-based technology for collecting and managing big data on environmental hazards and health effects.

    Adaptive Computing Systems for Aerospace

    Today's computer systems are growing more and more complex at a pace that requires the development of novel and more effective methodologies to automate their design. Space, in particular, represents a challenging environment: without protection from ionizing and particle radiation, CMOS-based electronics are subject to transient faults, performance degradation, accelerated wear and, ultimately, system failure. Traditional approaches adopted to guarantee reliability and extended lifetime are based on redundancy that is established at design time. These solutions are expensive and sometimes inefficient, as they increase the complexity and size of a system, exposing it to higher risks of overheating and radiation-induced errors. Moreover, critical systems (e.g., time-constrained ones and those where access is limited) must be able to cope with pivotal situations without relying on human intervention. Hence the emerging interest in computer systems with adaptive capabilities as the most suitable solution for novel high-performance embedded devices for aerospace.

    Self-adaptive computing carries unmatched potential and great promise for the creation of a new generation of smart, more reliable computers, and it addresses the challenge of designing and programming modern and future computer systems that must meet conflicting goals. Drawing from the fields of artificial intelligence and reconfigurable systems, we aim to develop self-adaptive computer systems for aerospace. Our goal is to improve their efficiency, fault tolerance, and computational capabilities.

    The first step in this research is an experimental analysis of the most popular multi-objective design-space exploration algorithms for high-level design. These algorithms were collected from the recent literature and include heuristic, evolutionary, and statistical methods. Their comparison provides insights that we use to define guidelines for choosing the most appropriate optimization algorithms, given the features of the design space.

    For the creation of a self-managing optimization framework, enabling the adaptive trade-off of multiple objectives, we leverage the tools of probabilistic graphical models. We introduce a mechanism based on dynamic hidden Markov models that balances the availability and lifetime of multiprocessor systems. This is achieved by estimating the occurrence of permanent faults amid transient faults, and by dynamically migrating the computation onto spare resources when failure occurs. The dynamic nature of the model makes it adjustable to different mission profiles and fault rates. The results show that we are able to extend system lifetimes while keeping availability close to ideal.

    On account of the stringent timing constraints imposed by aerospace systems, we then investigate the optimization of fault tolerance under real-time requirements. We propose a methodology to improve the reliability of computation in the presence of transient errors when mapping real-time tasks onto a homogeneous multiprocessor system with voltage and frequency scaling capabilities. In this framework, we take advantage of probability theory to define a novel trade-off between power consumption and fault tolerance.

    As we recognize that resilience is a pervasive property of interest (e.g., for the design and analysis of generic complex systems), we adapt a formal definition of it to a probabilistic framework, again derived from hidden Markov models. This allows us to realistically model the stochastic evolution and partial observability of complex real-world environments. Within this framework, we propose an efficient algorithm for the exact computation of the inference step required for generic property checking. To demonstrate the flexibility of this approach, we validate it in the context, among others, of a self-aware, reconfigurable computing system for aerospace.

    Finally, we extend the scope of our research towards robotics and multi-agent systems, topics of growing popularity in space exploration. We tackle the problem of connectivity assessment and maintenance in the distributed and self-adaptive context of swarm robotics. We review the limitations of existing solutions and propose a novel methodology to create connected complex geometries for multiple task coverage. Additional contributions in the areas of (i) CubeSat design, (ii) the modelling of space radiation for FPGA fault injection, and (iii) probabilistic timing analysis for real-time systems are summarized in the appendices.

    In the author's opinion, this research provides a number of useful stepping stones for the creation of a new generation of computing systems that autonomously, and reliably, perform their tasks for longer periods of time, fostering simpler and cheaper space exploration.
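
    The permanent-vs-transient estimation step maps naturally onto standard HMM filtering. Below is a minimal sketch assuming a two-state hidden Markov model (core healthy vs. permanently faulty) whose noisy observations are per-window error flags; the forward recursion tracks the posterior probability that a fault is permanent. The states, probabilities, and threshold are illustrative assumptions, not the thesis's actual model.

```python
# Minimal HMM filtering sketch: decide whether a stream of errors on a
# processor core comes from transient upsets or a permanent fault.
# All probabilities below are illustrative assumptions.

TRANS = {            # P(next state | current state); permanent faults persist
    "healthy": {"healthy": 0.999, "faulty": 0.001},
    "faulty":  {"healthy": 0.0,   "faulty": 1.0},
}
EMIT = {             # P(error observed in a window | state)
    "healthy": {True: 0.05, False: 0.95},   # occasional transient upsets
    "faulty":  {True: 0.90, False: 0.10},   # errors in almost every window
}

def filter_posterior(observations, prior=None):
    """Forward algorithm: P(state | observations so far), per time step."""
    belief = dict(prior or {"healthy": 1.0, "faulty": 0.0})
    history = []
    for err in observations:
        # Predict: propagate the belief through the transition model.
        pred = {s: sum(belief[p] * TRANS[p][s] for p in belief) for s in belief}
        # Update: weight by the observation likelihood, then normalize.
        upd = {s: pred[s] * EMIT[s][err] for s in pred}
        z = sum(upd.values())
        belief = {s: v / z for s, v in upd.items()}
        history.append(belief["faulty"])
    return history

# A few isolated transient errors, then a persistent burst.
obs = [False, True, False, False, True, True, True, True, True]
for t, p in enumerate(filter_posterior(obs)):
    print(f"window {t}: P(permanent fault) = {p:.3f}")
# A migration policy might move tasks off the core once p exceeds, say, 0.9.
```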

    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
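
    To make the MDP framework concrete, here is a minimal value-iteration sketch for a toy WSN energy-management problem: a node chooses between sensing (good reward, drains the battery) and sleeping (no reward, battery recovers). The states, actions, rewards, and transition probabilities are invented for illustration and are not taken from the survey.

```python
# Toy WSN MDP solved by value iteration. States are battery levels,
# actions are "sense" or "sleep". All numbers are illustrative.

STATES = ["low", "high"]
ACTIONS = ["sense", "sleep"]

# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward.
P = {
    "high": {"sense": [("high", 0.5), ("low", 0.5)], "sleep": [("high", 1.0)]},
    "low":  {"sense": [("low", 0.8), ("high", 0.2)],
             "sleep": [("high", 0.6), ("low", 0.4)]},
}
R = {
    "high": {"sense": 4.0, "sleep": 0.0},
    "low":  {"sense": 1.0, "sleep": 0.0},  # sensing on low battery risks outage
}

def value_iteration(gamma=0.9, tol=1e-8):
    V = {s: 0.0 for s in STATES}
    while True:
        # Bellman optimality backup for every state.
        V_new = {
            s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                   for a in ACTIONS)
            for s in STATES
        }
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            break
        V = V_new
    # Greedy policy with respect to the converged value function.
    policy = {
        s: max(ACTIONS,
               key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
        for s in STATES
    }
    return V, policy

V, policy = value_iteration()
print(policy)   # -> sense when the battery is high, sleep when it is low
```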

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements through cooperative control, learning, and adaptation. While both of the component areas, i.e., Robotics and WSN, are very well known and well explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields that are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN works, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
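
    Connectivity assessment, one of the core RWSN problems named above, has a standard spectral formulation: a robot network is connected if and only if the second-smallest eigenvalue of its graph Laplacian (the Fiedler value) is positive. The sketch below checks this for robots under a disk communication model; the positions and radius are made-up inputs for illustration, not from the chapter.

```python
import numpy as np

def fiedler_value(positions, comm_radius):
    """Algebraic connectivity of a robot network.

    positions   : (n, 2) array of robot coordinates.
    comm_radius : two robots share a link when closer than this distance.
    Returns lambda_2 of the graph Laplacian; the network is connected
    iff the returned value is positive.
    """
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    # Adjacency matrix from the disk communication model (no self-loops).
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = ((dist < comm_radius) & ~np.eye(n, dtype=bool)).astype(float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eigvals = np.linalg.eigvalsh(L)         # sorted ascending
    return eigvals[1]

robots = [(0, 0), (1, 0), (2, 0), (4, 0)]   # last robot 2 m past the chain
print(fiedler_value(robots, comm_radius=1.5))  # ~0.0 -> disconnected
print(fiedler_value(robots, comm_radius=2.5))  # > 0  -> connected
```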

    Uncertainty Quantification for Naval Ships and the Optimal Adaptation of Bridges to Climate Change

    Repairing and adapting existing structures and infrastructure is essential for maintaining the functionality of a transportation network and the flow of people, goods, and ideas across a region. However, structures are vulnerable to extreme events, such as hurricanes and floods, and to continuous deterioration due to exposure to corrosive environments and cyclic loading. The occurrence of extreme events may be nonstationary over the service life of the structures, leading to uncertain future loading conditions. Continuous deterioration, due to corrosion or fatigue, changes the capacity of the structure to resist loads over time. Repair and adaptation measures may be applied to a structure in order to improve its capacity to resist loads. However, limited economic resources prohibit the immediate repair and adaptation of all structures, requiring that a systematic methodology be established for prioritizing actions. It is because of this need that the field of life-cycle management has emerged. The research in this dissertation focuses on enhancing this field and the ability of engineers to (1) quantify uncertainty in the life-cycle management problem, (2) assess the performance of structures and develop effective management strategies, and (3) integrate the uncertainties of climate change and future loading conditions into the management of structures.

    Uncertainty quantification typically involves describing the variability in the loads acting on a structure, the capacity of the structure, and the deterioration of the structure over time. In the design phase, uncertainty quantification is based on observing loads in the area (traffic, wind, hydraulic loads, etc.) and testing materials and connections to characterize their properties. In the operational phase, Structural Health Monitoring (SHM) data can be integrated into the uncertainty quantification process. This research specifically enhances the ability to integrate SHM data into the fatigue-life prediction of ship structures and improves uncertainty quantification for naval ships.

    Life-cycle management integrates the quantifiable uncertainties into the performance assessment of a structure. For civil structures, hydraulic hazards like hurricanes, floods, and tsunamis may cause extensive damage, and failure may have major economic, societal, and environmental consequences. This research focuses on enhancing the performance assessment methodologies for evaluating the risk associated with the failure of riverine and coastal bridges once the uncertainties are known. The considerations for multiple failure modes, as well as multiple hazards, included in this research are shown to be essential when determining the risk level of bridges. Furthermore, this work proposes methodologies for determining optimal management strategies that are driven by both performance and cost in order to aid decision makers.

    The final thrust area of this research emanates from the uncertainties associated with anticipated climate change. Natural and anthropogenic changes alter sea level, the intensity of storms, and the intensity of precipitation, leaving riverine and coastal bridges increasingly vulnerable. The uncertainties that govern the future variability in climate are currently reported as unquantifiable. This type of uncertainty is referred to as deep uncertainty and stems from the multiple feasible projections for gas concentrations and the multiple available climate models with which to evaluate them. This research introduces a systematic decision support framework for determining adaptation strategies in the presence of both the deep uncertainties of climate change and the quantifiable uncertainties of structural performance.
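
    The interplay between deteriorating capacity and uncertain loads described above is often illustrated with a time-variant reliability calculation. Below is a minimal Monte Carlo sketch estimating the probability that a deteriorating resistance is exceeded by an annual extreme load at least once over a planning horizon; the normal distributions and the linear corrosion-style degradation rate are illustrative assumptions, not values from the dissertation.

```python
import random

def lifetime_failure_probability(
    years=50, n_samples=20_000,
    r0_mean=100.0, r0_cov=0.10,       # initial resistance
    decay=0.8,                         # resistance lost per year (deterioration)
    load_mean=55.0, load_cov=0.25,     # annual extreme load
):
    """Monte Carlo estimate of P(failure within `years`).

    Failure in year t occurs when the annual extreme load exceeds the
    degraded resistance r0 - decay * t. All numbers are illustrative.
    """
    failures = 0
    for _ in range(n_samples):
        r0 = random.gauss(r0_mean, r0_cov * r0_mean)
        for t in range(years):
            load = random.gauss(load_mean, load_cov * load_mean)
            if load > r0 - decay * t:
                failures += 1
                break                  # first failure ends this lifetime
    return failures / n_samples

print(f"P(failure in 50 years) ~ {lifetime_failure_probability():.3f}")
```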