
    Evolving Networks and Social Network Analysis Methods and Techniques

    Evolving networks are, by definition, networks that change as a function of time. They are a natural extension of network science, since almost all real-world networks evolve, either by adding or removing nodes or links over time: elementary actor-level measures such as network centrality change as a function of time, the popularity and influence of individuals grow or fade depending on ongoing processes, and events occur in networks during time intervals. Other problems, such as the computation of network-level statistics, link prediction, community detection, and visualization, gain additional research importance when applied to dynamic online social networks (OSNs). Because of the temporal dimension, the rapid growth of users, the velocity of change, and the amount of data these OSNs generate, methods and techniques that are effective and efficient on small static networks must now scale and handle the temporal dimension, including in streaming settings. This chapter reviews the state of the art in selected aspects of evolving social networks and presents open research challenges related to OSNs. The challenges suggest that significant further research is required on evolving social networks, i.e., existing methods, techniques, and algorithms must be rethought and redesigned as incremental and dynamic versions that allow the efficient analysis of evolving networks.
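
    As a minimal sketch of the incremental reformulation the chapter calls for, degree centrality can be maintained over an edge stream by updating only the two endpoints of each arriving or departing edge, instead of recomputing the measure on every snapshot. The Python class below is an illustrative assumption, not a method taken from the chapter:

        class StreamingDegreeCentrality:
            """Degree centrality maintained incrementally over an edge stream."""

            def __init__(self):
                self.degree = {}  # node -> current degree

            def add_edge(self, u, v):
                # Only the two endpoints change; no snapshot-wide recomputation.
                for node in (u, v):
                    self.degree[node] = self.degree.get(node, 0) + 1

            def remove_edge(self, u, v):
                for node in (u, v):
                    self.degree[node] = max(0, self.degree.get(node, 0) - 1)

            def centrality(self, node):
                # Normalised as in the static definition, for the current instant.
                n = len(self.degree)
                return self.degree.get(node, 0) / (n - 1) if n > 1 else 0.0

        stream = StreamingDegreeCentrality()
        stream.add_edge("a", "b")
        stream.add_edge("a", "c")
        print(stream.centrality("a"))  # 1.0: "a" currently touches every other node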

    Structural engineering of evolving complex dynamical networks

    Networks are ubiquitous in nature, and many natural and man-made systems can be modelled as networked systems. Complex networks, systems comprising a number of nodes connected through edges, have frequently been used to model large-scale systems from disciplines such as biology, ecology, and engineering. Dynamical systems interacting through a network may exhibit collective behaviours such as synchronisation, consensus, opinion formation, flocking, and unusual phase transitions. The evolution of such collective behaviours depends strongly on the structure of the interaction network. Optimisation of network topology to improve collective behaviours and network robustness can be achieved by intelligently modifying the network structure; here, this is referred to as "Engineering of the Network". Although coupled dynamical systems can develop spontaneous synchronous patterns if their coupling strength lies in an appropriate range, in some applications one needs to control a fraction of nodes, known as driver nodes, in order to facilitate synchrony. This thesis addresses the problem of identifying the set of best drivers, leading to the best pinning control performance. The eigen-ratio of the augmented Laplacian matrix, that is, the largest eigenvalue divided by the second smallest one, is chosen as the controllability metric. The approach introduced in this thesis obtains the set of optimal drivers from a sensitivity analysis of the eigen-ratio, which requires only a single computation of the eigenvector associated with the largest eigenvalue and is thus applicable to large-scale networks. This leads to a new "controllability centrality" metric for each subset of nodes. Simulation results show the effectiveness of the proposed metric in correctly predicting the most important driver(s).
    Interactions in complex networks may also facilitate the propagation of undesired effects, such as node or edge failure, which can crucially affect the performance of collective behaviours. In order to study the effect of node failure on network synchronisation, an analytical metric is proposed that measures the effect of a node removal on any desired eigenvalue of the Laplacian matrix. Using this metric, which is based on the local multiplicity of each eigenvalue at each node, one can approximate the impact of any node removal on the spectrum of a graph. The metric is computationally efficient, as it needs only a single eigen-decomposition of the Laplacian matrix. It also provides a reliable approximation of the "Laplacian energy" of a network. Simulation results verify the accuracy of this metric in networks with different topologies.
    This thesis also considers formation control as an application of network synchronisation and studies the "rigidity maintenance" problem, one of the major challenges in this field: preserving the rigidity of the sensing graph of a formation during motion, taking into consideration constraints such as line-of-sight requirements, sensing ranges, and power limitations. By introducing a "Lattice of Configurations" for each node, a distributed rigidity maintenance algorithm is proposed to preserve the rigidity of the sensing network when failure of a sensing link would result in loss of rigidity. The proposed algorithm recovers rigidity by activating, almost always, the minimum number of new sensing links, and it accounts for the real-time constraints of practical formations. A sufficient condition for this problem is proved and tested via numerical simulations. Based on the above results, a number of other areas and applications of network dynamics are also studied and expounded upon in this thesis.
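
    As a rough sketch of the controllability metric described above, the eigen-ratio of an augmented Laplacian can be computed with standard dense linear algebra. Modelling the pinned (driver) nodes by adding a gain to the corresponding diagonal entries is an assumption made here for illustration:

        import numpy as np

        def laplacian(adjacency):
            """Graph Laplacian L = D - A for a symmetric adjacency matrix."""
            return np.diag(adjacency.sum(axis=1)) - adjacency

        def eigen_ratio(adjacency, drivers, gain=1.0):
            """Eigen-ratio of the augmented Laplacian for a candidate driver set.

            Illustrative sketch: the largest eigenvalue divided by the second
            smallest, with pinning modelled as a diagonal gain on driver nodes.
            """
            L = laplacian(adjacency).astype(float)
            for d in drivers:
                L[d, d] += gain
            eig = np.sort(np.linalg.eigvalsh(L))
            return eig[-1] / eig[1]

        # Ring of 5 nodes: score a single-driver choice (a smaller ratio
        # typically indicates better pinning performance).
        A = np.zeros((5, 5))
        for i in range(5):
            A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
        print(eigen_ratio(A, drivers=[0]))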

    Mapping the Landscape of Climate Services

    Climate services are technology-intensive, science-based, and user-tailored tools providing timely climate information to a wide set of users. They accelerate innovation while contributing to societal adaptation. Research has explored the advancement of climate services in multiple fields, producing a wealth of interdisciplinary knowledge ranging from climatology to the social sciences. The aim of this paper is to map the global landscape of research on climate services and to identify patterns at the individual, affiliation, and country level, together with the structural properties of each community. We use a sample of 358 records published between 1974 and 2018 and analyze them quantitatively. We provide insights into the main characteristics of the climate services community through bibliometrics and complement these findings with network science. We compute the centrality of each actor from a principal component analysis of 42 different measures. By exploring the structural properties of the networks of individuals, institutions, and countries, we derive implications about the most central agents. Furthermore, we detect brokers in the network, capable of facilitating the information flow and increasing the cohesion of the community. Finally, we analyze the abstracts of the sample via content analysis. We find a progressive shift towards climate adaptation and user-centric visions. Agriculture and energy are the most frequently mentioned sectors. Anglophone countries and institutions are quantitatively dominant, and, building on established partnerships, they are also important in connecting the different disciplines of the network of scholars. We find that the nodes facilitating the diffusion of information (the brokers) are not necessarily the most central, but they have a high degree of interdisciplinarity, which facilitates interactions between different communities. Social media abstract: #WhoisWho in #climateservices? A comprehensive map of research in #Europe and beyond.
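
    A minimal sketch of the composite-centrality step, assuming networkx and scikit-learn, and with four common measures standing in for the 42 used in the paper:

        import networkx as nx
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        def pca_centrality(G):
            """Composite centrality: first principal component of several measures."""
            nodes = list(G)
            measures = [
                nx.degree_centrality(G),
                nx.closeness_centrality(G),
                nx.betweenness_centrality(G),
                nx.eigenvector_centrality(G, max_iter=1000),
            ]
            X = np.array([[m[n] for m in measures] for n in nodes])
            comp = PCA(n_components=1).fit_transform(
                StandardScaler().fit_transform(X)).ravel()
            # A principal component's sign is arbitrary; orient it so larger
            # scores correlate positively with degree ("more central").
            if np.corrcoef(comp, X[:, 0])[0, 1] < 0:
                comp = -comp
            return dict(zip(nodes, comp))

        scores = pca_centrality(nx.karate_club_graph())
        print(max(scores, key=scores.get))  # most central actor under the composite score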

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome this complexity by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's Game of Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
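
    One Life generation can be expressed as a streaming mapper/reducer pair in the Hadoop (Elastic MapReduce) Streaming style. The unoptimized sketch below, including its command-line dispatch and one-live-cell-per-line format, is an assumption for illustration and does not reproduce the authors' strip-partitioned algorithms:

        #!/usr/bin/env python3
        import sys

        def mapper():
            # Each live cell marks itself and casts one vote per neighbour.
            for line in sys.stdin:
                x, y = map(int, line.split())
                print(f"{x},{y}\tLIVE")
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        if dx or dy:
                            print(f"{x + dx},{y + dy}\t1")

        def reducer():
            # Streaming delivers keys sorted; tally the votes per cell and
            # apply Conway's rules: survive on 2-3 votes, be born on 3.
            current, votes, alive = None, 0, False
            def flush():
                if current and (votes == 3 or (alive and votes == 2)):
                    print(current.replace(",", " "))
            for line in sys.stdin:
                key, value = line.rstrip("\n").split("\t")
                if key != current:
                    flush()
                    current, votes, alive = key, 0, False
                if value == "LIVE":
                    alive = True
                else:
                    votes += int(value)
            flush()

        if __name__ == "__main__":
            (mapper if sys.argv[1:] == ["map"] else reducer)()

    One streaming job per generation chains these two scripts; strip partitioning would additionally group contiguous rows so that each mapper handles a whole strip of the lattice at once.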

    MAS-based Distributed Coordinated Control and Optimization in Microgrid and Microgrid Clusters: A Comprehensive Overview


    Context Awareness in Swarm Systems

    Recent swarms of Uncrewed Systems (UxS) require substantial human input to support their operation. The limited 'intelligence' onboard these platforms constrains their potential value and increases their overall cost. Artificial Intelligence (AI) solutions are needed to allow a single human to guide swarms of larger sizes. Shepherding is a bio-inspired swarm guidance approach in which one or a few sheepdogs guide a larger number of sheep. By designing AI agents playing the role of sheepdogs, humans can guide the swarm by using these AI agents in the same manner that a farmer uses biological sheepdogs to muster sheep. A context-aware AI-sheepdog offers human operators a smarter command and control system. It overcomes the current limiting assumption of swarm homogeneity in the literature, manages heterogeneous swarms, and allows the AI agents to team more effectively with human operators. This thesis aims to demonstrate the use of an ontology-guided architecture to deliver enhanced contextual awareness for swarm control agents. The proposed architecture increases the contextual awareness of AI-sheepdogs to improve swarm guidance and control, enabling individual and collective UxS to characterise and respond to ambiguous swarm behavioural patterns. The architecture, associated methods, and algorithms advance the swarm literature by allowing improved contextual awareness to guide heterogeneous swarms. Metrics and methods are developed to identify the sources of influence in the swarm, to recognise and discriminate the behavioural traits of heterogeneous influencing agents, and to design AI algorithms that recognise activities and behaviours. The proposed contributions will enable the next generation of UxS with higher levels of autonomy, generating more effective Human-Swarm Teams (HSTs).
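
    To make the sheepdog analogy concrete, the sketch below implements the widely used Strömbom-style collect/drive switching rule; it is a generic illustration with assumed geometry and threshold, not the architecture or algorithms developed in this thesis:

        import numpy as np

        def sheepdog_target(sheep, goal, r_collect=2.0):
            """Pick the dog's steering point (illustrative collect/drive rule).

            If a sheep strays too far from the flock centre, collect it by
            standing behind it relative to the centre; otherwise drive the
            flock by standing behind the centre relative to the goal.
            """
            centre = sheep.mean(axis=0)
            dists = np.linalg.norm(sheep - centre, axis=1)
            stray = int(np.argmax(dists))
            if dists[stray] > r_collect:              # collect phase
                anchor, direction = sheep[stray], sheep[stray] - centre
            else:                                     # drive phase
                anchor, direction = centre, centre - np.asarray(goal, float)
            return anchor + direction / np.linalg.norm(direction)

        flock = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 5.0]])
        print(sheepdog_target(flock, goal=(10.0, 0.0)))  # collects the stray at (5, 5)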

    Adaptive Computing Systems for Aerospace

    Today's computer systems are growing more and more complex at a pace that requires the development of novel and more effective methodologies to automate their design. Space, in particular, represents a challenging environment: without protection from ionizing and particle radiation, CMOS-based electronics are subject to transient faults, performance degradation, accelerated wear and, ultimately, system failure. Traditional approaches adopted to guarantee reliability and extended lifetime are based on redundancy that is established at design time. These solutions are expensive and sometimes inefficient, as they increase the complexity and size of a system, exposing it to higher risks of overheating and radiation-induced errors. Moreover, critical systems (e.g., time-constrained ones and those where access is limited) must be able to cope with pivotal situations without relying on human intervention. Hence the emerging interest in computer systems with adaptive capabilities as the most suitable solution for novel high-performance embedded devices for aerospace.
Self-adaptive computing carries unmatched potential and great promise for the creation of a new generation of smart, more reliable computers, and it addresses the challenge of designing and programming modern and future computer systems that must meet conflicting goals. Drawing from the fields of artificial intelligence and reconfigurable systems, we aim to develop self-adaptive computer systems for aerospace. Our goal is to improve their efficiency, fault tolerance, and computational capabilities. The first step in this research is an experimental analysis of the most popular multi-objective design-space exploration algorithms for high-level design. These algorithms were collected from the recent literature and include heuristic, evolutionary, and statistical methods. Their comparison provides insights that we use to define guidelines for choosing the most appropriate optimization algorithms, given the features of the design space. For the creation of a self-managing optimization framework, one enabling the adaptive trade-off of multiple objectives, we leverage the tools of probabilistic graphical models. We introduce a mechanism based on dynamic hidden Markov models that balances the availability and lifetime of multiprocessor systems. This is achieved by estimating the occurrence of permanent faults amid transient faults, and by dynamically migrating the computation to excess resources when failure occurs. The dynamic nature of the model makes it adjustable to different mission profiles and fault rates. The results show that we are able to lead systems to extended lifetimes while keeping their availability close to ideal. On account of the stringent timing constraints imposed by aerospace systems, we then investigate the optimization of fault tolerance under real-time requirements. We propose a methodology to improve the reliability of computation in the presence of transient errors when mapping real-time tasks onto a homogeneous multiprocessor system with voltage and frequency scaling capabilities. In this framework, we take advantage of probability theory to define a novel trade-off between power consumption and fault tolerance. As we recognize that resilience is a pervasive property of interest (e.g., for the design and analysis of generic complex systems), we adapt a formal definition of it to a probabilistic framework, again derived from hidden Markov models. This allows us to realistically model the stochastic evolution and partial observability of complex real-world environments. Within this framework, we propose an efficient algorithm for the exact computation of the essential inference step required for generic property checking. To demonstrate the flexibility of this approach, we validate it in several contexts, including a self-aware, reconfigurable computing system for aerospace. Finally, we move the scope of our research towards robotics and multi-agent systems, topics of thriving popularity in space exploration. We tackle the problem of connectivity assessment and maintenance in the distributed and self-adaptive context of swarm robotics. We review the limitations of existing solutions and propose a novel methodology to create connected complex geometries for multiple task coverage. Additional contributions in the areas of (i) CubeSat design, (ii) the modelling of space radiation for FPGA fault injection, and (iii) probabilistic timing analysis for real-time systems are summarized in the appendices.
In the author's opinion, this research provides a number of useful stepping stones toward the creation of a new generation of computing systems that autonomously, and reliably, perform their tasks for longer periods of time, fostering simpler and cheaper space exploration.
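
    The hidden-Markov-model inference at the heart of the availability/lifetime mechanism can be sketched with a simple forward filter. The two-state model and all probabilities below are illustrative assumptions, not parameters from the thesis:

        import numpy as np

        # States: 0 = healthy (errors are transient), 1 = permanently faulty.
        TRANSITION = np.array([[0.999, 0.001],   # healthy rarely turns permanent
                               [0.0,   1.0]])    # permanent faults do not heal
        EMISSION = np.array([[0.98, 0.02],       # healthy: errors are rare
                             [0.30, 0.70]])      # faulty: errors are frequent
        # EMISSION[s, o]: probability of observing o (0 = ok, 1 = error) in state s.

        def filter_fault(observations, prior=(1.0, 0.0)):
            """Forward-filtered probability of the 'permanent fault' state."""
            belief = np.asarray(prior, dtype=float)
            for obs in observations:
                belief = (belief @ TRANSITION) * EMISSION[:, obs]
                belief /= belief.sum()            # renormalise to a distribution
                yield belief[1]

        # A burst of errors drives the posterior towards "permanent fault",
        # which would trigger migration of the computation to excess resources.
        print(list(filter_fault([0, 0, 1, 1, 1, 1])))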

    Mind the Gap: Developments in Autonomous Driving Research and the Sustainability Challenge

    Scientific knowledge on autonomous-driving technology is expanding at a faster-than-ever pace. As a result, the likelihood of incurring information overload is particularly notable for researchers, who can struggle to overcome the gap between information processing requirements and information processing capacity. We address this issue by adopting a multi-granulation approach to latent knowledge discovery and synthesis in large-scale research domains. The proposed methodology combines citation-based community detection methods and topic modeling techniques to give a concise but comprehensive overview of how the autonomous vehicle (AV) research field is conceptually structured. Thirteen core thematic areas are extracted and presented by mining the large, data-rich environments resulting from 50 years of AV research. The analysis demonstrates that this research field is strongly oriented towards examining the technological developments needed to enable the widespread rollout of AVs, whereas it largely overlooks the wide-ranging sustainability implications of this sociotechnical transition. On account of these findings, we call for broader engagement of AV researchers with the sustainability concept and invite them to increase their commitment to conducting systematic investigations into the sustainability of AV deployment. Sustainability research is urgently required to produce an evidence-based understanding of the new sociotechnical arrangements needed to ensure that the systemic technological change introduced by AV-based transport systems can fulfill societal functions while meeting the urgent need for more sustainable transport solutions.
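
    The two-step pipeline described above can be sketched compactly, assuming networkx's greedy modularity communities for the citation graph and a one-topic LDA per community as a labelling device; this is an illustrative simplification, not the paper's exact method:

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        def thematic_areas(citations, abstracts, n_words=5):
            """Cluster a citation graph, then label each cluster with top LDA words.

            citations: iterable of (citing_id, cited_id) pairs.
            abstracts: dict mapping paper id -> abstract text.
            """
            G = nx.Graph(citations)  # undirected view of the citation links
            labels = {}
            for i, community in enumerate(greedy_modularity_communities(G)):
                texts = [abstracts[p] for p in community if p in abstracts]
                if len(texts) < 2:
                    continue  # too little text to model a theme
                vec = CountVectorizer(stop_words="english")
                X = vec.fit_transform(texts)
                lda = LatentDirichletAllocation(n_components=1, random_state=0).fit(X)
                words = vec.get_feature_names_out()
                top = lda.components_[0].argsort()[::-1][:n_words]
                labels[i] = [words[j] for j in top]
            return labels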
