
    Design and Performance Analysis of Genetic Algorithms for Topology Control Problems

    In this dissertation, we present a bio-inspired decentralized topology control mechanism, called the force-based genetic algorithm (FGA), in which a genetic algorithm (GA) is run by each autonomous mobile node to achieve a uniform spread of mobile nodes and to provide a fully connected network over an unknown area. We present a formal analysis of FGA in terms of convergence speed, uniformity of area coverage, and Lyapunov stability. This dissertation emphasizes the use of mobile nodes to achieve a uniform distribution over an unknown terrain without a priori information or a central control unit. Instead, each mobile node running our FGA makes its own movement direction and speed decisions based on local neighborhood information, such as obstacles and the number of neighbors, without a centralized control unit or global knowledge. We have implemented simulation software in Java and developed four different testbeds to study the effectiveness of different GA-based topology control frameworks with respect to parameters including node density, speed, and the number of generations that the GAs run. The stochastic behavior of FGA, like that of all GA-based approaches, makes its convergence speed difficult to analyze. We built metrically transitive homogeneous and inhomogeneous Markov chain models to analyze the convergence of our FGA with respect to the communication ranges of mobile nodes and the total number of nodes in the system. The Dobrushin contraction coefficient of ergodicity is used to measure convergence speed for the homogeneous and inhomogeneous Markov chain models of our FGA. Furthermore, this convergence analysis helps us to choose near-optimal values for the communication range, the number of mobile nodes, and the mean node degree before sending autonomous mobile nodes on any mission. Our analytical and experimental results show that our FGA delivers promising results for uniform mobile node distribution over unknown terrains. Since our FGA adapts rapidly to the local environment and does not require global network knowledge, it can be used as a real-time topology controller for commercial and military applications.
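
    As an illustration of the convergence measure used above, the Dobrushin contraction coefficient of ergodicity can be computed directly from a transition matrix; a value below 1 gives a geometric bound on how fast the chain's distribution converges. The following is a minimal Python sketch with an invented toy matrix, not one of the dissertation's FGA models.

        import numpy as np

        def dobrushin_coefficient(P):
            # delta(P) = 0.5 * max_{i,j} sum_k |P[i,k] - P[j,k]|
            # delta(P) < 1 implies geometric convergence: the distance
            # between any two distributions shrinks by delta per step.
            n = P.shape[0]
            return 0.5 * max(np.abs(P[i] - P[j]).sum()
                             for i in range(n) for j in range(n))

        # Hypothetical 3-state transition matrix (rows sum to 1).
        P = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.4, 0.5]])
        print(dobrushin_coefficient(P))  # 0.5 here: fast mixing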

    Swarm SLAM: Challenges and Perspectives

    A robot swarm is a decentralized system characterized by locality of sensing and communication, self-organization, and redundancy. These characteristics allow robot swarms to achieve scalability, flexibility, and fault tolerance, properties that are especially valuable in the context of simultaneous localization and mapping (SLAM), specifically in unknown environments that evolve over time. So far, research in SLAM has mainly focused on single-robot and centralized multi-robot systems, i.e., non-swarm systems. While these systems can produce accurate maps, they are typically not scalable, cannot easily adapt to unexpected changes in the environment, and are prone to failure in hostile environments. Swarm SLAM is a promising approach to SLAM as it could leverage the decentralized nature of a robot swarm and achieve scalable, flexible, and fault-tolerant exploration and mapping. However, at the moment of writing, swarm SLAM is a rather novel idea and the field lacks definitions, frameworks, and results. In this work, we present the concept of swarm SLAM and its constraints, from both a technical and an economic point of view. In particular, we highlight the main challenges of swarm SLAM for gathering, sharing, and retrieving information. We also discuss the strengths and weaknesses of this approach against traditional multi-robot SLAM. We believe that swarm SLAM will be particularly useful to produce abstract maps, such as topological or simple semantic maps, and to operate under time or cost constraints.

    Towards Reliable Robotics: from Navigation to Coordination

    Autonomous robots and multi-robot systems are of growing interest to industry and academia. Many real-world applications, such as assistive robotics, inventory management, and autonomous driving, require reliable navigation and coordination algorithms that can be deployed in partially unknown, dynamic environments. The ability to adapt is a key feature for the widespread use and societal integration of multi-robot systems. To achieve this adaptation ability, robots must implement inherently robust behaviors and must be sufficiently fast to re-plan their actions when their environment changes. This dissertation deals with the problem of reliably deploying a group of robots in a dynamic, unknown environment, and provides two key contributions: a mechanism for robots to plan and re-plan their motion near-optimally up to 200 times per second, and a framework to verify the robustness of multi-robot cooperative behaviors.
    For the first contribution, observing how some animals navigate using the Earth's magnetic field, we note that this is possible because the magnetic field has no local maxima, so animals can follow its gradient. This means that a robot can navigate any kind of environment by propagating a known virtual magnetic field and following its gradient. However, solving Maxwell's equations, which govern the physics of magnetic fields, is complex and demands computationally costly numerical simulations. To overcome this problem, we propose a deep neural network as an approximator for Maxwell's equations, trained exclusively on high-quality numerical simulations. We model the environment as a conductivity map with its maximum at the goal location and zero at obstacles. After computing the virtual field propagation, a robot can follow the virtual magnetic gradient to reach the goal along a near-optimal path.
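
    The property exploited above, a field with no spurious local extrema whose gradient leads to the goal, can be illustrated with a harmonic potential field, a classical stand-in for the virtual magnetic field. The Python sketch below relaxes Laplace's equation on a toy grid (goal pinned at 1, obstacles at 0) and follows the gradient greedily; it is a simplified analogue under assumed grid and obstacle choices, not the authors' Maxwell solver or neural approximator.

        import numpy as np

        H, W = 40, 60
        phi = np.zeros((H, W))                 # virtual potential field
        obstacle = np.zeros((H, W), dtype=bool)
        obstacle[10:30, 25] = True             # a hypothetical wall
        goal = (20, 50)

        # Jacobi relaxation of Laplace's equation; harmonic fields have
        # no interior local extrema, so gradient ascent cannot get stuck
        # anywhere except at the goal itself.
        for _ in range(5000):
            phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                      + phi[1:-1, :-2] + phi[1:-1, 2:])
            phi[goal] = 1.0                    # goal cell pinned high
            phi[obstacle] = 0.0                # obstacles pinned to zero

        def step(pos):
            # Greedy gradient ascent: move to the best 8-neighbor.
            r, c = pos
            nbrs = [(r + dr, c + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                    and 0 <= r + dr < H and 0 <= c + dc < W]
            return max(nbrs, key=lambda p: phi[p])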

    Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities

    Robotics and Artificial Intelligence (AI) have been inextricably intertwined since their inception. Today, AI-Robotics systems have become an integral part of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These systems are built upon three fundamental architectural elements: perception, navigation and planning, and control. However, while the integration of AI-Robotics systems has enhanced the quality of our lives, it has also presented a serious problem: these systems are vulnerable to security attacks. The physical components, algorithms, and data that make up AI-Robotics systems can be exploited by malicious actors, potentially leading to dire consequences. Motivated by the need to address these security concerns, this paper presents a comprehensive survey and taxonomy across three dimensions: attack surfaces, ethical and legal concerns, and Human-Robot Interaction (HRI) security. Our goal is to provide users, developers, and other stakeholders with a holistic understanding of these areas to enhance overall AI-Robotics system security. We begin by surveying potential attack surfaces and providing mitigating defensive strategies. We then delve into ethical issues, such as dependency and psychological impact, as well as legal concerns regarding accountability for these systems. In addition, emerging trends such as HRI are discussed, considering privacy, integrity, safety, trustworthiness, and explainability concerns. Finally, we present our vision for future research directions in this dynamic and promising field.

    AutoFac: The Perpetual Robot Machine

    Robotics currently lacks fully autonomous capabilities, especially where task knowledge is incomplete and optimal robotic solutions cannot be pre-engineered. The intersection of evolutionary robotics, artificial life, and embodied artificial intelligence presents a promising paradigm for generating multi-task problem-solvers suitable for adapting over extended periods in unexplored, remote, and hazardous environments. To address the automation of evolving robotic systems, we propose fully autonomous, embodied artificial-life factories and laboratories, situated in various environments as multi-task problem-solvers. Such integrated factories and laboratories would be adaptive solution designers, producing fit-for-purpose physical robots through accelerated artificial evolution that experiments to continually discover new tasks. Such tasks would be stepping-stones towards accomplishing given mission objectives over extended periods (days to decades). Rather than being purely speculative, the prerequisite technologies to realize such factories have been experimentally demonstrated. Vast scientific and enterprise opportunities await in applications such as asteroid mining, terraforming, and space and deep-sea exploration, though no suitable solution currently exists. The proposed embodied artificial-life factories and laboratories, termed AutoFac, use robot production equipment run by artificial-evolution controllers to collect and synthesize environmental information from robotic sensory systems. Such information is merged with current needs and mission objectives to create new robot embodiments and task definitions that are environmentally adapted and balance task-oriented behavior with exploration. AutoFac is thus generalist (deployable in many environments) but continually produces specialist solutions within such environments: a perpetual robot machine.

    Data-Driven Architecture to Increase Resilience In Multi-Agent Coordinated Missions

    The rise in the use of Multi-Agent Systems (MASs) in unpredictable and changing environments has created the need for intelligent algorithms to increase their autonomy, safety, and performance in the event of disturbances and threats. MASs are attractive for their flexibility, which also makes them prone to threats that may result from hardware failures (actuators, sensors, onboard computer, power source) and abnormal operational conditions (weather, GPS-denied locations, cyber-attacks). This dissertation presents research on a bio-inspired approach for resilience augmentation in MASs in the presence of disturbances and threats such as communication-link and stealthy zero-dynamics attacks. An adaptive bio-inspired architecture is developed for distributed consensus algorithms to increase fault tolerance in a network of multiple high-order nonlinear systems under directed fixed topologies. By analogy with natural organisms' ability to recognize and remember specific pathogens and generate immunity, the immunity-based architecture consists of a Distributed Model-Reference Adaptive Control (DMRAC) with an Artificial Immune System (AIS) adaptation law integrated within a consensus protocol. Feedback linearization is used to transform the high-order nonlinear model into four decoupled linear subsystems. A stability proof of the adaptation law is conducted using Lyapunov methods and Jordan decomposition. The DMRAC is proven to be stable in the presence of external time-varying bounded disturbances, and the tracking-error trajectories are shown to be bounded. The effectiveness of the proposed architecture is examined through numerical simulations. The proposed controller successfully ensures that consensus is achieved among all agents while the adaptive law simultaneously rejects disturbances in each agent and its neighbors. The architecture also includes a health-management system to detect faulty agents within the global network. Further numerical simulations show that the Global Health Monitoring (GHM) effectively detects faults within the network.
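
    For context, the consensus protocol underlying such architectures can be as simple as each agent nudging its state toward its neighbors' states. Below is a minimal Python sketch of first-order discrete-time consensus on a hypothetical four-agent graph; the dissertation's DMRAC/AIS adaptation layer and nonlinear dynamics are not modeled here.

        import numpy as np

        # States of four agents (e.g., a quantity to agree on).
        x = np.array([1.0, 4.0, -2.0, 7.0])

        # Adjacency matrix of the communication graph: A[i, j] = 1
        # means agent i receives agent j's state (invented topology).
        A = np.array([[0., 1., 0., 1.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [1., 0., 1., 0.]])

        eps = 0.2  # step size; keep below 1 / (max in-degree)
        for _ in range(100):
            # Each agent moves toward its neighbors:
            # x_i <- x_i + eps * sum_j A[i, j] * (x_j - x_i)
            x = x + eps * (A @ x - A.sum(axis=1) * x)

        print(x)  # all entries converge to the common consensus value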

    Design of Environment Aware Planning Heuristics for Complex Navigation Objectives

    A heuristic is a simplified approximation that helps guide a planner in deducing the best way to move forward. Heuristics are valued in many modern AI algorithms and decision-making architectures for their ability to drastically reduce computation time. In robotics in particular, path-planning heuristics are widely leveraged to aid navigation and exploration. As the robotic platform explores and navigates, information about the world can and should be used to augment and update the heuristic that guides solutions. Complex heuristics that account for environmental factors, robot capabilities, and desired actions provide optimal results with little wasted exploration, but are computationally expensive. This thesis presents research into simplifying heuristics while maintaining the performance improvements of complicated heuristics. The research is validated on two complex robotic tasks: stealth planning and energy-efficient planning. The stealth heuristic was created to inform a planner and allow a ground robot to navigate unknown environments less visibly. Given the highly uncertain nature of the world, where unknown observers may exist, this heuristic was instrumental in enabling the first high-uncertainty stealth planner. Heuristic guidance is further explored for energy-efficient planning, where a machine-learning approach is used to generate the heuristic measure. This thesis demonstrates effective learned heuristics that reduce convergence time while accounting for the complexities of the environment; a 60% reduction in the compute time required for planning was found.
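
    To make the role of a planning heuristic concrete, here is a minimal, generic A* sketch in Python; the `heuristic` callback is where learned or environment-aware guidance of the kind described above would plug in. This is a textbook illustration, not the thesis's planner.

        import heapq
        from itertools import count

        def astar(start, goal, neighbors, cost, heuristic):
            # Generic A*: heuristic(n, goal) steers node expansion.
            # An admissible (never overestimating) heuristic keeps the
            # result optimal; inflating it trades optimality for speed.
            tie = count()                      # break priority ties safely
            frontier = [(heuristic(start, goal), next(tie), 0.0, start, None)]
            parent, g = {}, {start: 0.0}
            while frontier:
                _, _, gc, node, prev = heapq.heappop(frontier)
                if node in parent:
                    continue                   # already expanded
                parent[node] = prev
                if node == goal:               # reconstruct the path
                    path = [node]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                for nb in neighbors(node):
                    ng = gc + cost(node, nb)
                    if ng < g.get(nb, float("inf")):
                        g[nb] = ng
                        heapq.heappush(frontier,
                                       (ng + heuristic(nb, goal),
                                        next(tie), ng, nb, node))
            return None                        # goal unreachable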

    Ethical Control of Unmanned Systems: lifesaving/lethal scenarios for naval operations

    Prepared for: Raytheon Missiles & Defense (RMD) under NCRADA-NPS-19-0227. Approved for public release; distribution is unlimited.
    This research in Ethical Control of Unmanned Systems applies precepts of Network Optional Warfare (NOW) to develop a three-step Mission Execution Ontology (MEO) methodology for validating, simulating, and implementing mission orders for unmanned systems. First, mission orders are represented in ontologies that are understandable by humans and readable by machines. Next, the MEO is validated and tested for logical coherence using Semantic Web standards. The validated MEO is then refined for implementation in simulation and visualization. This process is iterated until the MEO is ready for implementation. The methodology is applied to four naval scenarios, in order of the increasing challenges that the operational environment and the adversary impose on the human-machine team. The extent of the challenge to Ethical Control in each scenario is used to refine the MEO for the unmanned system. The research also considers Data-Centric Security and blockchain distributed ledgers as enabling technologies for Ethical Control. Data-Centric Security is a combination of structured messaging, efficient compression, digital signature, and document encryption, applied in the correct order for round-trip messaging. A blockchain distributed ledger can further add integrity measures for aggregated message sets, confirming receipt, response, and sequencing without undetected message loss. When implemented, these technologies together form the end-to-end data security that ensures mutual trust and command authority in real-world operational environments, despite potentially interfering network conditions, intermittent gaps, or opponent intercept. A coherent Ethical Control approach to command and control of unmanned systems is thus feasible. This research therefore concludes that maintaining human control of unmanned systems over long durations and distances, in denied, degraded, and deceptive environments, is possible through well-defined mission orders and data-security technologies. Finally, as the human role remains essential in Ethical Control of unmanned systems, this research recommends the development of an unmanned-system qualification process for naval operations, as well as additional research prioritized by urgency and impact.
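
    As an illustration of the Data-Centric Security ordering described above (compress, sign, then encrypt for round-trip messaging), here is a minimal Python sketch. The keys, message, and the use of HMAC-SHA256 with Fernet are illustrative assumptions, not the report's specified mechanisms, which build on structured-messaging and Semantic Web standards.

        import hmac, hashlib, zlib
        from cryptography.fernet import Fernet   # third-party package

        SIGN_KEY = b"pre-shared signing key"     # hypothetical keys
        fernet = Fernet(Fernet.generate_key())

        def pack(message: bytes) -> bytes:
            # Compress, sign, then encrypt: the signature covers the
            # compressed payload, and encryption covers both.
            compressed = zlib.compress(message)
            tag = hmac.new(SIGN_KEY, compressed, hashlib.sha256).digest()
            return fernet.encrypt(tag + compressed)

        def unpack(blob: bytes) -> bytes:
            data = fernet.decrypt(blob)           # raises if tampered
            tag, compressed = data[:32], data[32:]
            expected = hmac.new(SIGN_KEY, compressed,
                                hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("signature check failed")
            return zlib.decompress(compressed)

        # Round trip: the receiver recovers and authenticates the order.
        assert unpack(pack(b"mission order")) == b"mission order"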

    Adaptive Computing Systems for Aerospace

    Today's computer systems are growing more and more complex, at a pace that requires the development of novel and more effective methodologies to automate their design. Space, in particular, represents a challenging environment: without protection from ionizing and particle radiation, CMOS-based electronics are subject to transient faults, performance degradation, accelerated wear, and, ultimately, system failure. Traditional approaches adopted to guarantee reliability and extended lifetime are based on redundancy established at design time. These solutions are expensive and sometimes inefficient, as they increase the complexity and size of a system, exposing it to higher risks of overheating and radiation-induced errors. Moreover, critical systems (e.g., time-constrained ones and those where access is limited) must be able to cope with pivotal situations without relying on human intervention. Hence the emerging interest in computer systems with adaptive capabilities as the most suitable solution for novel high-performance embedded devices for aerospace.
    Self-adaptive computing carries unmatched potential and great promise for the creation of a new generation of smart, more reliable computers, and it addresses the challenge of designing and programming modern and future computer systems that must meet conflicting goals. Drawing from the fields of artificial intelligence and reconfigurable systems, we aim to develop self-adaptive computer systems for aerospace. Our goal is to improve their efficiency, fault tolerance, and computational capabilities. The first step in this research is an experimental analysis of the most popular multi-objective design-space exploration algorithms for high-level design. These algorithms were collected from the recent literature and include heuristic, evolutionary, and statistical methods. Their comparison provides insights that we use to define guidelines for choosing the most appropriate optimization algorithms, given the features of the design space. For the creation of a self-managing optimization framework, enabling the adaptive trade-off of multiple objectives, we leverage the tools of probabilistic graphical models. We introduce a mechanism based on dynamic hidden Markov models that balances the availability and lifetime of multiprocessor systems. This is achieved by estimating the occurrence of permanent faults amid transient faults, and by dynamically migrating the computation to excess resources when failure occurs. The dynamic nature of the model makes it adjustable to different mission profiles and fault rates. The results show that we are able to extend system lifetimes while keeping availability close to ideal. On account of the stringent timing constraints imposed by aerospace systems, we then investigate the optimization of fault tolerance under real-time requirements. We propose a methodology to improve the reliability of computation in the presence of transient errors when mapping real-time tasks onto a homogeneous multiprocessor system with voltage- and frequency-scaling capabilities. In this framework, we use probability theory to define a novel trade-off between power consumption and fault tolerance. Recognizing that resilience is a pervasive property of interest (e.g., for the design and analysis of generic complex systems), we adapt a formal definition of it to a probabilistic framework, again derived from hidden Markov models. This allows us to realistically model the stochastic evolution and partial observability of complex real-world environments. Within this framework, we propose an efficient algorithm for the exact computation of the essential inference step required for generic property checking. To demonstrate the flexibility of this approach, we validate it in the context, among others, of a self-aware, reconfigurable computing system for aerospace. Finally, we extend the scope of our research to robotics and multi-agent systems, topics of thriving popularity in space exploration. We tackle the problem of connectivity assessment and maintenance in the distributed and self-adaptive context of swarm robotics. We review the limitations of existing solutions and propose a novel methodology to create connected complex geometries for multiple-task coverage. Additional contributions in the areas of (i) CubeSat design, (ii) the modelling of space radiation for FPGA fault injection, and (iii) probabilistic timing analysis for real-time systems are summarized in the appendices.
    In the author's opinion, this research provides a number of useful stepping stones toward the creation of a new generation of computing systems that autonomously, and reliably, perform their tasks for longer periods of time, fostering simpler and cheaper space exploration.
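
    The essential inference step mentioned above, estimating whether observed errors stem from a permanent fault or merely transient ones, can be illustrated with a standard hidden-Markov-model forward (filtering) recursion. Below is a minimal Python sketch with invented transition and observation probabilities; it is a generic textbook filter, not the dissertation's dynamic model.

        import numpy as np

        # Hidden states: 0 = healthy (transient errors only),
        #                1 = permanently faulty.
        T = np.array([[0.99, 0.01],   # healthy rarely turns permanent
                      [0.00, 1.00]])  # permanent faults do not heal
        # Observation likelihoods P(obs | state); columns:
        # obs 0 = error observed this cycle, obs 1 = no error.
        E = np.array([[0.05, 0.95],
                      [0.80, 0.20]])

        def filter_permanent(observations, prior=(0.99, 0.01)):
            # Forward (filtering) recursion: P(state | error history).
            belief = np.array(prior)
            for obs in observations:
                belief = (T.T @ belief) * E[:, obs]  # predict, update
                belief /= belief.sum()               # normalize
            return belief

        # Four errors in a row push the belief strongly toward
        # "permanently faulty", triggering migration to spare resources.
        print(filter_permanent([0, 0, 0, 0]))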