1,052 research outputs found

    Robotic Olfactory-Based Navigation with Mobile Robots

    Get PDF
    Robotic odor source localization (OSL) is a technology that enables mobile robots or autonomous vehicles to find an odor source in unknown environments. It has been viewed as challenging due to the turbulent nature of airflows and the resulting odor plume characteristics. The key to correctly finding an odor source is designing an effective olfactory-based navigation algorithm, which uses detected odor plumes as cues to guide the robot toward the source. This dissertation proposes three kinds of olfactory-based navigation methods that improve search efficiency while maintaining a low computational cost, incorporating different machine learning and artificial intelligence methods.
    A. Adaptive Bio-inspired Navigation via Fuzzy Inference Systems. In nature, animals use olfaction to perform many life-essential activities, such as homing, foraging, mate-seeking, and evading predators. Inspired by the mate-seeking behaviors of male moths, this method presents a behavior-based navigation algorithm for use on a mobile robot to locate an odor source. Unlike traditional bio-inspired methods, which use fixed parameters to formulate robot search trajectories, a fuzzy inference system is designed to perceive the environment and adjust trajectory parameters based on the current search situation. The robot can automatically adapt the scale of its search trajectories to fit environmental changes and balance the exploration and exploitation of the search.
    B. Olfactory-based Navigation via Model-based Reinforcement Learning Methods. This method formulates odor source localization as a reinforcement learning problem. During the odor plume tracing process, the belief state of a partially observable Markov decision process model is adapted to generate a source probability map that estimates possible odor source locations. A hidden Markov model is employed to produce a plume distribution map that predicts plume propagation areas. Both source and plume estimates are fed to the robot. A decision-making model based on a fuzzy inference system is designed to dynamically fuse information from the two maps and balance the exploitation and exploration of the search. After assigning the fused information to reward functions, a value iteration-based path planning algorithm solves for the optimal action policy.
    C. Robotic Odor Source Localization via Deep Learning-based Methods. This method investigates the viability of implementing deep learning algorithms to solve the odor source localization problem. The primary objective is to obtain a deep learning model that guides a mobile robot to find an odor source without explicitly specified search strategies. To achieve this goal, two kinds of deep learning models, an adaptive neuro-fuzzy inference system (ANFIS) and deep neural networks (DNNs), are employed to generate the olfactory-based navigation strategies. Multiple training data sets are acquired by applying two traditional methods in both simulation and on-vehicle tests to train the deep learning models. After supervised training, the deep learning models are verified with unseen search situations in simulation and real-world environments.
    All proposed algorithms are implemented in simulation and on-vehicle tests to verify their effectiveness. Experimental results show that the proposed algorithms outperform traditional methods in terms of success rate and average search time. Finally, future research directions are presented at the end of the dissertation.
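    The map-fusion and value-iteration step described in part B can be pictured with a short sketch. The code below is a rough illustration, not the dissertation's implementation: it runs value iteration on a grid whose reward is a weighted combination of a hypothetical source probability map and plume distribution map, and the scalar weight w merely stands in for the fuzzy inference output.

```python
import numpy as np

def value_iteration(reward, gamma=0.95, iters=200):
    """Greedy grid policy via value iteration.

    reward: 2-D array, e.g. a fused source-probability / plume map.
    Returns the value function and, per cell, the index of the best
    4-connected move (0: up, 1: down, 2: left, 3: right).
    """
    def neighbour_values(V):
        # Value of the cell reached by each move, with edge padding so
        # that moving off the grid keeps the robot in place.
        up    = np.vstack([V[:1], V[:-1]])
        down  = np.vstack([V[1:], V[-1:]])
        left  = np.hstack([V[:, :1], V[:, :-1]])
        right = np.hstack([V[:, 1:], V[:, -1:]])
        return np.stack([up, down, left, right])

    V = np.zeros_like(reward, dtype=float)
    for _ in range(iters):
        V = reward + gamma * neighbour_values(V).max(axis=0)
    return V, neighbour_values(V).argmax(axis=0)

# Hypothetical maps standing in for the POMDP belief (source estimate)
# and the HMM output (plume estimate); w plays the role of the fuzzy
# exploration/exploitation weight described in the abstract.
rng = np.random.default_rng(0)
source_map, plume_map = rng.random((20, 20)), rng.random((20, 20))
w = 0.7
values, policy = value_iteration(w * source_map + (1 - w) * plume_map)
```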

    Adaptive and learning-based formation control of swarm robots

    Get PDF
    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be achieved by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking controller for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy, and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
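    A loose sketch of the reward shaping mentioned above is given below: it combines a flocking-maintenance term, a mutual goal-tracking reward, and a collision penalty over a set of UAV positions. The weights, reference spacing, and safety distance are hypothetical placeholders rather than the values used in the thesis, and the DDPG training itself is omitted.

```python
import numpy as np

def flocking_reward(positions, leader_goal, d_ref=1.0, d_safe=0.5,
                    w_flock=1.0, w_goal=1.0, w_coll=10.0):
    """Illustrative swarm reward with the three terms named in the
    abstract: flocking maintenance, a mutual (goal-tracking) reward,
    and a collision penalty.  All constants are hypothetical.

    positions  : (N, 2) array of UAV positions.
    leader_goal: (2,) navigation target followed by the leader.
    """
    n = len(positions)
    # Pairwise distances between UAVs (upper triangle only).
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    pair = dist[np.triu_indices(n, k=1)]

    # 1) Flocking maintenance: penalise deviation from a reference spacing.
    r_flock = -w_flock * np.mean((pair - d_ref) ** 2)
    # 2) Mutual / navigation reward: stay close to the leader's goal.
    r_goal = -w_goal * np.mean(np.linalg.norm(positions - leader_goal, axis=1))
    # 3) Collision penalty: large negative term for any pair that is too close.
    r_coll = -w_coll * np.sum(pair < d_safe)
    return r_flock + r_goal + r_coll
```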

    Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving

    Full text link
    Tactical decision making for autonomous driving is challenging due to the diversity of environments, the uncertainty in the sensor information, and the complex interaction with other road users. This paper introduces a general framework for tactical decision making, which combines the concepts of planning and learning in the form of Monte Carlo tree search and deep reinforcement learning. The method is based on the AlphaGo Zero algorithm, which is extended to a domain with a continuous state space where self-play cannot be used. The framework is applied to two different highway driving cases in a simulated environment, and it is shown to perform better than a commonly used baseline method. The strength of combining planning and learning is also illustrated by a comparison with using the Monte Carlo tree search or the neural network policy separately.
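    For concreteness, the snippet below sketches the PUCT selection rule at the core of AlphaGo Zero-style searches, in which a policy-network prior biases the tree search; the Node class, exploration constant, and action handling are illustrative assumptions and do not reproduce the paper's driving environment or network.

```python
import math

class Node:
    """One search-tree node per (state, action): visit count, value sum,
    and the policy-network prior for that action."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # action -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_action(node, c_puct=1.5):
    """PUCT rule: trade off the learned prior (exploration) against the
    value estimate accumulated by the search (exploitation)."""
    total = sum(child.visits for child in node.children.values())
    return max(
        node.children.items(),
        key=lambda kv: kv[1].q()
        + c_puct * kv[1].prior * math.sqrt(total + 1) / (1 + kv[1].visits),
    )[0]
```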

    Towards autonomous adaptive behavior in a bioinspired CNN-controlled robot

    Get PDF
    This paper describes a general approach for the unsupervised learning of behaviors in a behavior-based robot. The key idea is to formalize each behavior as the output of a Motor Map driven by an adaptive reward function. The aim of the adaptive reward function is to select the most significant sensory inputs and to use them in the best way. The greatest challenge is to keep the search space small. Motor Map learning relies on the classical Kohonen algorithm, while the structure of the reward function is learned through a non-associative reinforcement learning algorithm. Simulation results on a six-legged biologically inspired robot confirm the suitability of the approach. This methodology allows the human designer to easily embed all the a priori knowledge into the robot controller, while providing at the same time a high degree of adaptability and robustness against sensor malfunctions.
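    A minimal, reward-modulated Kohonen-style motor map is sketched below to make the idea concrete; the class interface, neighbourhood function, and update rule are simplified assumptions, not the exact learning scheme of the paper.

```python
import numpy as np

class MotorMap:
    """Toy Kohonen-style motor map: each neuron stores a sensory
    prototype and a motor command.  The reward-modulated output update
    is a simplified stand-in for the scheme described in the paper."""
    def __init__(self, n_neurons, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(size=(n_neurons, in_dim))    # sensory prototypes
        self.w_out = rng.normal(size=(n_neurons, out_dim))  # motor commands

    def winner(self, x):
        # Index of the neuron whose prototype best matches the input.
        return int(np.argmin(np.linalg.norm(self.w_in - x, axis=1)))

    def update(self, x, action, reward, lr=0.1, sigma=1.0):
        """Classical Kohonen step on the prototypes, plus a nudge of the
        motor outputs toward actions that earned a positive reward."""
        i = self.winner(x)
        idx = np.arange(len(self.w_in))
        h = np.exp(-((idx - i) ** 2) / (2 * sigma ** 2))   # 1-D neighbourhood
        self.w_in += lr * h[:, None] * (x - self.w_in)
        if reward > 0:
            self.w_out += lr * reward * h[:, None] * (action - self.w_out)
```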

    Intelligent Haptic Perception for Physical Robot Interaction

    Get PDF
    Doctorate in Mechatronic Engineering. Doctoral thesis submitted: 8 January 2020; defended: 30 March 2020. The dream of having robots living among us is coming true thanks to the recent advances in Artificial Intelligence (AI). The gap that still exists between that dream and reality will be filled by scientific research, but manifold challenges are yet to be addressed. Handling the complexity and uncertainty of real-world scenarios is still the major challenge in robotics nowadays. In this respect, novel AI methods are giving robots the capability to learn from experience and therefore to cope with real-life situations. Moreover, we live in a physical world in which physical interactions are both vital and natural. Thus, robots that are being developed to live among humans must perform tasks that require physical interaction. Haptic perception, conceived as the ability to feel and process tactile and kinesthetic sensations, is essential for making this physical interaction possible. This research is inspired by the dream of having robots among us and therefore addresses the challenge of developing robots with haptic perception capabilities that can operate in real-world scenarios. This PhD thesis tackles problems related to physical robot interaction by employing machine learning techniques. Three AI solutions are proposed for different physical robot interaction challenges: i) Grasping and manipulation of human limbs; ii) Tactile object recognition; iii) Control of Variable-Stiffness-Link (VSL) manipulators. The ideas behind this research work have potential robotic applications such as search and rescue, healthcare, and rehabilitation. This dissertation is a compendium of publications whose main body is a compilation of previously published scientific articles: a total of five papers published in prestigious peer-reviewed scientific journals and at international robotics conferences.

    Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

    Get PDF
    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state of the art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

    Hexapod locomotion: a nonlinear dynamical systems approach

    Get PDF
    The ability to walk over a wide variety of terrains is one of the most important features of hexapod insects. In this paper we describe a bio-inspired controller able to generate locomotion and to switch between different types of gaits for a hexapod robot. Motor patterns are generated by coupled Central Pattern Generators formulated as nonlinear oscillators. These patterns are modulated by a drive signal that proportionally changes the oscillators' frequency and amplitude as well as the coupling parameters among the oscillators. Locomotion initiation, stopping, and smooth gait switching are achieved by changing the drive signal. We also demonstrate a posture controller for hexapod robots using the dynamical systems approach. Results from simulation using a model of the Chiara hexapod robot demonstrate the capability of the controller for both locomotion generation and smooth gait transitions. The postural controller is also tested in different situations in which the hexapod robot is expected to maintain balance, and the presented results confirm its reliability.
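    As a generic illustration of the coupled-oscillator CPG idea (not the specific model used in the paper), the code below integrates a ring of Hopf oscillators whose frequency and amplitude scale with a common drive signal; the coupling matrix and constants are arbitrary choices for the example.

```python
import numpy as np

def cpg_step(x, y, drive, K, dt=0.01):
    """One Euler step of coupled Hopf oscillators, a common CPG form.

    x, y  : oscillator states, shape (n,)
    drive : scalar drive signal; here it scales both the intrinsic
            frequency and the target amplitude, as described in the text.
    K     : (n, n) coupling matrix between oscillators.
    """
    omega = 2.0 * np.pi * drive          # frequency grows with the drive
    mu = drive ** 2                      # squared target amplitude
    r2 = x ** 2 + y ** 2
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x + K @ y   # coupling through the y states
    return x + dt * dx, y + dt * dy

# Example: six oscillators (one per leg) with nearest-neighbour coupling.
n = 6
K = 0.2 * (np.eye(n, k=1) + np.eye(n, k=-1))
x, y = np.random.rand(n) * 0.1, np.random.rand(n) * 0.1
for _ in range(1000):
    x, y = cpg_step(x, y, drive=1.0, K=K)
# x can then serve as the rhythmic joint set-point for each leg.
```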

    Adaptive low-level control of autonomous underwater vehicles using deep reinforcement learning

    Get PDF
    Low-level control of autonomous underwater vehicles (AUVs) has been extensively addressed by classical control techniques. However, the variable operating conditions and hostile environments faced by AUVs have driven researchers towards the formulation of adaptive control approaches. The reinforcement learning (RL) paradigm is a powerful framework which has been applied in different formulations of adaptive control strategies for AUVs. However, the limitations of RL approaches have led to the emergence of deep reinforcement learning, which has become an attractive and promising framework for developing real adaptive control strategies to solve complex control problems for autonomous systems. Still, most existing applications of deep RL use video images to train the decision-making agent, and obtaining camera images solely for AUV control purposes could be costly in terms of energy consumption. Moreover, rewards are not easily obtained directly from video frames. In this work we develop a deep RL framework for adaptive control of AUVs based on an actor-critic, goal-oriented deep RL architecture, which takes the available raw sensory information as input and outputs continuous control actions, namely the low-level commands for the AUV's thrusters. Experiments on a real AUV demonstrate the applicability of the proposed deep RL approach to an autonomous robot control problem.
    Authors: Ignacio Carlucho, Mariano de Paula, and Gerardo Gabriel Acosta: Centro de Investigaciones en Física e Ingeniería del Centro de la Provincia de Buenos Aires (UNCPBA - CONICET - CIC), Argentina. Sen Wang and Yvan Petillot: Heriot-Watt University, United Kingdom.
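    To illustrate the kind of actor-critic architecture described above (raw sensor vector in, continuous thruster commands out), here is a minimal PyTorch sketch; the layer sizes, 12-dimensional observation, and five thrusters are assumptions for the example, not the networks used in the paper.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: raw sensor vector -> thruster commands in [-1, 1]."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Q-function: (observation, action) -> scalar value estimate."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

# Hypothetical dimensions: e.g. 12 raw sensor channels, 5 thrusters.
actor, critic = Actor(12, 5), Critic(12, 5)
action = actor(torch.randn(1, 12))   # low-level thruster commands
```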
