
    Robotic Olfactory-Based Navigation with Mobile Robots

    Robotic odor source localization (OSL) is a technology that enables mobile robots or autonomous vehicles to find an odor source in unknown environments. It is considered challenging because of the turbulent nature of airflows and the resulting odor plume characteristics. The key to correctly finding an odor source is an effective olfactory-based navigation algorithm, which guides the robot to detect emitted odor plumes as cues for finding the source. This dissertation proposes three olfactory-based navigation methods that improve search efficiency while maintaining a low computational cost, incorporating different machine learning and artificial intelligence techniques.

    A. Adaptive Bio-inspired Navigation via Fuzzy Inference Systems. In nature, animals use olfaction to perform many life-essential activities, such as homing, foraging, mate-seeking, and evading predators. Inspired by the mate-seeking behaviors of male moths, this method presents a behavior-based navigation algorithm for use on a mobile robot to locate an odor source. Unlike traditional bio-inspired methods, which use fixed parameters to formulate robot search trajectories, a fuzzy inference system is designed to perceive the environment and adjust trajectory parameters based on the current search situation. The robot can automatically adapt the scale of its search trajectories to fit environmental changes and balance the exploration and exploitation of the search.

    B. Olfactory-based Navigation via Model-based Reinforcement Learning Methods. This method frames odor source localization as a reinforcement learning problem. During the odor plume tracing process, the belief state of a partially observable Markov decision process model is adapted to generate a source probability map that estimates possible odor source locations. A hidden Markov model is employed to produce a plume distribution map that predicts plume propagation areas. Both source and plume estimates are fed to the robot. A decision-making model based on a fuzzy inference system dynamically fuses information from the two maps and balances the exploitation and exploration of the search. After assigning the fused information to reward functions, a value iteration-based path planning algorithm solves for the optimal action policy (sketched below).

    C. Robotic Odor Source Localization via Deep Learning-based Methods. This method investigates the viability of using deep learning algorithms to solve the odor source localization problem. The primary objective is to obtain a deep learning model that guides a mobile robot to find an odor source without explicitly defined search strategies. To achieve this goal, two kinds of deep learning models, an adaptive neuro-fuzzy inference system (ANFIS) and deep neural networks (DNNs), are employed to generate the olfactory-based navigation strategies. Multiple training data sets are acquired by applying two traditional methods in both simulation and on-vehicle tests. After supervised training, the deep learning models are verified against unseen search situations in simulation and real-world environments.

    All proposed algorithms are implemented in simulation and on-vehicle tests to verify their effectiveness. Experimental results show that the proposed algorithms outperform traditional methods in terms of success rate and average search time. Finally, future research directions are presented at the end of the dissertation.
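    The following Python sketch illustrates the planning step described in method B: a source probability map and a plume distribution map are fused into a single reward map, and value iteration over a grid world yields the action policy. The grid size, the scalar fusion weight, the discount factor, and the simple weighted-sum fusion are illustrative assumptions standing in for the fuzzy decision-making model and reward design of the dissertation, not the author's implementation.

    # Minimal sketch of the value-iteration planning step in method B, assuming a
    # 2-D grid world. Grid size, fusion weight, and discount factor are
    # illustrative stand-ins, not values from the dissertation.
    import numpy as np

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # grid moves: up, down, left, right

    def fuse_maps(source_map, plume_map, w_exploit=0.6):
        # Blend the source probability map (exploitation) with the plume
        # distribution map (exploration) into one reward map.
        return w_exploit * source_map + (1.0 - w_exploit) * plume_map

    def value_iteration(reward, gamma=0.95, tol=1e-4, max_iter=500):
        # Deterministic grid MDP: moving into a cell yields that cell's reward.
        rows, cols = reward.shape
        V = np.zeros_like(reward)
        for _ in range(max_iter):
            V_new = np.empty_like(V)
            for r in range(rows):
                for c in range(cols):
                    q_values = []
                    for dr, dc in ACTIONS:
                        nr = min(max(r + dr, 0), rows - 1)
                        nc = min(max(c + dc, 0), cols - 1)
                        q_values.append(reward[nr, nc] + gamma * V[nr, nc])
                    V_new[r, c] = max(q_values)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
        return V

    def greedy_action(V, reward, pos, gamma=0.95):
        # Extract the action policy: move toward the highest-valued neighbour.
        rows, cols = reward.shape
        r, c = pos
        scores = []
        for dr, dc in ACTIONS:
            nr = min(max(r + dr, 0), rows - 1)
            nc = min(max(c + dc, 0), cols - 1)
            scores.append(reward[nr, nc] + gamma * V[nr, nc])
        return ACTIONS[int(np.argmax(scores))]

    # Placeholder maps standing in for the POMDP belief and HMM plume estimates.
    rng = np.random.default_rng(0)
    source_map = rng.random((20, 20)); source_map /= source_map.sum()
    plume_map = rng.random((20, 20)); plume_map /= plume_map.sum()
    V = value_iteration(fuse_maps(source_map, plume_map))
    print(greedy_action(V, fuse_maps(source_map, plume_map), pos=(10, 10)))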

    Machine-Insect Interface: Spatial Navigation of a Mobile Robot by a Drosophila

    Animal-machine interfaces, including machine-insect interfaces, have been studied in detail over the past few decades. In our study, we develop a machine-insect interface in which an untethered fruit fly (Drosophila melanogaster) is tracked to remotely control a mobile robot. We develop the Active Omni-directional Treadmill (AOT) and integrate it into the mobile robot to create the interface between the robot and the fruit fly. In this system, a fruit fly walks on top of a transparent ball, and its position is tracked using the dark-field imaging technique. The displacement of the fly is balanced out by a counter-displacement of the transparent ball, actuated by omni-directional wheels, to keep the fly at the same position on the ball; the mobile robot then navigates spatially based on the fly's movements (sketched below). The Robot Operating System (ROS) is used to interface the ball tracker and the mobile robot wirelessly. This study will help in investigating the fly's behavior in different situations, such as its response to a physical or virtual stimulus. Future work on this project will include imaging the brain activity of the Drosophila as it spatially navigates toward a stimulus.
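    The sketch below illustrates the counter-displacement idea under stated assumptions: a hypothetical get_fly_offset() tracker reading, a proportional gain KP, and the conventional ROS 'cmd_vel' velocity topic. It is not the interface built in the study; the omni-wheel treadmill commands are only indicated by a comment.

    # Minimal sketch of the counter-displacement loop, assuming a planar fly
    # tracker and a ROS velocity topic. get_fly_offset(), the gain, and the
    # 'cmd_vel' topic name are illustrative assumptions.
    import rospy
    from geometry_msgs.msg import Twist

    KP = 2.0        # gain mapping fly offset (m) to ball counter-velocity (m/s)
    RATE_HZ = 100   # control-loop rate

    def get_fly_offset():
        # Placeholder for the dark-field tracker: the fly's (x, y) offset from
        # the top of the ball, in metres. A real system would read camera data.
        return 0.0, 0.0

    def main():
        rospy.init_node('aot_interface')
        cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=10)
        rate = rospy.Rate(RATE_HZ)
        while not rospy.is_shutdown():
            dx, dy = get_fly_offset()
            # Counter-rotate the ball to keep the fly centred...
            ball_vx, ball_vy = -KP * dx, -KP * dy
            # ...and mirror that signal as the robot's velocity command, so the
            # robot reproduces the motion the fly was trying to make.
            cmd = Twist()
            cmd.linear.x = -ball_vx
            cmd.linear.y = -ball_vy
            cmd_pub.publish(cmd)
            # Omni-wheel commands for the treadmill itself would be issued here.
            rate.sleep()

    if __name__ == '__main__':
        main()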

    Macroscopic Modeling of Aggregation Experiments using Embodied Agents in Teams of Constant and Time-Varying Sizes
