64 research outputs found

    An Incrementally Deployed Swarm of MAVs for Localization Using Ultra-Wideband

    Get PDF
    Knowing the position of a moving target can be crucial, for example when localizing a first responder in an emergency scenario. In recent years, ultra-wideband (UWB) has gained a lot of attention due to its localization accuracy. Unfortunately, UWB solutions often demand a manual setup in advance. This is tedious at best and impossible in environments with access restrictions (e.g., collapsed buildings). Thus, we propose a solution combining UWB with micro air vehicles (MAVs) to allow for UWB localization in a priori inaccessible environments. More precisely, MAVs equipped with UWB sensors are deployed incrementally into the environment. They localize themselves based on previously deployed MAVs and on-board odometry before they land and enhance the UWB mesh network themselves. We tested this solution in a lab environment using a motion capture system for ground truth. Four MAVs were deployed as anchors and a fifth MAV was localized for over 80 seconds with a root mean square (RMS) error of 0.206 m, averaged over five experiments. For comparison, a setup with ideal anchor position knowledge yielded a 20% lower RMS error, and a setup based purely on odometry an 81% higher RMS error. The absolute scale of the error with the proposed approach is expected to be low enough for the applications envisioned within the scope of this paper (e.g., the localization of a first responder), and the approach is thus considered a step towards flexible and accurate localization in a priori inaccessible, GNSS-denied environments.
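    To make the anchor-based localization step concrete, here is a minimal sketch (not the paper's implementation) of how a newly deployed MAV could estimate its position from UWB range measurements to already-deployed anchor MAVs, using Gauss-Newton least squares on the range residuals. The anchor layout, noise level, and convergence settings are illustrative, and the paper's fusion with on-board odometry is omitted.

```python
import numpy as np

def trilaterate(anchors, ranges, x0=None, iters=20):
    """Estimate a position from UWB ranges to known anchors via
    Gauss-Newton on the range residuals (illustrative sketch)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        diffs = x - anchors                        # (N, dim)
        dists = np.linalg.norm(diffs, axis=1)      # predicted ranges
        r = dists - ranges                         # range residuals
        J = diffs / dists[:, None]                 # Jacobian of ranges w.r.t. x
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# Illustrative example: four anchors, noisy ranges to an unknown point.
anchors = [[0, 0, 0], [5, 0, 0.2], [0, 5, 0.1], [5, 5, 0.3]]
true_pos = np.array([2.0, 3.0, 1.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) + np.random.normal(0, 0.05)
          for a in anchors]
print(trilaterate(anchors, ranges))
```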

    Environment Search Planning Subject to High Robot Localization Uncertainty

    Get PDF
    As robots find applications in more complex roles, ranging from search and rescue to healthcare and services, they must be robust to greater levels of localization uncertainty and uncertainty about their environments. Without accounting for such uncertainties, robots cannot compensate accordingly, potentially leading to mission failure or injury to bystanders. This work addresses the task of searching a 2D area while reducing localization uncertainty. In this setting, the environment provides low-uncertainty pose updates from short-range beacons that cover only part of the area; elsewhere the robot localizes by dead reckoning, relying on wheel-encoder and gyroscope yaw-rate information, so that outside the regions with position updates the localization error grows unconstrained over time. The work contributes a Belief Markov Decision Process formulation for solving the search problem and evaluates the performance using Partially Observable Monte Carlo Planning (POMCP). Additionally, the work contributes an approximate Markov Decision Process formulation with a reduced-complexity state representation, which is evaluated using value iteration. To provide a baseline, the Google OR-Tools package is used to solve the travelling salesman problem (TSP). Results are verified by simulating a differential drive robot in the Gazebo simulation environment. POMCP results indicate planning can be tuned to prioritize constraining uncertainty at the cost of increased path length. The MDP formulation provides consistently lower uncertainty with minimal increases in path length over the TSP solution. Both formulations show improved coverage outcomes.
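    As a rough illustration of the localization model described above (not the paper's BMDP/POMCP planner), the following sketch propagates an EKF-style pose covariance for a differential-drive robot under dead reckoning and applies a low-uncertainty position fix when a beacon is in range; all noise parameters are assumed values.

```python
import numpy as np

def propagate(x, P, v, w, dt, Q):
    """Dead-reckoning step: propagate pose (x, y, theta) and covariance P
    using velocity v (wheel encoders) and yaw rate w (gyroscope)."""
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    P_new = F @ P @ F.T + Q          # uncertainty grows without position fixes
    return x_new, P_new

def beacon_update(x, P, z, R):
    """Low-uncertainty (x, y) position fix from a nearby beacon."""
    H = np.array([[1, 0, 0], [0, 1, 0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros(3), np.eye(3) * 1e-3
Q = np.diag([1e-3, 1e-3, 1e-4])
for _ in range(50):                            # drive outside beacon coverage
    x, P = propagate(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
print("trace(P) before fix:", np.trace(P))
x, P = beacon_update(x, P, z=x[:2], R=np.eye(2) * 1e-4)
print("trace(P) after fix: ", np.trace(P))
```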

    A Comprehensive Review on Autonomous Navigation

    Full text link
    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation, which is also covered in this paper. Future work and research gaps are also discussed.

    Autonomous Navigation of Distributed Spacecraft using Graph-based SLAM for Proximity Operations in Small Celestial Bodies

    Full text link
    Establishment of a sustainable human presence beyond cislunar space is a major milestone for mankind. Small celestial bodies (SCBs) like asteroids are known to contain valuable natural resources necessary for the development of space assets essential to the accomplishment of this goal. Consequently, future robotic spacecraft missions to SCBs are envisioned with the objective of commercial in-situ resource utilization (ISRU). In mission design, there is also an increasing interest in the utilization of distributed spacecraft, to benefit from specialization and redundancy. The ability of distributed spacecraft to navigate autonomously in the proximity of an SCB is indispensable for the successful realization of ISRU mission objectives. Quasi-autonomous methods currently used for proximity navigation require extensive ground support for mapping and model development, which can be an impediment for large-scale multi-spacecraft ISRU missions in the future. It is therefore prudent to leverage advances in terrestrial robotic navigation to develop novel methods for autonomous spacecraft navigation. The primary objective of the work presented in this thesis is to evaluate the feasibility and investigate the development of methods based on graph-based simultaneous localization and mapping (SLAM), a popular algorithm used in terrestrial autonomous navigation, for the autonomous navigation of distributed spacecraft in the proximity of SCBs. To this end, recent research in graph-based SLAM is extensively studied to identify strategies used to enable multi-agent navigation. The spacecraft navigation requirement is formulated as a graph-based SLAM problem using metric GraphSLAM or topometric graph-based SLAM. Techniques developed based on the identified strategies, namely map merging, inter-spacecraft measurements, and relative localization, are then applied to this formulation to enable distributed spacecraft navigation. In each case, navigation is formulated in terms of its application to a proximity operation scenario that best suits the multi-agent navigation technique. Several challenges related to the application of graph-based SLAM to spacecraft navigation, such as computational cost and illumination variation, are also identified and addressed in the development of these methods. Experiments are performed using simulated models of asteroids and spacecraft dynamics, comparing the estimated states of the spacecraft and landmarks to the assumed true states. The results indicate a consistent and robust state determination process, suggesting the suitability of applying multi-agent graph-based SLAM techniques to enable the autonomous navigation of distributed spacecraft near SCBs.
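    For intuition about the graph-based SLAM formulation referred to above, here is a toy one-dimensional sketch (my own construction, not the thesis's SE(3) GraphSLAM) in which odometry factors, a shared landmark observation (map merging), and an inter-spacecraft relative measurement are stacked into a single linear least-squares problem; all measurement values and weights are invented for illustration.

```python
import numpy as np

# Toy 1D graph-based SLAM: two spacecraft (a0, a1, b0, b1) and one landmark l.
# State ordering: [a0, a1, b0, b1, l]. Each measurement is a linear factor
# z = x_j - x_i + noise; we stack weighted factors into a least-squares problem.
factors = [
    # (i, j, measurement, information weight)
    (None, 0, 0.0, 100.0),   # prior anchoring a0 at 0 (fixes the gauge)
    (0, 1, 1.0, 10.0),       # odometry a0 -> a1
    (None, 2, 5.0, 100.0),   # prior on b0 (e.g., from deployment geometry)
    (2, 3, 1.2, 10.0),       # odometry b0 -> b1
    (1, 4, 2.0, 5.0),        # landmark observed from a1
    (3, 4, -3.1, 5.0),       # same landmark observed from b1 (map merging)
    (1, 3, 5.2, 5.0),        # inter-spacecraft relative measurement a1 -> b1
]

n = 5
A = np.zeros((len(factors), n))
b = np.zeros(len(factors))
for k, (i, j, z, w) in enumerate(factors):
    s = np.sqrt(w)
    if i is None:            # unary prior: z = x_j
        A[k, j] = s
    else:                    # binary factor: z = x_j - x_i
        A[k, i], A[k, j] = -s, s
    b[k] = s * z

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated states [a0, a1, b0, b1, l]:", np.round(x, 3))
```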

    Advances in Robot Navigation

    Get PDF
    Robot navigation comprises different interrelated activities: perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot in selecting the next direction to go; mapping - the construction of a spatial representation using the sensory information perceived; localization - the strategy for estimating the robot's position within the spatial map; path planning - the strategy for finding a path towards a goal location, optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the above activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation are drawn from nature-inspired approaches, and diverse applications are described in the context of an important field of study: social robotics.

    Comparison of state marginalization techniques in visual inertial navigation filters

    Get PDF
    The main focus of this thesis is finding and validating an efficient visual inertial navigation system (VINS) algorithm for applications in micro aerial vehicles (MAVs). A typical VINS for a MAV consists of a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) and a monocular camera, which provides a minimum-payload sensor setup. This setup is highly desirable for MAV navigation because of the tight resource constraints of the platform. However, the bias and noise of low-cost IMUs demand sufficiently accurate VINS algorithms. Accurate VINS algorithms have been developed over the past decade, but they demand higher computational resources; resource-limited MAVs therefore require computationally efficient VINS algorithms. This thesis considers the following computational cost elements of a VINS algorithm: the feature-tracking front end, the state marginalization technique, and the complexity of the algorithm formulation. Three state-of-the-art feature-tracking front ends (the VINS-Mono front end, the MSCKF-Mono feature tracker, and a Matlab-based feature tracker) were compared in terms of accuracy. Four state-of-the-art state marginalization techniques (MSCKF-Generic marginalization, MSCKF-Mono marginalization, MSCKF two-way marginalization, and two-keyframe-based epipolar constraint marginalization) were compared in terms of accuracy and efficiency. The complexity of the VINS algorithm formulation was also compared using filter execution time. The study then presents a comparative analysis of the algorithms using publicly available MAV benchmark datasets. Based on the results, an efficient VINS algorithm suitable for MAVs is proposed.
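    As background for the marginalization techniques being compared, the sketch below shows the Schur-complement step that is the common core of information-form state marginalization; it is a generic illustration under standard Gaussian assumptions, not a reproduction of any of the specific MSCKF variants listed above.

```python
import numpy as np

def marginalize(H, b, keep, drop):
    """Marginalize the 'drop' states out of an information-form Gaussian
    (H, b) via the Schur complement, keeping the 'keep' states."""
    Hkk = H[np.ix_(keep, keep)]
    Hkd = H[np.ix_(keep, drop)]
    Hdd = H[np.ix_(drop, drop)]
    bk, bd = b[keep], b[drop]
    Hdd_inv = np.linalg.inv(Hdd)
    H_marg = Hkk - Hkd @ Hdd_inv @ Hkd.T     # Schur complement
    b_marg = bk - Hkd @ Hdd_inv @ bd
    return H_marg, b_marg

# Illustrative 4-state example: drop states 2 and 3, keep 0 and 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
H = A.T @ A + 1e-3 * np.eye(4)               # well-conditioned information matrix
b = rng.standard_normal(4)
H_m, b_m = marginalize(H, b, keep=[0, 1], drop=[2, 3])
# The marginal mean over the kept states matches the joint solution:
print(np.linalg.solve(H_m, b_m), np.linalg.solve(H, b)[:2])
```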

    Simultaneous localization and mapping for inspection robots in water and sewer pipe networks: a review

    Get PDF
    At present, water and sewer pipe networks are predominantly inspected manually. In the near future, smart cities will perform intelligent autonomous monitoring of buried pipe networks using teams of small robots. These robots, equipped with all necessary computational facilities and sensors (optical, acoustic, inertial, thermal, pressure and others), will be able to inspect pipes whilst navigating, self-localising and communicating information about the pipe condition and faults such as leaks or blockages to human operators for monitoring and decision support. The predominantly manual inspection of pipe networks will be replaced with teams of autonomous inspection robots that can operate for long periods of time over a large spatial scale. Reliable autonomous navigation and reporting of faults at this scale requires effective localization and mapping, that is, the estimation of the robot's position and its surrounding environment. This survey presents an overview of state-of-the-art work on robot simultaneous localization and mapping (SLAM) with a focus on water and sewer pipe networks. It considers various aspects of the SLAM problem in pipes, from the motivation and water industry requirements to modern SLAM methods, map types and sensors suited to pipes. Future challenges such as robustness for long-term robot operation in pipes are discussed, including how prior knowledge, e.g. from geographic information systems (GIS), can be used to build map estimates and improve multi-robot SLAM in the pipe environment.

    Active SLAM: A Review On Last Decade

    Full text link
    This article presents a comprehensive review of the Active Simultaneous Localization and Mapping (A-SLAM) research conducted over the past decade. It explores the formulation, applications, and methodologies employed in A-SLAM, particularly in trajectory generation and control-action selection, drawing on concepts from Information Theory (IT) and the Theory of Optimal Experimental Design (TOED). This review includes both qualitative and quantitative analyses of various approaches, deployment scenarios, configurations, path-planning methods, and utility functions within A-SLAM research. Furthermore, this article introduces a novel analysis of Active Collaborative SLAM (AC-SLAM), focusing on collaborative aspects within SLAM systems. It includes a thorough examination of collaborative parameters and approaches, supported by both qualitative and statistical assessments. This study also identifies limitations in the existing literature and suggests potential avenues for future research. This survey serves as a valuable resource for researchers seeking insights into A-SLAM methods and techniques, offering a current overview of A-SLAM formulation.
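    To illustrate the kind of TOED-derived utility functions the review discusses, the following sketch scores a candidate action by the D-, A-, or E-optimality of its predicted pose covariance; the covariance values and the "revisit"/"explore" candidates are hypothetical.

```python
import numpy as np

def utility(P, criterion="D"):
    """TOED-style utility of a predicted covariance P for a candidate action.
    Lower is better: less expected uncertainty after taking the action."""
    eig = np.linalg.eigvalsh(P)
    if criterion == "D":                 # D-opt: volume of the uncertainty ellipsoid
        return float(np.prod(eig))       # det(P)
    if criterion == "A":                 # A-opt: average variance
        return float(np.mean(eig))       # trace(P)/n
    if criterion == "E":                 # E-opt: worst-case direction
        return float(np.max(eig))
    raise ValueError(criterion)

# Compare two hypothetical candidate trajectories by their predicted covariances.
P_revisit = np.diag([0.02, 0.03, 0.01])   # loop closure expected: low uncertainty
P_explore = np.diag([0.20, 0.15, 0.05])   # new area: more map gain, more drift
for name, P in [("revisit", P_revisit), ("explore", P_explore)]:
    print(name, {c: round(utility(P, c), 4) for c in "DAE"})
```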

    Single and multiple stereo view navigation for planetary rovers

    Get PDF
    © Cranfield University. This thesis deals with the challenge of autonomous navigation of the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques - as done in the literature - an inviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features that are robust to illumination changes, together with unique matching and association of features, is a sought-after capability. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with a moment image representation: the former provides efficient feature detection, while the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. The addition of local feature descriptors then guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetric reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed motion estimation solutions. The developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results show the methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.
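    As a rough illustration of the feature detection and bucketing strategy described above (not the thesis's implementation, and omitting the moment-image and descriptor stages), the sketch below computes a Harris corner response with OpenCV and keeps at most one strong corner per grid cell so that features are spread evenly across the image; the grid size and thresholds are assumptions.

```python
import cv2
import numpy as np

def bucketed_harris(gray, grid=(8, 8), block=2, ksize=3, k=0.04):
    """Harris response map, then keep the single strongest corner per grid
    cell so features are distributed homogeneously across the image."""
    resp = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    h, w = gray.shape
    gh, gw = h // grid[0], w // grid[1]
    corners = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = resp[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            if cell.size == 0:
                continue
            i, j = np.unravel_index(np.argmax(cell), cell.shape)
            if cell[i, j] > 0.01 * resp.max():   # simple response threshold
                corners.append((c * gw + j, r * gh + i))
    return corners

# Illustrative usage on a synthetic image with a bright square (four corners).
img = np.zeros((128, 128), np.uint8)
img[32:96, 32:96] = 255
print(len(bucketed_harris(img)), "corners kept after bucketing")
```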