44 research outputs found
Safe Local Exploration for Replanning in Cluttered Unknown Environments for Micro-Aerial Vehicles
In order to enable Micro-Aerial Vehicles (MAVs) to assist in complex,
unknown, unstructured environments, they must be able to navigate with
guaranteed safety, even when faced with a cluttered environment they have no
prior knowledge of. While trajectory optimization-based local planners have
been shown to perform well in these cases, prior work either does not address
how to deal with local minima in the optimization problem, or solves it by
using an optimistic global planner.
We present a conservative trajectory optimization-based local planner,
coupled with a local exploration strategy that selects intermediate goals. We
perform extensive simulations to show that this system performs better than the
standard approach of using an optimistic global planner, and also outperforms
doing a single exploration step when the local planner is stuck. The method is
validated through experiments in a variety of highly cluttered environments
including a dense forest. These experiments show the complete system running in
real time fully onboard an MAV, mapping and replanning at 4 Hz.
Comment: Accepted to ICRA 2018 and RA-L 201
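The local exploration strategy described above selects intermediate goals when trajectory optimization gets stuck in a local minimum. A minimal sketch of that idea, assuming locally sampled candidate goals and an `is_safe` predicate standing in for the conservative planner's collision check (both hypothetical interfaces, not the paper's actual API):

```python
import math

def select_intermediate_goal(candidates, goal, is_safe):
    """Pick the safe candidate closest to the global goal.

    candidates: list of (x, y) intermediate goals sampled locally.
    goal: (x, y) global goal.
    is_safe: predicate approximating the conservative planner's
             collision check (hypothetical interface).
    """
    safe = [c for c in candidates if is_safe(c)]
    if not safe:
        return None  # planner stuck: trigger a wider exploration step
    return min(safe, key=lambda c: math.dist(c, goal))

# Toy usage: treat everything past x = 5 as blocked by obstacles.
blocked = lambda p: p[0] <= 5.0
best = select_intermediate_goal([(1, 0), (4, 2), (6, 0)], (10.0, 0.0), blocked)
# best is the safe candidate (4, 2), closest to the goal among safe ones
```

Returning `None` when no safe candidate exists mirrors the paper's fallback of taking further exploration steps rather than committing to an optimistic path.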
Simultaneous Localization and Mapping (SLAM): An Overview
Positioning is a need for many applications related to mapping and navigation, in both civilian and military domains. Significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, and related fields have made it possible to solve the positioning problem efficiently and instantaneously, empowering the applications and advancement of autonomous navigation. One of the most interesting positioning techniques developed is what robotics calls Simultaneous Localization and Mapping (SLAM). Solutions to the SLAM problem have improved quickly over recent decades, using either active sensors such as RAdio Detection And Ranging (Radar) and Light Detection and Ranging (LiDAR), or passive sensors such as cameras. Positioning and mapping is one of the main tasks of Geomatics engineers, so it is important for them to understand SLAM; this is not easy, given the huge body of documentation and algorithms available and the variety of SLAM solutions in terms of mathematical models, complexity, sensors used, and types of application. In this paper, a clear and simplified explanation of SLAM is introduced from a Geomatics viewpoint, avoiding the complicated algorithmic details behind the presented techniques. A general overview of SLAM is presented, showing the relationships between its components and stages, such as the core front-end and back-end parts and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization in both visual and LiDAR SLAM, and summarize the efficient contributions of deep learning to the SLAM problem. Finally, we address examples of existing practical applications of SLAM.
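Pose graph optimization, one of the back-end techniques the overview covers, can be illustrated in its simplest form: poses are nodes, relative measurements are edges, and the optimizer finds poses that best satisfy all constraints in a least-squares sense. A minimal 1D sketch (my own toy formulation, not from the paper):

```python
import numpy as np

def optimize_1d_pose_graph(odometry, loop, anchor=0.0):
    """Least-squares solve of a tiny 1D pose graph.

    odometry: list of relative measurements z_i ~ x_{i+1} - x_i
    loop: (i, j, z) loop-closure constraint z ~ x_j - x_i
    The first pose is anchored at `anchor` to fix the gauge freedom.
    """
    n = len(odometry) + 1
    rows, b = [], []
    # prior on x_0 (anchor)
    e = np.zeros(n); e[0] = 1.0
    rows.append(e); b.append(anchor)
    # odometry edges: x_{i+1} - x_i = z
    for i, z in enumerate(odometry):
        e = np.zeros(n); e[i] = -1.0; e[i + 1] = 1.0
        rows.append(e); b.append(z)
    # loop-closure edge: x_j - x_i = z
    i, j, z = loop
    e = np.zeros(n); e[i] = -1.0; e[j] = 1.0
    rows.append(e); b.append(z)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return x

# Odometry claims the robot moved 3 units in three steps, but a loop
# closure says the total displacement was 2.7: the residual is spread
# evenly over the cycle, giving x = [0, 0.925, 1.85, 2.775].
x = optimize_1d_pose_graph([1.0, 1.0, 1.0], (0, 3, 2.7))
```

Real SLAM back-ends solve the same kind of problem over 6-DoF poses with robust, sparse nonlinear solvers; the structure (nodes, edges, least squares) is identical.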
Skyline matching: absolute localisation for planetary exploration rovers
Skyline matching is a technique for absolute localisation framed in the category of autonomous long-range exploration. Absolute localisation becomes crucial for planetary exploration, to recalibrate position during long traverses or to estimate position with no a priori information. In this project, a skyline matching algorithm is proposed, implemented, and evaluated on real acquisitions and simulated data. The algorithm compares the skyline extracted from rover images against skylines rendered from orbital data. The results are promising, but intensive testing on more real data is needed to further characterise the algorithm.
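The core comparison step can be sketched compactly. Assuming both skylines are reduced to elevation-versus-azimuth profiles over 360 bins (an assumption about the representation, not the project's actual pipeline), circular cross-correlation recovers the azimuth offset that best aligns the observed profile with the one rendered from orbital data:

```python
import numpy as np

def match_heading(observed, rendered):
    """Estimate the azimuth offset (in bins) between an observed
    skyline elevation profile and one rendered from orbital terrain
    data, via FFT-based circular cross-correlation.
    """
    obs = observed - observed.mean()
    ren = rendered - rendered.mean()
    # circular cross-correlation peaks at the shift that aligns the profiles
    corr = np.fft.ifft(np.fft.fft(obs) * np.conj(np.fft.fft(ren))).real
    return int(np.argmax(corr))

# Toy usage: a profile rotated by 40 bins is recovered exactly.
rng = np.random.default_rng(0)
rendered = rng.normal(size=360)
observed = np.roll(rendered, 40)
shift = match_heading(observed, rendered)  # 40
```

In a full system this 1D search over heading would be repeated over candidate positions to localise in both position and orientation.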
Aerial Field Robotics
Aerial field robotics research represents the domain of study that aims to
equip unmanned aerial vehicles - and as it pertains to this chapter,
specifically Micro Aerial Vehicles (MAVs) - with the ability to operate in
real-life environments that present challenges to safe navigation. We present
the key elements of autonomy for MAVs that are resilient to collisions and
sensing degradation, while operating under constrained computational resources.
We overview aspects of the state of the art, outline bottlenecks to resilient
navigation autonomy, and overview the field-readiness of MAVs. We conclude with
notable contributions and discuss considerations for future research that are
essential for resilience in aerial robotics.
Comment: Accepted in the Encyclopedia of Robotics, Springer
Semantically-enhanced Deep Collision Prediction for Autonomous Navigation using Aerial Robots
This paper contributes a novel and modularized learning-based method for
aerial robots navigating cluttered environments containing hard-to-perceive
thin obstacles without assuming access to a map or the full pose estimation of
the robot. The proposed solution builds upon a semantically-enhanced
Variational Autoencoder that is trained with both real-world and simulated
depth images to compress the input data, while preserving semantically-labeled
thin obstacles and handling invalid pixels in the depth sensor's output. This
compressed representation, in addition to the robot's partial state involving
its linear/angular velocities and its attitude, is then utilized to train an
uncertainty-aware 3D Collision Prediction Network in simulation to predict
collision scores for candidate action sequences in a predefined motion
primitives library. A set of simulation and experimental studies in cluttered
environments with various sizes and types of obstacles, including multiple
hard-to-perceive thin objects, were conducted to evaluate the performance of
the proposed method and compare against an end-to-end trained baseline. The
results demonstrate the benefits of the proposed semantically-enhanced deep
collision prediction for learning-based autonomous navigation.
Comment: 8 pages, 8 figures. Accepted to the IEEE/RSJ International Conference on Intelligent Robots and Systems 202
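The final selection step, ranking motion primitives by predicted collision score, can be sketched as follows. Here `predict` is a hypothetical stand-in for the uncertainty-aware collision network, returning a (mean, std) collision estimate per action sequence; the risk-aversion weight is my assumption, not a value from the paper:

```python
def select_action(primitives, predict):
    """Rank candidate action sequences from a motion-primitives
    library by predicted collision risk and return the safest.

    predict: seq -> (mean, std) collision estimate; a hypothetical
    stand-in for the uncertainty-aware collision prediction network.
    """
    def risk(seq):
        mean, std = predict(seq)
        return mean + 1.0 * std  # penalize uncertain predictions (assumed weight)
    return min(primitives, key=risk)

# Toy usage with a fixed prediction table:
table = {"straight": (0.9, 0.05), "left": (0.2, 0.1), "right": (0.25, 0.02)}
action = select_action(list(table), lambda s: table[s])
# "right" wins: 0.25 + 0.02 = 0.27 beats "left"'s 0.2 + 0.1 = 0.30
```

Penalizing the predictive standard deviation makes the planner prefer primitives the network is confident about, which is the usual motivation for uncertainty-aware scoring.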
Neural Network based Robot 3D Mapping and Navigation using Depth Image Camera
Robotics research has developed rapidly in the past decade. However, bringing robots into household or office environments, where they must cooperate well with humans, still requires more research. One of the main problems is robot localization and navigation: to accomplish its missions, a mobile robot must localize itself in the environment, find the best path, and navigate to the goal. Navigation methods can be categorized into map-based and map-less navigation. In this research we propose a neural-network-based method that uses a depth image camera to solve the robot navigation problem. With a depth camera, the surrounding environment can be recognized regardless of lighting conditions, and a neural-network-based approach is fast enough for real-time navigation, which is important for fully autonomous robots.

In our method, the robot maps and annotates the surrounding environment using a Feed-Forward Neural Network and a CNN. The resulting 3D map contains not only the geometric information of the environment but also its semantic content, which robots need to accomplish their tasks. For instance, consider the task "Go to the cabinet to take a medicine": the robot needs the positions of the cabinet and the medicine, which a purely geometric map does not supply. The Feed-Forward Neural Network is trained to convert depth information from depth images into 3D points in real-world coordinates, and the CNN is trained to segment the image into classes. By combining the two networks, objects in the environment are segmented and their positions determined.

We implemented the proposed method on a mobile humanoid robot. The robot first moves through the environment and builds the 3D map with objects placed at their positions; it then uses the map for goal-directed navigation. The experimental results show good performance in terms of 3D map accuracy and robot navigation. Most objects in the working environments are classified by the trained CNN; unrecognized objects are classified by the Feed-Forward Neural Network. As a result, the generated maps accurately reflect the working environments and can be used by robots to navigate safely within them. The 3D geometric maps can be generated regardless of lighting conditions, and the proposed localization method is robust even in texture-less environments, the toughest case for vision-based localization.
Doctoral dissertation (Engineering), Hosei University
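The depth-to-3D conversion that the Feed-Forward Neural Network learns has a well-known closed form under the pinhole camera model, shown here for reference (`fx`, `fy`, `cx`, `cy` are assumed camera intrinsics; this is the standard back-projection, not the thesis's learned mapping):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into 3D camera-frame points with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Invalid pixels (Z <= 0) are skipped.

    depth: 2D list/array of depth values Z indexed as depth[v][u].
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point).
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # sensor reported no valid depth here
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy usage: a single valid pixel at u=1, v=0 with Z=2 and unit intrinsics.
pts = depth_to_points([[0.0, 2.0]], 1.0, 1.0, 0.0, 0.0)  # [(2.0, 0.0, 2.0)]
```

Learning this mapping instead of using the closed form lets the network absorb lens distortion and sensor-specific bias, at the cost of needing training data.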
Dense mapping based on a compact RGB-D representation dedicated to autonomous navigation
Our aim centres on building ego-centric topometric maps, represented as graphs of keyframe nodes, which can be efficiently used by autonomous agents. Each keyframe node, which combines a spherical image and a depth map (an augmented visual sphere), synthesises the information collected in a local area of space by an embedded acquisition system. The representation of the global environment consists of a collection of augmented visual spheres that provide the necessary coverage of an operational area. A "pose" graph linking these spheres together in six degrees of freedom also defines the domain potentially exploitable for real-time navigation tasks. As part of this research, an approach to map-based representation has been proposed by considering the following issues: how to robustly apply visual odometry by making the most of both the photometric and geometric information available in our augmented spherical database; how to determine the quantity and optimal placement of these augmented spheres to cover an environment completely; how to model sensor uncertainties and update the dense information of the augmented spheres; and how to compactly represent the information contained in the augmented sphere, making use of saliency maps, to ensure robustness, accuracy, and stability along an explored trajectory.
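A typical first step when localising against such a keyframe graph is selecting the nearest augmented sphere before dense registration. A minimal sketch, simplifying the 6-DoF pose graph to node positions only (my simplification, not the thesis's data structure):

```python
import math

def nearest_sphere(pose_graph, query_xyz):
    """Return the id of the augmented visual sphere whose node
    position is closest to the query position.

    pose_graph: dict mapping sphere id -> (x, y, z) node position,
    a simplified stand-in for the full 6-DoF pose graph.
    """
    return min(pose_graph, key=lambda k: math.dist(pose_graph[k], query_xyz))

# Toy usage: two spheres along the x axis; a query at x=4 picks sphere 1.
graph = {0: (0.0, 0.0, 0.0), 1: (5.0, 0.0, 0.0)}
sphere_id = nearest_sphere(graph, (4.0, 0.0, 0.0))  # 1
```

The chosen sphere then serves as the reference frame for photometric/geometric visual odometry against the current view.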
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available