25 research outputs found
GPGM-SLAM: a Robust SLAM System for Unstructured Planetary Environments with Gaussian Process Gradient Maps
Simultaneous Localization and Mapping (SLAM) techniques play a key role in the long-term autonomy of mobile robots, owing to their ability to correct localization errors and produce consistent maps of an environment over time. In contrast to urban or man-made environments, where unique objects and structures offer distinctive cues for localization, the appearance of
unstructured natural environments is often ambiguous and self-similar, hindering the performance of loop closure detection. In this paper, we present an approach to improve the robustness of place
recognition in the context of a submap-based stereo SLAM system using Gaussian Process Gradient Maps (GPGMaps). GPGMaps embed a continuous representation of the gradients of the local terrain
elevation by means of Gaussian Process regression and Structured Kernel Interpolation, given solely noisy elevation measurements. We leverage the image-like structure of GPGMaps to detect loop
closures using traditional visual features and Bag of Words. GPGMap matching is performed as an SE(2) alignment to establish loop closure constraints within a pose graph. We evaluate the
proposed pipeline on a variety of datasets recorded on Mt. Etna, Sicily, and in the Morocco desert, Moon- and Mars-like environments respectively, and we compare localization performance with
state-of-the-art approaches for visual SLAM and visual loop closure detection.
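As a rough illustration of the retrieval step described above (not the authors' implementation), loop closure candidates can be ranked by comparing bag-of-words histograms of quantized local features extracted from the image-like GPGMaps; the vocabulary size, threshold, and function names here are assumptions for illustration:

```python
import numpy as np

def bow_histogram(descriptor_ids, vocab_size):
    # L2-normalized histogram of quantized feature descriptors ("visual words")
    hist = np.bincount(descriptor_ids, minlength=vocab_size).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def loop_closure_candidates(query_ids, database, vocab_size, threshold=0.5):
    # Rank stored submaps by cosine similarity of their BoW histograms
    q = bow_histogram(query_ids, vocab_size)
    scores = {key: float(q @ bow_histogram(ids, vocab_size))
              for key, ids in database.items()}
    return sorted((k for k, s in scores.items() if s >= threshold),
                  key=lambda k: -scores[k])
```

Candidates surviving the similarity threshold would then be verified geometrically, e.g. by an SE(2) alignment of the matched submaps as in the pipeline above.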
Rock segmentation in the navigation vision of the planetary rovers
Visual navigation is an essential part of planetary rover autonomy. Rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modeling. It is a challenging problem for rover autonomy because of its high computational cost, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthetic algorithm to generate synthetic images, then uses the generated images to pre-train NI-U-Net++. The synthetic algorithm increases the size of the image dataset and provides pixel-level masks, both of which are common bottlenecks in machine learning tasks. The pre-training process achieves state-of-the-art results compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ on real-life images, achieving an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into a planetary rover navigation vision system and achieves real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework requires manual annotation of only about 8% (183 images) of the 2250 images in the navigation vision dataset, making it a labor-saving solution for rock segmentation tasks. The proposed rock segmentation framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthetic algorithm improves the process of creating valid data for the challenge of rock segmentation.
All source codes, datasets, and trained models of this research are openly available in Cranfield Online Research Data (CORD)
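Since the abstract reports results as accuracy, IoU, Dice score, and RMSE, a minimal sketch of how those segmentation metrics are computed on binary masks may be useful (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def segmentation_metrics(pred, mask):
    # IoU, Dice score and RMSE between a binary prediction and a ground-truth mask
    pred = pred.astype(bool)
    mask = mask.astype(bool)
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    total = pred.sum() + mask.sum()
    iou = float(inter / union) if union else 1.0
    dice = float(2 * inter / total) if total else 1.0
    rmse = float(np.sqrt(np.mean((pred.astype(float) - mask.astype(float)) ** 2)))
    return iou, dice, rmse
```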
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 324)
This bibliography lists 200 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during May, 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance
Dense mapping based on a compact RGB-D representation for autonomous navigation
Our aim is to build ego-centric topometric maps, represented as a graph of keyframe nodes, that can be used efficiently by autonomous agents. Each keyframe node combines a spherical image and a depth map (an augmented visual sphere), synthesising the information collected in a local area of space by an embedded acquisition system. The representation of the global environment consists of a collection of augmented visual spheres that provide the necessary coverage of an operational area. A pose graph linking these spheres together in six degrees of freedom also defines the domain potentially exploitable for navigation tasks in real time. As part of this research, a map-based representation has been proposed by considering the following issues: how to robustly apply visual odometry by making the most of both the photometric and geometric information available from our augmented spherical database; how to determine the quantity and optimal placement of these augmented spheres to cover an environment completely; how to model sensor uncertainties and update the dense information of the augmented spheres; and how to compactly represent the information contained in the augmented sphere to ensure robustness, accuracy and stability along an explored trajectory by making use of saliency maps.
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 325)
This bibliography lists 192 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during June, 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance
Information Theory and Probabilistic Modeling For Robot Localization
This dissertation presents three contributions to visual perception for localization of mobile robots, based on probabilistic modeling and information theory.
First, we present the hidden Markov random field iterated closest point algorithm. When registering 3-D point clouds, some points in one cloud are expected to have no corresponding points in the other cloud. These non-correspondences are likely to occur near one another, as surface regions visible from one sensor pose are obscured or out of frame for another. In this contribution, a hidden Markov random field model is used to capture this prior within the framework of the iterative closest point (ICP) algorithm. The expectation-maximization (EM) algorithm is used to estimate the distribution parameters and learn the hidden component memberships. By robustly inferring which points lie in the overlap, the non-overlapping points can be ignored while aligning the point clouds. Experiments demonstrate that this method outperforms several other outlier rejection methods when the point clouds have low or moderate overlap.
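A minimal sketch of the soft inlier/outlier inference at the heart of such a method: plain EM on correspondence residuals with a Gaussian inlier component and a uniform outlier component. This omits the Markov random field spatial prior and the ICP alignment loop entirely; the parameter names and defaults are assumptions for illustration:

```python
import numpy as np

def em_overlap_weights(residuals, outlier_density=0.05, iters=20):
    # Soft-assign each correspondence to an inlier Gaussian or a
    # uniform outlier component via EM (no spatial coupling here)
    r = np.asarray(residuals, dtype=float)
    sigma, pi = r.std() + 1e-9, 0.5            # initial scale, inlier fraction
    for _ in range(iters):
        inlier = pi * np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        w = inlier / (inlier + (1 - pi) * outlier_density)    # E-step
        sigma = np.sqrt((w * r ** 2).sum() / w.sum()) + 1e-9  # M-step: scale
        pi = w.mean()                                         # M-step: mixing
    return w
```

Weights near 1 mark correspondences likely inside the overlap, which the alignment keeps; weights near 0 mark non-correspondences, which it ignores.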
Second, we present a method for active gaze control for localization. Sparse visual-inertial odometry relies on visual features in the environment to extract useful information from cameras. However, the distribution of these features, especially in built environments, is far from uniform, leading to loss-of-tracking failures if the camera happens to be directed at a suboptimal view. Active gaze control, directing a camera towards informative features in the environment, can improve visual-inertial localization of a mobile robot. In this contribution, informative gazes are selected using a map of landmarks and the anticipated robot trajectory. We develop an attention mechanism that executes a branch-and-bound search over potential fixations to find the optimal view, and develop a heuristic for the search specific to this problem. The method is demonstrated and verified in a simulated environment and in a real dataset taken with a fisheye lens, from which a small region of interest is selected. The results indicate that a mechanically articulated camera system is a worthwhile endeavour.
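As a toy stand-in for the fixation search (exhaustive scoring rather than branch-and-bound, and counting visible landmarks rather than an information-based objective), one might write:

```python
import numpy as np

def best_fixation(landmarks, headings, fov=np.pi / 3):
    # Score each candidate gaze heading by how many landmarks
    # fall inside the camera's field of view; return the best one
    angles = np.arctan2(landmarks[:, 1], landmarks[:, 0])
    def score(h):
        # angle-wrapped difference between landmark bearing and heading
        d = np.abs(np.angle(np.exp(1j * (angles - h))))
        return int((d <= fov / 2).sum())
    return max(headings, key=score)
```

A branch-and-bound search would instead bound the achievable score over whole intervals of headings, pruning most candidates without evaluating them individually.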
Finally, we consider an information-theoretic approach to keyframing. In visual-inertial odometry, some camera frames are designated as keyframes and retained in the estimation as newer (non-key) frames are discarded. We present a method for choosing between keeping the oldest keyframe or inserting a new keyframe based on the observed Fisher information of the resulting pose estimates. Unfortunately, the resulting method does not outperform existing keyframe heuristics, as the decision must be made using only the pose estimates at the current timestep, while keyframes persist for several more steps.
In these contributions, probabilistic modeling and information theory provide the theoretical framework to advance the capabilities of robotic localization.
Automatic Food Intake Assessment Using Camera Phones
Obesity is becoming an epidemic in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results largely relies on many uncertain factors, such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking, and are needed by both the general population and the research community.
In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation.
This thesis comprises five publications that address four specific goals of this work: (1) to develop a prototype system from existing methods in order to review the literature, identify its drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; and (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs.
A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation.
To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the marker requirement and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
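The scale-recovery idea can be sketched in one dimension: the accelerometer yields metric displacement by double integration, while the visual reconstruction yields displacement only up to an unknown scale, and their ratio recovers that scale. This is a simplified illustration assuming gravity-compensated, noise-free measurements; the function and parameter names are invented:

```python
import numpy as np

def metric_scale(visual_positions, accel, dt, v0=0.0):
    # Metric displacement from the IMU: integrate acceleration twice
    v = v0 + np.cumsum(accel) * dt            # velocity samples
    imu_disp = float(np.abs(np.sum(v) * dt))  # displacement in metres
    # Up-to-scale displacement from the visual reconstruction
    vis_disp = float(np.abs(visual_positions[-1] - visual_positions[0]))
    return imu_disp / vis_disp                # metres per visual unit
```

In practice the two sensors are fused probabilistically (e.g. in a filter) rather than by a single ratio, which also suppresses noise in both measurements.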
ENG4100 - USQ Project - Jason Pont
The aim of this project was to investigate navigation methods for supporting autonomous operations on Mars. To date there have been several robotic rover missions to Mars for the purpose of scientific exploration. These missions have relied heavily on human input for navigation, due to limited confidence in computer decision-making and the difficulty of localisation in an unknown environment with limited supporting infrastructure, such as satellite navigation. By increasing confidence in the performance of an autonomous rover on Mars, this project will contribute to increasing the efficiency of future missions by reducing or removing humans from the control loop.
Due to the signal propagation delay between Earth and Mars, a certain level of autonomy is required to ensure a rover can continue operating while awaiting instructions from a human on Earth. However, due to the level of risk in relying solely on automation, there is still considerable human intervention. This can result in significant downtime when awaiting a decision by a human operator on Earth. While acceptable for scientific missions, greater autonomy will be required for routine Mars operations.
The project reviewed systems and sensors that have been used on previous robotic missions to Mars and other experiments on Earth. The most appropriate systems were assembled into a simulated test environment consisting of a small rover, an overhead camera that might be carried by a drone or balloon and wireless communications between the systems. A machine vision algorithm was developed to test the concept of an overhead camera mounted on a drone or balloon, while evaluating different path-planning algorithms for speed in navigating a previously unknown environment. An experimental system was built consisting of a rover, fixed overhead camera and communications between them. The machine vision algorithm was used to send instructions to the rover which could then follow a path through a test environment with different obstacle densities. Two different path-finding algorithms were tested with the system.
The key outcomes of the project were the construction and testing of the system. The rover could navigate, rotate towards and travel to a target location after receiving instructions via serial radio communications. The rover could also detect obstacles using an ultrasonic sensor and send this information back to the machine vision algorithm. The algorithm would then update the path with the newly received obstacle locations, and the rover would follow the new path to the target location. By successfully testing the concept, the project showed that this system could be used to support future scientific missions, resource gathering and preparation for human exploration of Mars.
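The path-finding step in such a system is commonly a search over an occupancy grid built from the overhead camera image. The report does not name the two algorithms tested, so the following A* sketch on a 4-connected grid is only a generic illustration:

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected occupancy grid (1 = obstacle), Manhattan heuristic.
    # Returns the list of cells from start to goal, or None if unreachable.
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier, came, cost = [(h(start), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and cost[cur] + 1 < cost.get(nxt, float("inf"))):
                cost[nxt] = cost[cur] + 1
                came[nxt] = cur
                heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
    return None
```

When the rover radios back a newly detected obstacle, the corresponding cell is marked occupied and the search is simply re-run from the rover's current cell, mirroring the replanning loop described above.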