
    Automatic coarse co-registration of point clouds from diverse scan geometries: a test of detectors and descriptors

    Point clouds are nowadays collected by a plethora of sensors, some with higher accuracy and higher cost, others with lower accuracy but also lower cost. Not only is there a large choice of sensors, but these can also be carried by different platforms, which provide different scan geometries. In this work we test four keypoint detectors and three feature descriptors. We benchmark their performance in terms of calculation time, and we assess their accuracy in the coarse automatic co-registration of two clouds collected with different sensors, platforms and scan geometries. The first cloud, which we define as having the higher accuracy and therefore use as reference, was surveyed via a UAV flight with a Riegl MiniVUX-3; the other from a bicycle with a Livox Horizon over a walking path with uneven ground. The novelty of this work lies in comparing several strategies for the fast alignment of point clouds from very different surveying geometries, as the drone has a bird's eye view and the bicycle a ground-based view. An added challenge is the lower cost of the bicycle sensor ensemble, which, together with the rough terrain, reasonably results in a lower-accuracy survey. The main idea is to use range images to capture a simplified version of the geometry of the surveyed area and then find the best features to match keypoints. Results show that NARF features detected more keypoints and resulted in a faster co-registration procedure in this scenario, whereas the accuracy of the co-registration was similar across all combinations of keypoint detectors and descriptors.
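    The pipeline described above (keypoint detection on simplified geometry, descriptor matching, coarse alignment) can be sketched in Python with Open3D. NARF detection is only exposed through PCL's C++ API, so the sketch below substitutes voxel downsampling plus FPFH descriptors with RANSAC feature matching; it is a minimal stand-in under those assumptions, not the authors' implementation.

    # Hedged sketch: coarse co-registration of two point clouds via keypoint
    # features. FPFH descriptors stand in for the NARF features used above.
    import open3d as o3d

    def coarse_register(source, target, voxel=0.5):
        """Estimate a rigid transform aligning `source` onto `target`."""
        def preprocess(pcd):
            down = pcd.voxel_down_sample(voxel)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down,
                o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
            return down, fpfh

        src_down, src_fpfh = preprocess(source)
        tgt_down, tgt_fpfh = preprocess(target)
        # RANSAC over feature correspondences yields the coarse alignment.
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src_down, tgt_down, src_fpfh, tgt_fpfh,
            mutual_filter=True,
            max_correspondence_distance=voxel * 1.5,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            ransac_n=3,
            criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        return result.transformation

    # Usage (file names are placeholders):
    # uav = o3d.io.read_point_cloud("uav_minivux.ply")    # reference cloud
    # bike = o3d.io.read_point_cloud("bike_livox.ply")    # cloud to align
    # T = coarse_register(bike, uav)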

    Evaluation of 3D CNN Semantic Mapping for Rover Navigation

    Terrain assessment is a key aspect for autonomous exploration rovers: recognition of the surrounding environment is required for multiple purposes, such as optimal trajectory planning and autonomous target identification. In this work we present a technique to generate accurate three-dimensional semantic maps for the Martian environment. The algorithm takes as input a stereo image acquired by a camera mounted on a rover. First, images are labeled with DeepLabv3+, an encoder-decoder Convolutional Neural Network (CNN). Then, the labels obtained by the semantic segmentation are combined with stereo depth maps in a voxel representation. We evaluate our approach on the ESA Katwijk Beach Planetary Rover Dataset. Comment: To be presented at the 7th IEEE International Workshop on Metrology for Aerospace (MetroAerospace).
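    As a hedged illustration of the fusion step described above (semantic labels combined with stereo depth in a voxel representation), the following sketch back-projects labeled pixels through assumed pinhole intrinsics and majority-votes a class per voxel; names and parameters are illustrative, not those of the paper.

    # Hedged sketch: fuse per-pixel labels with stereo depth into voxels.
    import numpy as np
    from collections import Counter, defaultdict

    def semantic_voxels(depth, labels, fx, fy, cx, cy, voxel=0.1):
        """depth: HxW metres; labels: HxW class ids -> {voxel index: class}."""
        v, u = np.nonzero(depth > 0)           # pixels with valid depth
        z = depth[v, u]
        x = (u - cx) * z / fx                  # pinhole back-projection
        y = (v - cy) * z / fy
        pts = np.column_stack([x, y, z])
        keys = np.floor(pts / voxel).astype(int)
        votes = defaultdict(Counter)
        for key, lab in zip(map(tuple, keys), labels[v, u]):
            votes[key][int(lab)] += 1          # per-voxel class voting
        return {k: c.most_common(1)[0][0] for k, c in votes.items()}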

    Metrological characterization of a vision-based system for relative pose measurements with fiducial marker mapping for spacecrafts

    An improved approach for measuring the relative pose between a target and a chaser spacecraft is presented. The selected method is based on a single camera, which can be mounted on the chaser, and a plurality of fiducial markers, which can be mounted on the external surface of the target. The measurement procedure comprises a closed-form solution of the Perspective-n-Point (PnP) problem, a RANdom SAmple Consensus (RANSAC) procedure, a non-linear local optimization and a global Bundle Adjustment refinement of the marker map and relative poses. A metrological characterization of the measurement system is performed using an experimental set-up that can impose rotations combined with a linear translation and measure them. The rotation and position measurement errors are calculated with reference instrumentation, and their uncertainties are evaluated by the Monte Carlo method. The experimental laboratory tests highlight the significant improvements provided by the Bundle Adjustment refinement. Moreover, a set of possible influencing physical parameters is defined, and their correlations with the rotation and position errors and uncertainties are analyzed. Using both quantitative numerical correlation coefficients and qualitative graphical representations, the parameters most significant for the final measurement errors and uncertainties are determined. The obtained results give clear indications and advice for the design of future measurement systems and for the selection of the marker positioning on a satellite surface.
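    The closed-form PnP plus RANSAC stage can be illustrated with OpenCV's aruco module (4.7+ API) and solvePnPRansac. The marker map and calibration inputs are assumptions, and the paper's non-linear optimization and Bundle Adjustment refinement are omitted; this is a sketch, not the characterized system.

    # Hedged sketch: fiducial detection followed by RANSAC-robust PnP.
    import cv2
    import numpy as np

    def relative_pose(image, marker_map, K, dist):
        """marker_map: {marker_id: 4x3 corner coords in the target frame}."""
        detector = cv2.aruco.ArucoDetector(
            cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
        corners, ids, _ = detector.detectMarkers(image)
        obj, img = [], []
        for c, i in zip(corners, ids.ravel() if ids is not None else []):
            if i in marker_map:
                obj.append(marker_map[i])      # 3D corners on the target
                img.append(c.reshape(4, 2))    # their 2D detections
        if not obj:
            return None
        obj = np.concatenate(obj).astype(np.float32)
        img = np.concatenate(img).astype(np.float32)
        # Closed-form PnP inside a RANSAC loop rejects corner outliers.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, dist)
        return (rvec, tvec) if ok else None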

    Simulation Framework for Mobile Robots in Planetary-Like Environments

    In this paper we present a simulation framework for evaluating the navigation and localization metrological performance of a robotic platform. The simulator, based on ROS (Robot Operating System) and Gazebo, is targeted at a planetary-like research vehicle and allows testing various perception and navigation approaches under specific environment conditions. The possibility of simulating arbitrary sensor setups comprising cameras, LiDARs (Light Detection and Ranging) and IMUs makes Gazebo an excellent resource for rapid prototyping. In this work we evaluate a variety of open-source visual and LiDAR SLAM (Simultaneous Localization and Mapping) algorithms in a simulated Martian environment. Datasets are captured by driving the rover and recording sensor outputs as well as the ground truth for a precise performance evaluation. Comment: To be presented at the 7th IEEE International Workshop on Metrology for Aerospace (MetroAerospace).
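    A common way to score SLAM estimates against recorded ground truth is the absolute trajectory error (ATE) after rigid alignment. The sketch below is a minimal version of that metric, assuming time-matched Nx3 position arrays; it is not necessarily the evaluation code used in the paper.

    # Hedged sketch: ATE RMSE after Kabsch alignment of the two trajectories.
    import numpy as np

    def ate_rmse(gt, est):
        """gt, est: Nx3 matched positions -> RMSE after rigid alignment."""
        mu_g, mu_e = gt.mean(0), est.mean(0)
        H = (est - mu_e).T @ (gt - mu_g)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_g - R @ mu_e
        aligned = est @ R.T + t
        return np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1)))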

    Occupancy grid mapping for rover navigation based on semantic segmentation

    Obstacle mapping is a fundamental building block of the autonomous navigation pipeline of many robotic platforms, such as planetary rovers. Nowadays, occupancy grid mapping is a widely used tool for obstacle perception. It represents the environment as evenly spaced cells, whose posterior probability of being occupied is updated based on range sensor measurements. In more classic approaches, a cell is marked as occupied at the point where the ray emitted by the range sensor encounters an obstacle, such as a wall. The main limitation of such methods is that they cannot identify planar obstacles, such as slippery, sandy, or rocky soils. In this work, we use the measurements of a stereo camera combined with a pixel-labeling technique based on Convolutional Neural Networks to identify rocky obstacles in a planetary environment. Once identified, the obstacles are converted into a scan-like model. The relative pose between successive frames is estimated using the ORB-SLAM algorithm. The final step updates the occupancy grid map using the Bayes update rule. To evaluate the metrological performance of the proposed method, images from a Martian analogue dataset, the ESA Katwijk Beach Planetary Rover Dataset, have been used. The evaluation has been performed by comparing the generated occupancy map with a manually segmented orthomosaic map, obtained from a drone survey of the reference area.
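    The Bayes update on an occupancy grid is usually implemented in log-odds form, where each observation becomes a simple addition. A minimal sketch, with an assumed inverse sensor model and illustrative constants:

    # Hedged sketch: log-odds occupancy update (cell traversal simplified).
    import numpy as np

    L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed sensor model)

    def update_cell(grid, i, j, hit):
        """grid: 2D array of log-odds; hit: True if the ray ended in (i, j)."""
        grid[i, j] += L_OCC if hit else L_FREE
        grid[i, j] = np.clip(grid[i, j], -5.0, 5.0)  # keep the filter responsive

    def occupancy_probability(grid):
        return 1.0 - 1.0 / (1.0 + np.exp(grid))      # log-odds -> probability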

    Visual odometry and vision system measurements based algorithms for rover navigation

    Planetary exploration rovers should be capable of operating autonomously, even over long paths, with minimal human input. Control operations must be minimized in order to reduce traverse time, optimize the resources allocated for telecommunications and maximize the scientific output of the mission. Knowing the goal position and considering the vehicle dynamics, control algorithms have to provide the appropriate inputs to the actuators. Path-planning algorithms use three-dimensional models of the surrounding terrain in order to safely avoid obstacles. Moreover, rovers for the sample-and-return missions planned for the coming years will have to demonstrate the capability to return to a previously visited place to sample scientific data or to deliver a sample to an ascent vehicle. Motion measurement is a fundamental task in rover control, and the planetary environment presents some specific issues: wheel odometry has wide uncertainty due to the slippage of wheels on sandy surfaces, inertial measurement suffers from drift, and GPS-like positioning systems are not available on extraterrestrial planets. Vision systems have proven to be reliable and accurate motion-tracking measurement methods. One of these methods is stereo Visual Odometry: stereo processing allows the three-dimensional location of landmarks observed by a pair of cameras to be estimated by means of triangulation, and point cloud matching between two subsequent frames allows the stereo-camera motion to be computed. Thanks to Visual SLAM (Simultaneous Localization and Mapping) techniques, a rover is able to reconstruct a consistent map of the environment and to localize itself with respect to this map. SLAM presents two main advantages: the construction of a map of the environment and more accurate motion tracking, thanks to the solution of a large minimization problem which involves multiple camera poses and measurements of map landmarks. After rover touchdown, one of the key tasks of the operations center is the accurate measurement of the rover position in inertial and body-fixed coordinate systems, such as the J2000 frame and the Mars Body-Fixed (MBF) frame. For engineering and science operations, high-precision global localization and detailed Digital Elevation Models (DEM) of the landing site are crucial. The first part of this dissertation treats the problem of localizing a rover with respect to geo-referenced and ortho-rectified satellite images, and with respect to a digital elevation model (DEM) built from satellite images. A sensitivity analysis of the outputs of the Visual Position Estimator for Rover (VIPER) algorithm is presented. By comparing the local skyline, extracted from a panoramic image, with skylines rendered from a Digital Elevation Model (DEM), the algorithm retrieves the camera position and orientation relative to the DEM map. This algorithm has been proposed as part of the localization procedure of the Rover Operation Control Center (ROCC), located at ALTEC, to localize the ExoMars 2020 rover after landing and to initialize and verify the rover guidance and navigation outputs. Images from the Mars Exploration Rover mission and HiRISE DEMs have been used to test the algorithm's performance. During the rover traverse, Visual Odometry methods can then be used to refine the path estimate.
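    A hedged sketch of the skyline-matching idea behind VIPER follows: candidate DEM positions are scored by comparing a rendered skyline elevation profile with the profile extracted from the panoramic image, allowing an azimuth offset for the unknown heading. The render_skyline function is a stand-in for the DEM ray-casting step, which is not reproduced here.

    # Hedged sketch: exhaustive skyline matching over candidate DEM positions.
    import numpy as np

    def best_pose(observed_skyline, candidates, render_skyline):
        """observed_skyline: elevation angle per azimuth bin (1D array).
        candidates: iterable of (x, y) DEM positions to test.
        Returns ((x, y), azimuth shift) and the residual error."""
        best, best_err = None, np.inf
        for xy in candidates:
            rendered = render_skyline(xy)      # same azimuth binning assumed
            # Allow an azimuth offset: the rover's heading is unknown.
            errs = [np.sqrt(np.mean((observed_skyline - np.roll(rendered, s)) ** 2))
                    for s in range(len(rendered))]
            if min(errs) < best_err:
                best_err, best = min(errs), (xy, int(np.argmin(errs)))
        return best, best_err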
    The second part of this dissertation presents an experimental analysis of how the landmark distribution in a scene, as observed by a stereo camera, affects Visual Odometry performance. Translational and rotational tests have been performed at many different positions in an indoor environment. The implemented Visual Odometry algorithm first guesses the motion by a linear 3D-to-3D method embedded within a RANdom SAmple Consensus (RANSAC) process to remove outliers; motion is then estimated from the inliers by minimizing the Euclidean distance between the triangulated landmarks. The last part of this dissertation has been developed in collaboration with the NASA Jet Propulsion Laboratory and presents an innovative visual localization method for hopping and tumbling platforms. These new mobility systems for the exploration of comets, asteroids, and other small Solar System bodies require new approaches to localization. The choice of a monocular onboard camera for perception is constrained by the rover's limited weight and size. Visual localization near the surface of small bodies is difficult due to large scale changes, frequent occlusions, high contrast, rapidly changing shadows and relatively featureless terrains. A synergistic localization and mapping approach between the mother spacecraft and the deployed hopping/tumbling daughter-craft rover has been studied and developed. Various open-source visual SLAM algorithms were evaluated; among them, ORB-SLAM2 was chosen and adapted for this application. The possibility of saving the map built from orbiter observations and re-loading it for rover localization has been introduced, and the map can now be fused with other orbiter pose measurements. The accuracy of the collaborative localization method has been estimated: a series of realistic images of an asteroid mockup was captured, and a Vicon system provided the trajectory ground truth. In addition, the method's robustness to illumination changes was evaluated.
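    The linear 3D-to-3D motion step described above has a well-known closed-form core (the Horn/Kabsch SVD solution) that can be wrapped in a RANSAC loop. The sketch below is a minimal version with illustrative thresholds and sample counts, not the dissertation's implementation.

    # Hedged sketch: RANSAC 3D-to-3D motion estimation between landmark sets.
    import numpy as np

    def rigid_from_pairs(P, Q):
        """Least-squares R, t with Q ~ R @ P + t; P, Q are Nx3 (Kabsch)."""
        mp, mq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, mq - R @ mp

    def ransac_motion(P, Q, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
        """P, Q: matched triangulated landmarks in two subsequent frames."""
        best_inliers = np.zeros(len(P), bool)
        for _ in range(iters):
            idx = rng.choice(len(P), 3, replace=False)   # minimal sample
            R, t = rigid_from_pairs(P[idx], Q[idx])
            inliers = np.linalg.norm(Q - (P @ R.T + t), axis=1) < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return rigid_from_pairs(P[best_inliers], Q[best_inliers])  # refit on inliers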

    Design, construction and calibration of a probe for measuring the velocity of a payload on board a stratospheric balloon, and reconstruction of its attitude and trajectory

    Get PDF
    This thesis work deals with the design, construction and calibration of a probe for measuring the descent velocity of a payload. The probe was built within the MISSUS project, which flew on board the ESA BEXUS 15 stratospheric balloon. In order to interpret the measurements made by the experiment, its attitude and trajectory were reconstructed. The sampled data were processed by means of multi-stage wavelet analysis.
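    Multi-stage wavelet processing of a sampled signal can be sketched with PyWavelets; the wavelet family, decomposition level, and soft-threshold rule below are assumptions for illustration, not the choices made in the thesis.

    # Hedged sketch: multi-level wavelet decomposition with soft thresholding.
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale estimate
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]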

    3D Radiometric Mapping by Means of LiDAR SLAM and Thermal Camera Data Fusion

    The ability to produce 3D maps with infrared radiometric information is of great interest for many applications, such as rover navigation, industrial plant monitoring, and rescue robotics. In this paper, we present a system for large-scale thermal mapping based on the fusion of IR thermal images and 3D LiDAR point cloud data. The alignment between the point clouds and the thermal images is carried out using the extrinsic camera-to-LiDAR parameters, obtained by means of a dedicated calibration process. The rover's trajectory, which is necessary for point cloud registration, is obtained by means of a LiDAR Simultaneous Localization and Mapping (SLAM) algorithm. Finally, the registered and merged thermal point clouds are represented through an OcTree data structure, where each voxel is associated with the average temperature of the 3D points contained within it. Furthermore, the paper presents in detail the method for determining the extrinsic parameters, which is based on the identification of a hot cardboard box. Both methods were validated in a laboratory environment and outdoors. The developed system is shown to locate a thermal object with an accuracy of up to 9 cm in a 45 m map with a voxel size of 14 cm.
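    The final voxelization step, where each voxel stores the average temperature of the points it contains, can be sketched as follows; a hash map over integer voxel indices stands in for the OcTree, and the 14 cm voxel size mirrors the value reported above.

    # Hedged sketch: per-voxel mean temperature over registered thermal points.
    import numpy as np
    from collections import defaultdict

    def voxelize_thermal(points, temps, voxel=0.14):
        """points: Nx3 metres; temps: N temperatures -> {voxel index: mean T}."""
        sums = defaultdict(lambda: [0.0, 0])
        for key, T in zip(map(tuple, np.floor(points / voxel).astype(int)), temps):
            sums[key][0] += float(T)           # running temperature sum
            sums[key][1] += 1                  # point count per voxel
        return {k: s / n for k, (s, n) in sums.items()}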