
    Evaluation of an Area-Based matching algorithm with advanced shape models

    Scientific institutions involved in planetary mapping are developing new strategies to produce accurate, high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of new image matching algorithms (e.g., Semi-Global Matching) that yield superior results (in particular, smoother and more continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and automating the process. Important changes have been made to the correlation algorithm, while maintaining its high precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.
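    To make the idea of iterative least-squares matching concrete, the sketch below refines a translation-only shift between a template and a search image. It is a simplified illustration, not the DM implementation: DM adapts a richer geometric transformation with different shape functions, whereas this sketch estimates only two shift parameters; all function and variable names are hypothetical.

```python
# Minimal translation-only Least Squares Matching (LSM) sketch.
# Iteratively refines the sub-pixel shift of `template` inside `search`,
# starting from the integer position (x0, y0) of the template's
# upper-left corner in the search image.
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def lsm_refine(template, search, x0, y0, iterations=10, tol=1e-3):
    h, w = template.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = 0.0, 0.0
    for _ in range(iterations):
        # Resample the search patch at the current (shifted) position.
        coords = np.vstack([(yy + y0 + dy).ravel(), (xx + x0 + dx).ravel()])
        patch = map_coordinates(search, coords, order=1).reshape(h, w)
        # Image gradients of the resampled patch form the design matrix.
        gx = sobel(patch, axis=1) / 8.0
        gy = sobel(patch, axis=0) / 8.0
        A = np.column_stack([gx.ravel(), gy.ravel()])
        l = (template - patch).ravel()              # observation vector
        upd, *_ = np.linalg.lstsq(A, l, rcond=None)
        dx, dy = dx + upd[0], dy + upd[1]
        if np.hypot(upd[0], upd[1]) < tol:          # convergence check
            break
    return dx, dy
```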

    A Systematic Solution to Multi-Instrument Coregistration of High-Resolution Planetary Images to an Orthorectified Baseline

    We address the problem of automatically coregistering planetary images to a common baseline, introducing a novel generic technique that achieves unprecedented robustness to different image inputs, thus making batch-mode coregistration achievable without the usual parameter tweaking. We introduce a novel image matching technique that boosts matching performance even under the most strenuous circumstances, and we validate it through an extensive multi-instrument experimental setup that includes images from eight high-resolution data sets of Mars and the Moon. The technique is further tested in batch-mode processing, in which approximately 1.6% of all high-resolution Martian imagery is coregistered to a common baseline.
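    The paper's matching technique is not detailed in this abstract; as a point of reference, the sketch below shows the kind of plain normalized cross-correlation (NCC) matching that such coregistration methods are typically measured against. It is a baseline illustration only, with hypothetical names, and is far less robust than the technique described above.

```python
# Baseline illustration: exhaustive normalized cross-correlation matching
# of a template against a search window (not the paper's method).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(search, template):
    """Return the best (row, col) offset of `template` in `search` and its score."""
    sh, sw = search.shape
    th, tw = template.shape
    best_score, best_rc = -1.0, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            score = ncc(search[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
```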

    Massive stereo-based DTM production for Mars on cloud computers

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists still struggle to create good quality DTMs that meet their science needs, especially when a large number of high quality DTMs must be produced with "free" software. In this paper, we describe a new open source system that overcomes many of these obstacles, demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new, fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), while providing improved stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL) Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), and are browseable and visualisable through the iMars web-based Geographic Information System (webGIS).
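    Tie-point based co-registration to a reference baseline ultimately comes down to fitting and applying a geometric transform from matched points. The sketch below shows a minimal least-squares fit of a 2D affine transform from tie points and the resulting residuals; it is an illustration of that single step under simplifying assumptions (synthetic tie points, planar transform), not the CASP-GO multi-resolution procedure.

```python
# Minimal sketch: fit a 2D affine transform from tie points by least
# squares and report the residuals of the co-registration.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) tie-point coordinates (N >= 3).
    Returns a (3, 2) matrix M such that dst ≈ [x, y, 1] @ M."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Synthetic example: a pure shift between image and reference coordinates.
src = np.array([[10.0, 12.0], [400.5, 33.2], [250.1, 480.9], [15.3, 460.0]])
dst = src + np.array([2.5, -1.75])
M = fit_affine(src, dst)
residuals = np.linalg.norm(apply_affine(M, src) - dst, axis=1)
print("RMS tie-point residual:", np.sqrt((residuals ** 2).mean()))
```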

    PERFORMANCE EVALUATION OF 3DPD, THE PHOTOGRAMMETRIC PIPELINE FOR THE CASSIS STEREO IMAGES

    A novel photogrammetric pipeline has been designed by INAF-Padova for the processing of the recent stereo images of CaSSIS, and it will be a starting point for the future procedures that will be applied to Stereo Camera (STC) (Cremonese, 2009; Da Deppo, 2010) images for the BepiColombo mission to Mercury. The large number of stereo pairs being generated has made it necessary for several teams to attempt to generate products. The presented procedures are the two strategies (proposed by INAF-Padova and by EPFL Lausanne) currently available in an international effort to generate 3D products from the CaSSIS images. The comparisons presented here are the first of several such efforts and are important to make the planetary community aware of the accuracy of the available 3D data. Furthermore, the possibility of using higher accuracy DTMs, such as those from HiRISE, makes the quality assessment of CaSSIS stereo products robust and important for assessing the data to be provided to the scientific community. The performance evaluation of the INAF-Padova pipeline (the 3DPD software) is the main objective of this work. Additionally, the correlation phase of 3DPD is compared with that of ASP (Moratto, 2010), which is integrated in the EPFL pipeline.
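    A quality assessment against a higher-accuracy reference DTM usually reduces to difference statistics computed on a common grid. The sketch below shows such a check under the assumption that the evaluated DTM and the reference (e.g., a HiRISE DTM) are already resampled and co-registered onto the same grid, with NaN marking no-data cells; the statistics chosen here are generic, not those prescribed by the 3DPD evaluation.

```python
# DTM quality check against a co-registered, higher-resolution reference.
import numpy as np

def dtm_difference_stats(dtm, reference):
    diff = dtm - reference
    valid = diff[np.isfinite(diff)]
    median = np.median(valid)
    mad = np.median(np.abs(valid - median))      # robust spread estimate
    return {
        "mean_offset_m": float(valid.mean()),
        "rms_m": float(np.sqrt((valid ** 2).mean())),
        "median_m": float(median),
        "nmad_m": float(1.4826 * mad),           # ~1 sigma for Gaussian errors
        "coverage": float(valid.size / diff.size),
    }
```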

    Elevation and Deformation Extraction from TomoSAR

    3D SAR tomography (TomoSAR) and 4D SAR differential tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks to provide an essential extension of SAR interferometry for many applications, sensing complex scenes with multiple scatterers mapped into the same SAR pixel cell. However, these techniques are still affected by DEM uncertainty, temporal decorrelation, orbital, tropospheric and ionospheric phase distortion, and height blurring. In this thesis, these techniques are explored. As part of this exploration, the systematic procedures for DEM generation, DEM quality assessment, DEM quality improvement and DEM applications are first studied. In addition, this thesis covers the whole cycle of systematic methods for 3D & 4D TomoSAR imaging for height and deformation retrieval, from problem formulation, through the development of methods, to testing on real SAR data. After introducing DEM generation from spaceborne bistatic InSAR (TanDEM-X) and airborne photogrammetry (Bluesky), a new DEM co-registration method with line feature validation (river network lines, ridgelines, valley lines, crater boundary features and so on) is developed and demonstrated to assist the study of wide-area DEM data quality. This DEM co-registration method aligns two DEMs irrespective of the linear distortion model, which significantly improves the accuracy of vertical DEM comparison and is suitable and helpful for DEM quality assessment. A systematic TomoSAR algorithm and method have been established, tested, analysed and demonstrated for various applications (urban buildings, bridges, dams) to achieve better 3D & 4D tomographic SAR imaging results. These include applying COSMO-SkyMed X-band single-polarisation data over the Zipingpu dam, Dujiangyan, Sichuan, China, to map topography, and using ALOS L-band data in the San Francisco Bay region to map urban buildings and bridges. A new ionospheric correction method, based on tiled IGS TEC data, a split-spectrum approach and an ionospheric model fitted via least squares, is developed to correct ionospheric distortion and improve the accuracy of 3D & 4D tomographic SAR imaging. Meanwhile, a pixel-by-pixel orbit baseline estimation method is developed to address the research gap of baseline estimation for 3D & 4D spaceborne SAR tomography imaging. Moreover, a SAR tomography imaging algorithm and a differential tomography four-dimensional SAR imaging algorithm based on compressive sensing, InSAR phase calibration referenced to a DEM with DEM error correction, and a new phase error calibration and compensation algorithm based on PS, SVD, PGA, weighted least squares and minimum entropy are developed to obtain accurate 3D & 4D tomographic SAR imaging results. The new baseline estimation method and the consequent TomoSAR processing results showed that accurate baseline estimation is essential to build up the TomoSAR model. After baseline estimation, phase calibration experiments (via FFT and the Capon method) indicate that a phase calibration step is indispensable for TomoSAR imaging, as it eventually influences the inversion results. A super-resolution reconstruction study based on CS demonstrates that X-band data with the CS method are not suitable for forest reconstruction but work for the reconstruction of large civil engineering structures such as dams and urban buildings. Meanwhile, L-band data with the FFT, Capon and CS methods are shown to work for the reconstruction of large man-made structures (such as bridges) and urban buildings.
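    To illustrate the FFT (classical beamforming) and Capon spectra mentioned above, the following sketch estimates the elevation-direction power spectrum for a single pixel of a multi-baseline stack. It is a heavily simplified, single-look illustration under placeholder geometry (the kz expression omits the incidence-angle term), and it does not reproduce the thesis's calibration, multi-looking or compressive-sensing inversion.

```python
# Single-pixel tomographic spectra along elevation: beamforming vs. Capon.
import numpy as np

def steering_vectors(baselines, heights, wavelength, slant_range):
    # kz maps perpendicular baseline to elevation (height) frequency
    # (simplified: incidence-angle term omitted).
    kz = 4.0 * np.pi * baselines / (wavelength * slant_range)
    return np.exp(1j * np.outer(kz, heights))        # (n_baselines, n_heights)

def tomo_spectra(pixel_stack, baselines, heights, wavelength, slant_range):
    """pixel_stack: complex samples of one pixel over the N acquisitions."""
    A = steering_vectors(baselines, heights, wavelength, slant_range)
    # Classical (Fourier) beamforming spectrum: |a^H y|^2 per trial height.
    bf = np.abs(A.conj().T @ pixel_stack) ** 2
    # Capon needs a covariance estimate; a diagonally loaded single-look
    # outer product stands in for a proper multi-look estimate here.
    y = pixel_stack[:, None]
    R = y @ y.conj().T + 1e-3 * np.eye(len(pixel_stack))
    R_inv = np.linalg.inv(R)
    capon = 1.0 / np.real(np.einsum('ij,ik,kj->j', A.conj(), R_inv, A))
    return bf, capon
```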

    High-performance hardware accelerators for image processing in space applications

    Mars is a hard place to reach. While there have been many notable success stories in getting probes to the Red Planet, the historical record is full of bad news. The success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Mars environment. Strong winds frequently blow in the Martian atmosphere; this phenomenon usually alters the lander's descent trajectory, diverting it from the target one. Moreover, the Martian surface is not an easy place to land safely: it is pitted by many closely spaced craters and huge stones, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure due to landing in a large crater, on big stones, or on steep terrain is highly probable. In recent years, all space agencies have increased their research efforts to enhance the success rate of Mars missions. In particular, the two most active research topics are active debris removal and guided landing on Mars. The former aims at finding new methods to remove space debris with unmanned spacecraft. These must be able to autonomously detect a piece of debris, analyse it in order to extract its characteristics in terms of mass, speed and dimensions, and eventually rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities; in other words, it must be able to take pictures and process them with complex image processing algorithms in order to detect, track and analyse the debris. The latter aims at increasing landing point precision (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems. For instance, recent exploration missions, e.g., Spirit, Opportunity, and Curiosity, used an EDL procedure that followed a fixed, precomputed descent trajectory to reach a precise landing point. This approach guarantees a landing point precision of at best 20 km. Comparing this figure with the characteristics of the Mars environment, it is clear that the probability of mission failure remains very high. A very challenging problem is therefore to design an autonomously guided EDL system able to further reduce the landing ellipse, guaranteeing that landings in dangerous areas of the Martian surface (e.g., large craters or big stones) that could lead to mission failure are avoided. The autonomous behaviour of the system is mandatory, since a manually driven approach is not feasible given the distance between Earth and Mars: since this distance varies from approximately 56 to 100 million km due to orbital eccentricity, even if signal transmission at the speed of light were possible, in the best case the transmission time would be around 31 minutes, exceeding the overall duration of the EDL phase. In both applications, the algorithms must guarantee self-adaptability to the environmental conditions. Since the harsh conditions of Mars (and of space in general) are difficult to predict at design time, these algorithms must be able to automatically tune their internal parameters depending on the current conditions. Moreover, real-time performance is another key factor.
Since a software implementation of these computationally intensive tasks cannot reach the required performance, these algorithms must be accelerated in hardware. For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity has focused on both the algorithms and their hardware implementations. Concerning the former, I mainly focused my research effort on integrating self-adaptability features into the existing algorithms. Concerning the latter, I studied and validated a methodology to efficiently develop, verify and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that significantly outperform current state-of-the-art implementations. The thesis is organized in four main chapters. Chapter 2 starts with a brief introduction to the history of digital image processing. The main content of this chapter is a description of space missions in which digital image processing plays a key role. A major effort has been spent on the missions on which my research activity has a substantial impact; in particular, for these missions, this chapter analyses and evaluates the state-of-the-art approaches and algorithms in depth. Chapter 3 analyses and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to use FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more susceptible to Single Event Upsets (SEUs, i.e., transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter describes in detail the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main SEU fault-mitigation techniques that are mandatory for employing space-grade FPGAs in actual missions. Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications. The basic idea behind this library is to offer designers a set of validated hardware components able to greatly speed up the basic image processing operations commonly used in an image processing chain. In other words, these components can be directly used as elementary building blocks to easily create a complex image processing system, without wasting time on debugging and validation. This library groups the proposed hardware accelerators into IP-core families. The components contained in the same family share the same functionality and input/output interface. This harmonization of the I/O interface makes it possible to substitute components of the same family inside a complex image processing system without modifying the system communication infrastructure. In addition to the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify and validate the proposed high-performance image processing hardware accelerators.
This methodology involves the use of different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation. Chapter 5 presents the proposed complex image processing systems. In particular, it uses a set of real case studies, associated with the most recent space agency needs, to show how the hardware accelerator components can be assembled to build a complex image processing system. In addition to the hardware accelerators contained in the library, the described systems embed innovative ad-hoc hardware components and software routines that provide high-performance and self-adaptable image processing functionality. To prove the benefits of the proposed methodology, each case study concludes with a comparison against current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions.
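    The library's design principle, that components of the same IP-core family share functionality and an identical I/O interface so they can be swapped without touching the rest of the chain, can be illustrated in software terms. The sketch below is a conceptual Python analogy only (the actual components are hardware IP cores, not Python classes), and all names are hypothetical.

```python
# Conceptual analogy of the IP-core family idea: members of a family expose
# the same interface, so a processing chain can swap them freely.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

class FilterCore:
    """Common interface of a hypothetical 'filter' family."""
    def process(self, image: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class MedianFilterCore(FilterCore):
    def process(self, image):
        return median_filter(image, size=3)

class GaussianFilterCore(FilterCore):
    def process(self, image):
        return gaussian_filter(image, sigma=1.0)

def run_chain(image, stages):
    """The chain does not care which family member it is given."""
    for stage in stages:
        image = stage.process(image)
    return image

# Swapping one family member for another requires no change to run_chain:
# run_chain(img, [MedianFilterCore()]) vs. run_chain(img, [GaussianFilterCore()])
```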

    Adaptive Localization and Mapping for Planetary Rovers

    Future rovers will be equipped with substantial onboard autonomy as space agencies and industry proceed with mission studies and technology development in preparation for the next planetary exploration missions. Simultaneous Localization and Mapping (SLAM) is a fundamental part of autonomous capabilities and has close connections to robot perception, planning and control. SLAM positively affects rover operations and mission success. The SLAM community has made great progress in the last decade, enabling real-world solutions in terrestrial applications, and is nowadays addressing important challenges in robust performance, scalability, high-level understanding, resource awareness and domain adaptation. In this thesis, an adaptive SLAM system is proposed in order to improve rover navigation performance and adapt to the navigation demands. This research presents a novel localization and mapping solution following a bottom-up approach. It starts with an Attitude and Heading Reference System (AHRS), continues with a 3D odometry dead reckoning solution, and builds up to a full graph optimization scheme which uses visual odometry and takes into account rover traction performance, bringing scalability to modern SLAM solutions. A design procedure is presented for incorporating inertial sensors into the AHRS. The procedure follows three steps: error characterization, model derivation and filter design. A complete kinematic model of the rover locomotion subsystem is developed in order to improve the wheel odometry solution. The parametric model predicts delta poses by solving a system of equations with weighted least squares. In addition, an odometry error model is learned using Gaussian processes (GPs) in order to predict the non-systematic errors induced by poor traction of the rover on the terrain. The odometry error model complements the parametric solution by adding an estimate of the error. The gained information serves to adapt the localization and mapping solution to the current navigation demands (domain adaptation). The adaptivity strategy is designed to adjust the visual odometry computational load (active perception) and to influence the optimization back-end by including highly informative keyframes in the graph (adaptive information gain). Following this strategy, the solution adapts to the navigation demands, providing an adaptive SLAM system driven by the navigation performance and by the conditions of the interaction with the terrain. The proposed methodology is experimentally verified on a representative planetary rover under realistic field test scenarios. This thesis introduces a modern SLAM system which adapts the estimated pose and map to the predicted error. The system maintains accuracy with fewer nodes, taking the best of both wheel and visual methods in a consistent graph-based smoothing approach.
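    The GP odometry error model described above can be sketched with an off-the-shelf Gaussian process regressor: learn a mapping from driving conditions to observed per-step odometry error, then use the prediction (and its uncertainty) to inflate the odometry covariance. The feature choice (commanded speed, terrain inclination), the training values and all names below are illustrative assumptions, not the thesis's exact formulation.

```python
# Hedged sketch of a GP odometry error model using scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training data: features logged while ground truth was available.
# Columns: commanded speed [m/s], terrain inclination [deg].
X_train = np.array([[0.05, 2.0], [0.05, 10.0], [0.10, 5.0], [0.10, 15.0]])
y_train = np.array([0.002, 0.015, 0.006, 0.030])   # per-step position error [m]

kernel = RBF(length_scale=[0.05, 5.0]) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# At run time: predict the expected error and its uncertainty for the
# current driving conditions; use it to weight/inflate the odometry covariance.
X_now = np.array([[0.08, 12.0]])
err_mean, err_std = gp.predict(X_now, return_std=True)
print(f"predicted odometry error: {err_mean[0]:.4f} m (+/- {err_std[0]:.4f})")
```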