
    High-resolution SAR images for fire susceptibility estimation in urban forestry

    We present an adaptive system for the automatic assessment of both physical and anthropic fire impact factors in periurban forests. The aim is to provide an integrated methodology exploiting a complex data structure built upon a multi-resolution grid that gathers historical land-exploitation and meteorological data, records of human habits, suitably segmented and interpreted high-resolution X-SAR images, and several other information sources. The contribution and novelty of the model rely mainly on the definition of a learning schema that lifts different factors and aspects of fire causes, including physical, social and behavioural ones, into the design of a fire susceptibility map for a specific urban forest. The outcome is an integrated geospatial database providing an infrastructure that merges cartography, heterogeneous data and complex analysis, thus establishing a digital environment where users and tools are interactively connected in an efficient and flexible way.
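    The learning schema described above can be sketched as a per-cell classifier over stacked factor grids. The sketch below is a toy stand-in, not the paper's actual schema: the layer choice and the simple logistic model are illustrative assumptions.

```python
import numpy as np

def fit_susceptibility(layers, fired, steps=500, lr=0.1):
    """Toy per-cell logistic model: stack co-registered factor grids
    (e.g. land use, rainfall, human-activity density, SAR-derived
    vegetation class) into features and fit them against historical
    fire occurrence, yielding a susceptibility score per grid cell.
    Layer names and the logistic form are illustrative assumptions."""
    X = np.stack([l.ravel() for l in layers], axis=1)
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)         # normalize features
    X = np.hstack([X, np.ones((X.shape[0], 1))])    # bias term
    y = fired.ravel().astype(float)
    w = np.zeros(X.shape[1])
    for _ in range(steps):                          # gradient ascent on log-likelihood
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    scores = 1.0 / (1.0 + np.exp(-X @ w))
    return scores.reshape(layers[0].shape)          # susceptibility map
```

Cells resembling historical fire locations then receive higher scores, which is the basic property a susceptibility map must have.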

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, such as computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimization of DL systems. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Application of DInSAR-GPS optimization for derivation of fine-scale surface motion maps of Southern California

    A method based on random field theory and the Gibbs-Markov random field equivalence, within a Bayesian statistical framework, is used to derive 3-D surface motion maps from sparse Global Positioning System (GPS) measurements and a differential interferometric synthetic aperture radar (DInSAR) interferogram in the southern California region. The minimization of the Gibbs energy function is performed analytically, which is possible when neighboring pixels are considered independent. The problem is well posed and the solution is unique, stable, and not biased by the continuity condition. The technique produces a 3-D field containing estimates of surface motion on the spatial scale of the DInSAR image, over a given time period, complete with error estimates. Significant improvement in the accuracy of the vertical component and moderate improvement in the accuracy of the horizontal components of velocity are achieved in comparison with the GPS data alone. The method can be expanded to account for other available data sets, such as additional interferograms, lidar, or leveling data, in order to achieve even higher accuracy.
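    When neighboring pixels are independent, the per-pixel minimization reduces to a small weighted least-squares problem that can be solved in closed form. The sketch below illustrates that idea under simplifying assumptions (Gaussian errors, one LOS observation per pixel, a GPS-derived 3-D prior already interpolated to the pixel); it is not the paper's exact formulation.

```python
import numpy as np

def fuse_pixel(v_gps, sigma_gps, d_los, sigma_los, los):
    """Analytic per-pixel estimate of 3-D motion combining an
    interpolated GPS prior (v_gps, per-component sigmas) with a
    single DInSAR line-of-sight observation d_los along unit
    vector `los`. A minimal sketch: the Gibbs energy collapses to
    weighted least squares when pixels are treated independently."""
    # Stack observations: 3 GPS components + 1 LOS projection
    A = np.vstack([np.eye(3), los[None, :]])
    b = np.concatenate([v_gps, [d_los]])
    w = np.concatenate([1.0 / sigma_gps**2, [1.0 / sigma_los**2]])
    N = A.T @ (w[:, None] * A)             # weighted normal matrix
    v = np.linalg.solve(N, A.T @ (w * b))  # posterior mean (3-D motion)
    cov = np.linalg.inv(N)                 # error estimate
    return v, cov
```

A precise LOS observation mostly corrects the vertical component while the horizontal components stay close to the GPS prior, matching the accuracy behavior reported in the abstract.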

    Coherency Matrix Decomposition-Based Polarimetric Persistent Scatterer Interferometry

    © 2019 IEEE. The rationale of polarimetric optimization techniques is to enhance the phase quality of interferograms by adequately combining the different polarization channels available to produce an improved one. Different approaches have been proposed for polarimetric persistent scatterer interferometry (PolPSI). They range from the simple and computationally efficient BEST, where, for each pixel, the polarimetric channel with the best response in terms of phase quality is selected, to those with a high computational burden like the equal scattering mechanism (ESM) and the suboptimum scattering mechanism (SOM). BEST is fast and simple, but it does not fully exploit the potential of polarimetry. On the other hand, ESM explores the whole space of solutions and finds the optimal one, but with a very high computational burden. A new PolPSI algorithm, named coherency matrix decomposition-based PolPSI (CMD-PolPSI), is proposed to achieve a compromise between phase optimization and computational cost. Its core idea is to use the polarimetric synthetic aperture radar (PolSAR) coherency matrix decomposition to determine the optimal polarization channel for each pixel. Three PolSAR image sets, of both full-polarization (Barcelona) and dual-polarization (Murcia and Mexico City) data, are used to evaluate the performance of CMD-PolPSI. The results show that CMD-PolPSI yields better optimization results than the BEST method using either the amplitude dispersion D_A or the temporal mean coherence as phase quality metric. Compared with the ESM algorithm, CMD-PolPSI is 255 times faster, but its performance is not optimal. The influence of the number of available polarization channels and of the pixel resolution on CMD-PolPSI performance is also discussed.
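    The decomposition step at the heart of CMD-PolPSI can be sketched as an eigendecomposition of the per-pixel 3x3 coherency matrix, with the dominant eigenvector taken as the scattering mechanism used to build the optimized channel. This is a minimal illustration of the decomposition idea; the paper's exact selection rule may differ.

```python
import numpy as np

def dominant_mechanism(T):
    """Eigendecomposition of a 3x3 Hermitian PolSAR coherency
    matrix T: the eigenvector of the largest eigenvalue is the
    dominant scattering mechanism for this pixel (sketch of the
    CMD idea, not the paper's full algorithm)."""
    w, V = np.linalg.eigh(T)   # eigenvalues in ascending order
    return V[:, -1]            # dominant mechanism (unit vector)

def optimized_channel(k, u):
    """Project the scattering vector k of an image onto the
    per-pixel mechanism u to form a single optimized channel."""
    return np.vdot(u, k)
```

Because only one eigendecomposition per pixel is needed, instead of a search over the full space of mechanisms as in ESM, this kind of approach trades a small loss in optimality for a large speed-up, consistent with the compromise described above.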

    Post-failure evolution analysis of a rainfall-triggered landslide by multi-temporal interferometry SAR approaches integrated with geotechnical analysis

    Persistent Scatterer Interferometry (PSI) is one of the most powerful techniques for monitoring Earth-surface deformation processes, especially long-term evolution phenomena. In this work, a dataset of 34 TerraSAR-X StripMap images (October 2013–October 2014) has been processed by two PSI techniques - Coherent Pixel Technique-Temporal Sublook Coherence (CPT-TSC) and Small Baseline Subset (SBAS) - in order to study the evolution of a slow-moving landslide that occurred on February 23, 2012 in the Papanice hamlet (Crotone municipality, southern Italy) and was induced by a significant rainfall event (185 mm in three days). The mass movement caused structural damage (building collapses) and the destruction of roads and utility lines (gas, water and electricity). The results showed displacement rates (30–40 mm/yr along the satellite's Line of Sight - LOS) analogous to those of the pre-failure phase (2008–2010) analyzed in previous works. Both approaches detected the landslide-affected area; however, the higher density of targets identified by CPT-TSC made it possible to analyze the slope behavior in detail in order to design possible mitigation interventions. To this aim, a slope stability analysis has been carried out, comparing groundwater oscillations with the displacement time series. Hence, the crucial role of the interaction between rainfall and groundwater level in triggering the landslide has been inferred. In conclusion, we showed that the integration of geotechnical and remote sensing approaches is best practice for supporting stakeholders in designing remedial works.
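    One generic way to examine the groundwater-versus-displacement interaction described above is a lagged-correlation scan between the two time series. The sketch below is an illustrative assumption, not the authors' geotechnical analysis: uniform sampling, a simple Pearson metric, and the variable names are all assumed.

```python
import numpy as np

def best_lag(displacement, groundwater, max_lag=30):
    """Find the lag (in samples) at which the groundwater-level
    series best correlates with the PSI displacement series.
    Illustrative sketch: assumes both series are uniformly sampled
    on the same time axis and uses plain Pearson correlation."""
    best, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        a = displacement[lag:]                      # motion, shifted by `lag`
        b = groundwater[:len(groundwater) - lag]    # earlier groundwater
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best, best_r
```

A clear positive peak at a nonzero lag would support the inferred causal chain from rainfall through groundwater rise to slope motion.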

    System Concepts for Bi- and Multi-Static SAR Missions

    The performance and capabilities of bi- and multistatic spaceborne synthetic aperture radar (SAR) are analyzed. Such systems can be optimized for a broad range of applications like frequent monitoring, wide-swath imaging, single-pass cross-track interferometry, along-track interferometry, resolution enhancement, or radar tomography. Further potential arises from digital beamforming on receive, which allows gathering additional information about the direction of the scattered radar echoes. This directional information can be used to suppress interferences, to improve geometric and radiometric resolution, or to increase the unambiguous swath width. Furthermore, a coherent combination of multiple receiver signals allows for a suppression of azimuth ambiguities. For this, a reconstruction algorithm is derived that enables a recovery of the unambiguous Doppler spectrum even in the case of non-optimum receiver aperture displacements leading to a non-uniform sampling of the SAR signal. This algorithm also has great potential for systems relying on the displaced phase center (DPC) technique, like the high-resolution wide-swath (HRWS) SAR or the split-antenna approach in the TerraSAR-X and RADARSAT-2 satellites.
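    The reconstruction idea can be sketched per Doppler bin: each receiver sees a superposition of aliased Doppler bands with phase ramps set by its along-track offset, and inverting that channel matrix recovers the unambiguous spectrum even for non-uniform offsets, as long as the matrix stays well conditioned. The phase model below (offsets, PRF, platform velocity) is a simplified assumption, not the paper's exact derivation.

```python
import numpy as np

def reconstruction_filters(offsets, prf, v, n_ch):
    """Per-Doppler-bin matrix inversion for multichannel azimuth
    reconstruction (simplified sketch). `offsets` are along-track
    receiver displacements, `prf` the pulse repetition frequency,
    `v` the platform velocity, `n_ch` the number of channels /
    aliased bands. Returns a function mapping Doppler frequency f
    to the n_ch x n_ch bank of reconstruction weights."""
    def H(f):
        # H[j, m]: response of channel j (offset dx) to aliased band m
        return np.array([[np.exp(-1j * np.pi * (f + m * prf) * dx / v)
                          for m in range(n_ch)] for dx in offsets])
    return lambda f: np.linalg.inv(H(f))   # unambiguous-spectrum weights
```

For uniform offsets this reduces to the classical DPC case; the point of the algorithm is that the inversion still works for non-uniform sampling.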

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse Earth observation data. As these data show different features regarding resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery such as provided by TerraSAR-X can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. Comment: This is the pre-acceptance version; to read the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
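    Once correspondences between the SAR and optical images are matched, the 3D point is obtained by forward intersection. The sketch below reduces that step to a least-squares intersection of two straight viewing rays; the actual pipeline uses RPC-based sensor geometry, so the ray model here is a simplifying assumption.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares intersection of two viewing rays, each given
    as origin p and direction d (generic forward-intersection step;
    the paper works with rational polynomial coefficients instead
    of straight rays). Solves for the 3-D point minimizing the sum
    of squared distances to both rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in ((p1, d1), (p2, d2)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)        # closest point to both rays
```

With noisy matches the residual distance to each ray gives a per-point quality measure, which is one way accuracy figures like the 2 m median can be assessed.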

    Bayesian off-line detection of multiple change-points corrupted by multiplicative noise : application to SAR image edge detection

    This paper addresses the problem of Bayesian off-line change-point detection in synthetic aperture radar images. The minimum mean square error and maximum a posteriori estimators of the change-point positions are studied. Neither estimator can be implemented directly because of optimization or integration problems. A practical implementation using Markov chain Monte Carlo methods is proposed. This implementation requires a priori knowledge of the so-called hyperparameters. A hyperparameter estimation procedure is therefore proposed that alleviates the need to know their values. Simulation results on synthetic signals and synthetic aperture radar images are presented.
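    To make the setting concrete, the sketch below computes the maximum-likelihood position of a single change point in a piecewise-constant positive signal under multiplicative speckle noise (modeled as single-look exponential). This is an illustrative simplification: the paper treats multiple change points with priors and hyperparameters via MCMC, whereas the single-change case is small enough to scan exhaustively off-line.

```python
import numpy as np

def map_changepoint(x):
    """Exhaustive single-change-point estimate for a positive,
    piecewise-constant signal in multiplicative exponential noise
    (L=1 speckle), with the two segment means profiled out at their
    ML values. Illustrative sketch of the off-line problem; the
    paper's Bayesian multi-change formulation is richer."""
    n = len(x)
    best, best_ll = None, -np.inf
    for k in range(1, n):                 # change between samples k-1 and k
        m1, m2 = x[:k].mean(), x[k:].mean()
        # exponential log-likelihood with ML segment means plugged in
        ll = -k * np.log(m1) - (n - k) * np.log(m2) - n
        if ll > best_ll:
            best, best_ll = k, ll
    return best
```

Applied along rows or columns of an intensity image, such an estimator marks edges between homogeneous regions, which is the edge-detection application named in the title.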