
    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing, and synergistic combination of information provided by various sensors, or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor for fusing multifrequency, multipolarization, and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
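The wavelet-based processor in the third case can be illustrated with a toy example. The sketch below is a generic wavelet fusion scheme, not the paper's processor: a single-level Haar transform implemented with NumPy, an averaged approximation band, and a maximum-absolute-coefficient rule for the detail bands (the multiscale Kalman filter is omitted).

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform; x must have even height and width.
    Returns the (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0          # row-wise averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0          # row-wise differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def fuse(img_a, img_b):
    """Fuse two co-registered images: average the approximation band,
    keep the larger-magnitude coefficient in each detail band."""
    A, B = haar_dwt2(img_a), haar_dwt2(img_b)
    ll = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(A[1:], B[1:])]
    return haar_idwt2(ll, *details)
```

The max-magnitude rule keeps the strongest edge response from either input at each scale and orientation, which is why wavelet fusion preserves detail better than pixel averaging.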

    Multitemporal Very High Resolution from Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest

    In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.

    Accurate and automatic NOAA-AVHRR image navigation using a global contour matching approach

    The problem of precise and automatic AVHRR image navigation is tractable in theory, but has proved to be somewhat difficult in practice. The authors' work has been motivated by the need for a fully automatic and operational navigation system capable of geo-referencing NOAA-AVHRR images with high accuracy and without operator supervision. The proposed method is based on the simultaneous use of an orbital model and a contour matching approach. This last process, relying on an affine transformation model, is used to correct the errors caused by inaccuracies in orbit modeling, nonzero values for the spacecraft's roll, pitch, and yaw, inaccuracies in satellite positioning, and failures in the satellite's internal clock. The automatic global contour matching process is summarized as follows: i) estimation of the gradient energy map (edges) in the sensed image and detection of the cloudless (reliable) areas in this map; ii) initialization of the affine model parameters by minimizing the Euclidean distance between the reference and sensed image objects; iii) simultaneous optimization of all reference image contours on the sensed image by energy minimization in the domain of the global transformation parameters. The process is iterated in a hierarchical way, reducing the parameter search space at each iteration. The proposed image navigation algorithm has proved capable of geo-referencing a satellite image to within 1 pixel.
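Step ii) fits a 2-D affine model to corresponding image objects. A minimal least-squares sketch of such a fit, assuming matched point pairs are available (the paper minimizes a Euclidean distance between image objects rather than solving a direct point-to-point system):

```python
import numpy as np

def fit_affine(ref_pts, sen_pts):
    """Least-squares 2-D affine transform mapping sensed -> reference
    coordinates. ref_pts, sen_pts: (N, 2) arrays of matched points, N >= 3.
    Returns a (2, 3) matrix M such that ref ~= sen @ M[:, :2].T + M[:, 2]."""
    n = len(sen_pts)
    A = np.hstack([sen_pts, np.ones((n, 1))])   # design matrix rows [x, y, 1]
    # One least-squares solve per output coordinate (both handled at once)
    coef, *_ = np.linalg.lstsq(A, ref_pts, rcond=None)
    return coef.T
```

Six parameters (scale, rotation, shear, and two translations) suffice to absorb the residual attitude and clock errors listed above, which is why an affine model is a common choice for this correction step.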

    Machine Learning and Pattern Recognition Methods for Remote Sensing Image Registration and Fusion

    In the last decade, the remote sensing world has dramatically evolved. New types of sensors, each collecting data with possibly different modalities, have been designed, developed, and deployed. Moreover, new missions have been planned and launched, aimed not only at collecting data on the Earth's surface, but also at acquiring planetary data in support of the study of the whole Solar system. Such a variety of technologies highlights the need for automatic methods able to effectively exploit all the available information. In recent years, a great deal of effort has been put into the design and development of advanced data fusion methods able to extract and make use of all the information available from as many complementary information sources as possible. The goal of this thesis is to present novel machine learning and pattern recognition methodologies designed to support the exploitation of diverse sources of information, such as multisensor, multimodal, or multiresolution imagery. In this context, image registration plays a major role, as it allows bringing two or more digital images into precise alignment for analysis and comparison. Here, image registration is tackled using both feature-based and area-based strategies. In the former case, the features of interest are extracted using a stochastic geometry model based on marked point processes, while, in the latter case, information-theoretic functionals and the domain adaptation capabilities of generative adversarial networks are exploited. In addition, multisensor image registration is also applied in a large-scale scenario by introducing a tiling-based strategy aimed at minimizing the computational burden, which is usually heavy in the multisensor case due to the need for information-theoretic similarity measures.
Moreover, automatic change detection with multiresolution and multimodality imagery is addressed via a novel Markovian framework based on a linear mixture model and on an ad hoc multimodal energy function minimized using graph cuts or belief propagation methods. The statistics of the data at the various spatial scales are modelled through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images, at the finest resolution, representing the data that would have been collected had all the sensors worked at that resolution. All these methodologies have been experimentally evaluated on different datasets, with particular focus on the trade-off between the achievable performance and the demands in terms of computational resources. Moreover, they are compared with state-of-the-art solutions and analyzed in terms of future developments, giving insights into possible future lines of research in this field.
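The area-based strategies above rely on information-theoretic similarity measures between images. One common such functional is mutual information; the sketch below estimates it from a joint grey-level histogram (a generic illustration, not the thesis's implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two co-registered images,
    estimated from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because it measures statistical dependence rather than intensity agreement, mutual information remains a valid alignment score even when the two sensors map the same scene to very different grey levels, which is exactly the multisensor situation described here. Computing it at every candidate offset is what makes multisensor registration computationally heavy, motivating the tiling strategy.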

    Evaluation of space SAR as a land-cover classification

    The multidimensional approach to the mapping of land cover, crops, and forests is reported. Dimensionality is achieved by using data from sensors such as LANDSAT to augment Seasat and Shuttle Imaging Radar (SIR) data, using different image features such as tone and texture, and acquiring multidate data. Seasat, Shuttle Imaging Radar (SIR-A), and LANDSAT data are used both individually and in combination to map land cover in Oklahoma. The results indicate that radar is the best single sensor (72% accuracy) and produces the best sensor combination (97.5% accuracy) for discriminating among five land cover categories. Multidate Seasat data and a single date of LANDSAT coverage are then used in a crop classification study of western Kansas. The highest accuracy for a single channel is achieved using a Seasat scene, which produces a classification accuracy of 67%. Classification accuracy increases to approximately 75% when either a multidate Seasat combination or LANDSAT data in a multisensor combination is used. The tonal and textural elements of SIR-A data are then used both alone and in combination to classify forests into five categories.
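Multisensor combination here amounts to stacking co-registered channels into one feature vector per pixel before classification. A toy sketch with a minimal nearest-centroid classifier (the study's own classifier is not described in this abstract, so both the data and the decision rule below are purely illustrative):

```python
import numpy as np

def stack_channels(*bands):
    """Stack co-registered single-band images into an (H*W, C) feature matrix."""
    return np.stack([b.ravel() for b in bands], axis=1).astype(float)

def nearest_centroid(train_X, train_y, test_X):
    """Assign each test sample to the class with the nearest mean feature vector."""
    classes = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

The accuracy gains reported above (72% for radar alone versus 97.5% for the combination) come from exactly this mechanism: classes that overlap in one sensor's feature space separate once another sensor's channels are appended to the feature vector.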

    Satellite image georegistration from coast-line codification

    This paper presents a contour-based approach for automatic image registration in satellite oceanography. Accurate image georegistration is an essential step to increase the effectiveness of all image processing methods that aggregate information from different sources, i.e., data fusion techniques. In our approach, the image description is based on the main contours extracted from the coastline. Each contour is codified by a modified chain code, and the result is a discrete value sequence. Classical registration techniques were area-based, with registration done in a 2D domain (spatial and/or transformed); this approach is feature-based, with registration done in a 1D domain (discrete sequences). This new technique improves the registration results: it allows the registration of multimodal images, registration when there are occlusions and gaps in the images (e.g., due to clouds), and registration of images with moderate perspective changes. Finally, it has to be pointed out that the proposed contour-matching technique assumes that a reference image, containing the coastlines of the input image's geographical area, is available.
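A Freeman chain code is the standard way to codify a contour as a discrete value sequence (the paper uses a modified chain code; the sketch below shows the plain 8-direction version, plus the first-difference variant often used to gain rotation invariance):

```python
# Freeman 8-connectivity: code -> (drow, dcol), counter-clockwise from east.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
CODE = {d: i for i, d in enumerate(DIRS)}

def chain_code(contour):
    """Encode an ordered list of 8-connected (row, col) points as a chain code."""
    return [CODE[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(contour, contour[1:])]

def difference_code(code):
    """First differences modulo 8: makes the sequence rotation-invariant."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]
```

Once coastlines are reduced to such 1-D sequences, matching becomes a sequence-alignment problem, which is what lets this approach tolerate the cloud gaps and occlusions that defeat 2-D area-based correlation.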

    Satellite image georegistration from coast-line codification

    Martech 2007 International Workshop on Marine Technology, 15-16 November 2007, Vilanova i la Geltrú, Spain. 2 pages, 3 figures.

    Oil spill detection using optical sensors: a multi-temporal approach

    Oil pollution is one of the most destructive consequences of human activities in the marine environment. Oil wastes come from many sources and take decades to be disposed of. Satellite-based remote sensing systems can be implemented as part of a surveillance and monitoring network. In this study, a multi-temporal approach to the oil spill detection problem is investigated. Change Detection (CD) analysis was applied to MODIS/Terra and Aqua and OLI/Landsat 8 images of several reported oil spill events, characterized by different geographic locations, sea conditions, and sources and extents of the spills. Toward the development of an automatic detection algorithm, a Change Vector Analysis (CVA) technique was implemented to carry out the comparison between the current image of the area of interest and a dataset of reference images, statistically analyzed to reduce the sea spectral variability between different dates. The proposed approach highlights the capabilities of optical sensors in detecting oil spills at sea. The effectiveness of different sensors' resolutions in detecting spills of different sizes, and the relevance of the sensors' revisit time for tracking and monitoring the evolution of the event, are also investigated.
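Change Vector Analysis compares co-registered multiband acquisitions through the magnitude of the per-pixel spectral difference vector. A minimal sketch with a simple mean-plus-k-sigma threshold (a hypothetical decision rule, not the statistically derived one used in the study):

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2, k=3.0):
    """Per-pixel spectral change between two co-registered multiband images
    of shape (H, W, B). Returns the change magnitude and a boolean mask
    flagging pixels whose magnitude exceeds mean + k * std."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    mag = np.linalg.norm(diff, axis=-1)       # length of the change vector
    thresh = mag.mean() + k * mag.std()
    return mag, mag > thresh
```

In practice the reference term would be the statistically aggregated image stack mentioned above rather than a single earlier acquisition, precisely to keep normal sea-surface spectral variability below the decision threshold.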