
    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor that fuses multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
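    As a rough illustration of the third case study, the sketch below fuses two co-registered SAR images in the wavelet domain. It is only a minimal stand-in, assuming PyWavelets is available: the multiscale Kalman filter stage is omitted and replaced by a simple max-absolute-coefficient rule for the detail bands, and the function name, wavelet choice and decomposition depth are assumptions rather than the survey's actual processor.

    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
        """Fuse two same-sized images by merging their wavelet coefficients."""
        ca = pywt.wavedec2(img_a, wavelet, level=levels)
        cb = pywt.wavedec2(img_b, wavelet, level=levels)
        fused = [0.5 * (ca[0] + cb[0])]                      # average the approximation band
        for da, db in zip(ca[1:], cb[1:]):                   # detail bands: keep the stronger response
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)

    # Hypothetical usage with two co-registered acquisitions of the same scene:
    # fused = wavelet_fuse(sar_image_1, sar_image_2)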

    A robust nonlinear scale space change detection approach for SAR images

    In this paper, we propose a change detection approach based on nonlinear scale space analysis of change images for robust detection of various changes incurred by natural phenomena and/or human activities in Synthetic Aperture Radar (SAR) images using Maximally Stable Extremal Regions (MSERs). To achieve this, a variant of the log-ratio image of multitemporal images is calculated and then processed with Feature Preserving Despeckling (FPD) to generate nonlinear scale space images exhibiting different trade-offs between speckle reduction and shape detail preservation. MSERs of each scale space image are found and then combined through a decision-level fusion strategy, namely "selective scale fusion" (SSF), in which the contrast and boundary curvature of each MSER are considered. The performance of the proposed method is evaluated using real multitemporal high-resolution TerraSAR-X images and synthetically generated multitemporal images composed of shapes with several orientations, sizes, and backscatter amplitude levels representing a variety of possible signatures of change. One of the main outcomes of this approach is that objects with different sizes and levels of contrast with their surroundings appear as stable regions in different scale space images; the fusion of results across scale space images therefore yields good overall performance.
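    The pipeline can be pictured with the minimal sketch below. It makes several simplifying assumptions: a Gaussian blur stands in for Feature Preserving Despeckling, OpenCV's stock MSER detector is used, and the contrast/curvature-based selective scale fusion is replaced by a plain union of the detected region masks; all function and variable names are illustrative only.

    import cv2
    import numpy as np

    def change_mask(sar_t1, sar_t2, scales=(1.0, 2.0, 4.0)):
        """Binary change mask from two co-registered SAR intensity images."""
        eps = 1e-6
        log_ratio = np.log((sar_t2 + eps) / (sar_t1 + eps))               # change image
        lr8 = cv2.normalize(log_ratio, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        mser = cv2.MSER_create()
        mask = np.zeros_like(lr8)
        for sigma in scales:                                               # scale space images
            smoothed = cv2.GaussianBlur(lr8, (0, 0), sigma)
            regions, _ = mser.detectRegions(smoothed)
            for pts in regions:                                            # mark stable regions
                mask[pts[:, 1], pts[:, 0]] = 255
        return mask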

    Combining multiple resolutions into hierarchical representations for kernel-based image classification

    The geographic object-based image analysis (GEOBIA) framework has gained increasing interest recently. Following this popular paradigm, we propose a novel multiscale classification approach operating on a hierarchical image representation built from two images at different resolutions. They capture the same scene with different sensors and are naturally fused together through the hierarchical representation, where coarser levels are built from a Low Spatial Resolution (LSR) or Medium Spatial Resolution (MSR) image while finer levels are generated from a High Spatial Resolution (HSR) or Very High Spatial Resolution (VHSR) image. Such a representation allows one to benefit from context information thanks to the coarser levels, and from information on the spatial arrangement of subregions thanks to the finer levels. Two dedicated structured kernels are then used to perform machine learning directly on the constructed hierarchical representation. This strategy overcomes the limits of conventional GEOBIA classification procedures that can handle only one or very few pre-selected scales. Experiments run on an urban classification task show that the proposed approach can significantly improve classification accuracy w.r.t. conventional approaches working on a single scale. Comment: International Conference on Geographic Object-Based Image Analysis (GEOBIA 2016), University of Twente in Enschede, The Netherlands
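    A toy sketch of the hierarchical representation (not of the structured kernels) could look as follows, assuming grayscale inputs, a recent scikit-image, and arbitrary superpixel counts: the coarse level is segmented from the LSR/MSR image resampled to the fine grid, the fine level from the HSR/VHSR image, and each fine region is attached to the coarse region it mostly overlaps.

    import numpy as np
    from skimage.segmentation import slic
    from skimage.transform import resize

    def two_level_hierarchy(coarse_img, fine_img):
        """Return coarse/fine superpixel maps plus a child-to-parent mapping."""
        coarse_up = resize(coarse_img, fine_img.shape[:2])                # bring both to the fine grid
        coarse_seg = slic(coarse_up, n_segments=50, channel_axis=None)    # coarse level (context)
        fine_seg = slic(fine_img, n_segments=500, channel_axis=None)      # fine level (detail)
        parent = {}
        for label in np.unique(fine_seg):                                 # attach each fine region
            overlap = coarse_seg[fine_seg == label]                       # to its dominant coarse region
            parent[int(label)] = int(np.bincount(overlap).argmax())
        return coarse_seg, fine_seg, parent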

    Visual Saliency Based on Multiscale Deep Features

    Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this CVPR 2015 paper, we discover that a high-quality visual saliency model can be trained with multiscale features extracted using a popular deep learning architecture, convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for extracting features at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-measure by 5.0% and 13.2% respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7% and 35.1% respectively on these two datasets. Comment: To appear in CVPR 2015
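    A skeletal PyTorch stand-in for this kind of architecture is sketched below: a shared CNN backbone extracts features at three scales (the region, its neighborhood, and the full image), and fully connected layers map the concatenated features to a saliency score. The VGG16 backbone, feature dimensions, and pooling are assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class MultiscaleSaliency(nn.Module):
        def __init__(self, feat_dim=512):
            super().__init__()
            self.backbone = models.vgg16(weights=None).features   # shared CNN feature extractor
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Sequential(                               # FC layers on top of 3-scale features
                nn.Linear(3 * feat_dim, 300), nn.ReLU(),
                nn.Linear(300, 1), nn.Sigmoid(),
            )

        def forward(self, region, neighborhood, full_image):
            feats = [self.pool(self.backbone(x)).flatten(1)        # one feature vector per scale
                     for x in (region, neighborhood, full_image)]
            return self.fc(torch.cat(feats, dim=1))                # saliency score in [0, 1]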

    Multi-Sensor Image Fusion Based on Moment Calculation

    An image fusion method based on salient features is proposed in this paper. In this work, we concentrate on salient features of the image for fusion in order to preserve all relevant information contained in the input images, while enhancing the contrast of the fused image and suppressing noise to a maximum extent. In our system, we first apply a mask to the two input images in order to conserve the high-frequency information along with some low-frequency information and stifle noise to a maximum extent. Thereafter, to identify salient features from the source images, a local moment is computed in the neighborhood of each coefficient. Finally, a decision map is generated based on the local moment in order to obtain the fused image. To verify our proposed algorithm, we tested it on 120 sensor image pairs collected from the Manchester University UK database. The experimental results show that the proposed method can provide a superior fused image in terms of several quantitative fusion evaluation indices. Comment: 5 pages, International Conference
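    A minimal sketch of the moment-driven decision map is given below, assuming the mask is a Laplacian high-pass filter and the salience measure is the local second moment (variance) computed in a small window around each coefficient; the window size, filters and names are assumptions.

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def moment_fusion(img_a, img_b, win=5):
        """Pick, per pixel, the input whose high-pass response has the larger local moment."""
        def local_moment(img):
            hp = laplace(img.astype(float))                    # high-frequency coefficients
            mean = uniform_filter(hp, win)
            return uniform_filter(hp * hp, win) - mean ** 2    # local variance around each coefficient
        decision = local_moment(img_a) >= local_moment(img_b)  # decision map
        return np.where(decision, img_a, img_b)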
