
    A CNN-based fusion method for feature extraction from sentinel data

    Sensitivity to weather conditions, and especially to clouds, is a severe limiting factor for the use of optical remote sensing in Earth monitoring applications. A possible alternative is to rely on weather-insensitive synthetic aperture radar (SAR) images. In many real-world applications, critical decisions are made based on informative optical or radar features related to items such as water, vegetation or soil. Under cloudy conditions, however, optical-based features are not available, and they are commonly reconstructed through linear interpolation between data available at temporally close time instants. In this work, we propose to estimate missing optical features through data fusion and deep learning. Several sources of information are taken into account (optical sequences, SAR sequences, digital elevation model) so as to exploit both temporal and cross-sensor dependencies. Based on these data and a tiny cloud-free fraction of the target image, a compact convolutional neural network (CNN) is trained to perform the desired estimation. To validate the proposed approach, we focus on the estimation of the normalized difference vegetation index (NDVI), using coupled Sentinel-1 and Sentinel-2 time series acquired over an agricultural region of Burkina Faso from May to November 2016. Several fusion schemes are considered, causal and non-causal, single-sensor or joint-sensor, corresponding to different operating conditions. Experimental results are very promising, showing a significant gain over baseline methods according to all performance indicators.
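
    As a rough illustration of the joint-sensor setup described in this abstract, the sketch below stacks SAR, optical and DEM channels and regresses NDVI with a compact CNN, masking the loss to the cloud-free fraction of the target image. The channel counts, layer widths and names (NDVIFusionCNN, masked_mse) are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of the joint-sensor fusion idea; all sizes and names are assumptions.
import torch
import torch.nn as nn

class NDVIFusionCNN(nn.Module):
    """Compact CNN regressing NDVI from stacked SAR, optical and DEM channels."""
    def __init__(self, sar_steps=4, optical_steps=2, use_dem=True):
        super().__init__()
        # VV + VH backscatter per SAR date, a few optical dates, optional DEM channel
        in_ch = 2 * sar_steps + optical_steps + (1 if use_dem else 0)
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # one NDVI value per pixel
        )

    def forward(self, x):
        return self.net(x)

def masked_mse(pred, target, cloud_free_mask):
    """Train only on the cloud-free fraction of the target image by masking the loss."""
    diff = (pred - target) ** 2 * cloud_free_mask
    return diff.sum() / cloud_free_mask.sum().clamp(min=1)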

    Multi-modal Medical Neurological Image Fusion using Wavelet Pooled Edge Preserving Autoencoder

    Medical image fusion integrates the complementary diagnostic information of the source image modalities for improved visualization and analysis of underlying anomalies. Recently, deep learning-based models have outperformed conventional fusion methods by performing feature extraction, feature selection, and feature fusion simultaneously. However, most existing convolutional neural network (CNN) architectures use conventional pooling or strided-convolution strategies to downsample the feature maps. This causes blurring or loss of the important diagnostic information and edge details available in the source images and dilutes the efficacy of the feature extraction process. Therefore, this paper presents an end-to-end unsupervised fusion model for multimodal medical images based on an edge-preserving dense autoencoder network. In the proposed model, feature extraction is improved by using wavelet decomposition-based attention pooling of feature maps. This helps preserve the fine edge detail present in both source images and enhances the visual perception of the fused images. Further, the proposed model is trained on a variety of medical image pairs, which helps in capturing the intensity distributions of the source images and preserves the diagnostic information effectively. Substantial experiments demonstrate that the proposed method provides improved visual and quantitative results compared to other state-of-the-art fusion methods.
    Comment: 8 pages, 5 figures, 6 tables
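
    The sketch below illustrates the general idea of wavelet-based pooling as a drop-in replacement for max or strided-convolution downsampling: a single-level Haar decomposition keeps the approximation band and gates the detail bands back in with a learned attention, so edge information is not simply discarded. The gating scheme and the class name HaarWaveletPool are assumptions for illustration, not the paper's exact formulation.

# Hedged sketch of wavelet-based pooling; the attention gating is an assumption.
import torch
import torch.nn as nn

class HaarWaveletPool(nn.Module):
    """Downsample by 2x via a single-level Haar decomposition and blend the detail
    bands back in with a learned per-channel attention to preserve edges."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(3 * channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 2   # approximation (low-low) band
        lh = (a - b + c - d) / 2   # horizontal detail
        hl = (a + b - c - d) / 2   # vertical detail
        hh = (a - b - c + d) / 2   # diagonal detail
        gate = self.attn(torch.cat([lh, hl, hh], dim=1))  # per-pixel, per-channel weights
        return ll + gate * (lh + hl + hh)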

    A Survey of Iris Recognition System

    Get PDF
    The uniqueness of iris texture makes it one of the most reliable physiological biometric traits compared to other biometric traits. In this paper, we investigate different levels of fusion for iris images. Although a number of iris recognition methods have been proposed in recent years, most of them focus on feature extraction and classification; fewer methods address the information fusion of iris images. Fusion is believed to produce better discrimination power in the feature space, so we conduct an analysis to investigate which fusion level produces the best result for an iris recognition system. Experimental analysis using the CASIA dataset shows that feature-level fusion produces 99% recognition accuracy. The verification analysis shows the best result is GAR = 95% at FRR = 0.1.
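
    A minimal sketch of feature-level fusion as evaluated here: normalize the feature vectors extracted from the two sources, concatenate them, and compare templates with a simple similarity score. The helper names (fuse_features, match_score) and the cosine-similarity matcher are illustrative assumptions, not the paper's exact pipeline.

# Illustrative sketch of feature-level fusion; extractor and matcher are placeholders.
import numpy as np

def fuse_features(feat_a, feat_b):
    """Feature-level fusion: L2-normalize each feature vector, then concatenate."""
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([a, b])

def match_score(probe, gallery):
    """Cosine similarity between a fused probe template and a gallery template."""
    denom = np.linalg.norm(probe) * np.linalg.norm(gallery) + 1e-12
    return float(probe @ gallery / denom)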