
    Efficient Image Processing Via Compressive Sensing Of Integrate-And-Fire Neuronal Network Dynamics

    Integrate-and-fire (I&F) neuronal networks are ubiquitous in diverse image processing applications, including image segmentation and visual perception. While conventional I&F network image processing requires the number of nodes composing the network to equal the number of image pixels driving the network, we determine whether I&F dynamics can accurately transmit image information when there are significantly fewer nodes than network input-signal components. Although compressive sensing (CS) theory facilitates the recovery of images from very few samples through linear signal processing, it does not address whether similar signal recovery techniques facilitate reconstruction through measurement of the nonlinear dynamics of an I&F network. In this paper, we present a new framework for recovering sparse inputs of nonlinear neuronal networks via compressive sensing. By recovering both one-dimensional inputs and two-dimensional images resembling natural stimuli, we demonstrate that input information can be well preserved through nonlinear I&F network dynamics even when the number of network-output measurements is significantly smaller than the number of input-signal components. This work suggests an important extension of CS theory, potentially useful for improving the processing of medical or natural images through I&F network dynamics and for understanding the transmission of stimulus information across the visual system.
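
    As a concrete illustration of the compressive-sensing recovery idea (a generic sketch, not the paper's I&F-network method), the following Python snippet recovers a sparse signal from far fewer linear measurements than signal components by solving a LASSO problem with iterative soft-thresholding (ISTA); all dimensions and parameter values are illustrative assumptions.

        # Generic CS recovery sketch: recover sparse x from m << n linear
        # measurements y = A @ x via ISTA for the LASSO. Not the paper's
        # I&F-network pipeline; sizes and the regularizer are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 200, 50, 5                    # signal length, measurements, sparsity

        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
        y = A @ x_true                                 # compressed measurements

        def ista(A, y, lam=0.01, iters=500):
            """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = x - A.T @ (A @ x - y) / L          # gradient step
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
            return x

        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))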

    Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes

    Image analysis using more than one modality (i.e. multi-modal) has been increasingly applied in the field of biomedical imaging. One of the challenges in multi-modal analysis is that there exist multiple schemes for fusing the information from different modalities; such schemes are application-dependent and lack a unified framework to guide their design. In this work we first propose a conceptual architecture for image fusion schemes in supervised biomedical image analysis: fusing at the feature level, fusing at the classifier level, and fusing at the decision-making level. Further, motivated by the recent success of deep learning in natural image analysis, we implement the three image fusion schemes above based on Convolutional Neural Networks (CNNs) with varied structures and combine them into a single framework. The proposed image segmentation framework is capable of analyzing multi-modality images using different fusion schemes simultaneously. The framework is applied to detect the presence of soft tissue sarcoma from the combination of Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET) images. The results show that while all the fusion schemes outperform the single-modality schemes, fusing at the feature level generally achieves the best performance in terms of both accuracy and computational cost, but also suffers from decreased robustness when any image modality contains large errors.
    Comment: Zhe Guo and Xiang Li contribute equally to this work
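
    To make the feature-level fusion scheme concrete, here is a minimal PyTorch sketch (assumed layer sizes, not the authors' exact architecture): each modality gets its own convolutional branch, the resulting feature maps are concatenated, and a shared head produces per-pixel segmentation logits.

        import torch
        import torch.nn as nn

        class FeatureLevelFusionNet(nn.Module):
            """Toy feature-level fusion: one encoder per modality, one shared head."""
            def __init__(self, n_modalities=3, feat=16):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                    )
                    for _ in range(n_modalities)
                ])
                self.head = nn.Sequential(          # operates on concatenated features
                    nn.Conv2d(n_modalities * feat, feat, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(feat, 1, 1),          # 1-channel mask logits
                )

            def forward(self, modalities):          # list of (B,1,H,W) tensors
                feats = [branch(x) for branch, x in zip(self.branches, modalities)]
                return self.head(torch.cat(feats, dim=1))

        mri, ct, pet = (torch.randn(2, 1, 64, 64) for _ in range(3))
        logits = FeatureLevelFusionNet()([mri, ct, pet])   # (2, 1, 64, 64)

    Fusing at the classifier or decision level would instead run a separate head per modality and merge the per-branch predictions, trading some accuracy for robustness to a corrupted modality.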

    Structural Similarity based Anatomical and Functional Brain Imaging Fusion

    Multimodal medical image fusion combines contrasting features from two or more input imaging modalities to represent the fused information in a single image. One of the pivotal clinical applications of medical image fusion is the merging of anatomical and functional modalities for fast diagnosis of malignant tissues. In this paper, we present a novel end-to-end unsupervised learning-based Convolutional Neural Network (CNN) for fusing the high and low frequency components of MRI-PET grayscale image pairs, publicly available from ADNI, using the Structural Similarity Index (SSIM) as the loss function during training. We then apply color coding for the visualization of the fused image by quantifying the contribution of each input image in terms of the partial derivatives of the fused image. We find that our fusion and visualization approach results in better visual perception of the fused image, while also comparing favorably to previous methods under various quantitative assessment metrics.
    Comment: Accepted at MICCAI-MBIA 2019
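
    The core training signal is easy to sketch. Below is a hedged PyTorch example of an SSIM-based fusion loss, using a simplified global SSIM (one window over the whole image) rather than the usual sliding-window version; the symmetric two-term weighting is an illustrative assumption, not necessarily the paper's exact formulation.

        import torch

        def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
            """Simplified SSIM over whole images x, y in [0,1], shape (B,1,H,W)."""
            mu_x, mu_y = x.mean(dim=(2, 3)), y.mean(dim=(2, 3))
            var_x, var_y = x.var(dim=(2, 3)), y.var(dim=(2, 3))
            cov = ((x - mu_x[..., None, None]) * (y - mu_y[..., None, None])).mean(dim=(2, 3))
            return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
                   ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

        def fusion_loss(fused, mri, pet):
            # push the fused image to be structurally similar to both inputs
            return 2.0 - ssim_global(fused, mri).mean() - ssim_global(fused, pet).mean()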

    Image Fusion Based on Shearlets


    Image retrieval based on colour and improved NMI texture features

    This paper proposes an improved method for extracting NMI features. The method first uses Particle Swarm Optimization to optimize two-dimensional maximum between-class variance thresholding (2OTSU). The optimized 2OTSU is then introduced into the Pulse Coupled Neural Network (PCNN) to automatically determine the number of loop iterations, and the improved PCNN is used to extract the NMI features of the image. To address the low accuracy of single-feature retrieval, the paper also proposes a new image retrieval method based on multi-feature fusion. It uses HSV colour features together with texture features, where the texture features are extracted by the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and the improved PCNN. The experimental results show that, compared with similar algorithms, the retrieval accuracy of this method is improved by 13.6% on the Corel-1k dataset, 13.4% on the AT&T dataset, and 17.7% on the FD-XJ dataset. The proposed algorithm therefore offers better retrieval performance and robustness than existing image retrieval algorithms based on multi-feature fusion.
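
    The fusion-by-concatenation idea behind such retrieval pipelines is straightforward to sketch. The Python example below combines an HSV colour histogram with an LBP texture histogram and retrieves by nearest neighbour; the paper's improved PCNN/NMI feature is omitted, and the bin counts and distance metric are illustrative assumptions.

        import numpy as np
        from skimage.color import rgb2hsv
        from skimage.feature import local_binary_pattern

        def describe(rgb):                        # rgb: (H,W,3) float array in [0,1]
            hsv = rgb2hsv(rgb)
            colour, _ = np.histogramdd(hsv.reshape(-1, 3),
                                       bins=(8, 4, 4), range=((0, 1),) * 3)
            gray = rgb.mean(axis=2)
            lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
            texture, _ = np.histogram(lbp, bins=10, range=(0, 10))
            feats = [colour.ravel(), texture.astype(float)]
            # normalize each histogram so colour and texture are weighted equally
            return np.concatenate([f / (f.sum() + 1e-9) for f in feats])

        def retrieve(query, gallery):             # gallery: list of RGB images
            q = describe(query)
            dists = [np.linalg.norm(q - describe(g)) for g in gallery]
            return int(np.argmin(dists))          # index of the best match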

    Infrared and Visible Image Fusion using a Deep Learning Framework

    In recent years, deep learning has become a very active research area and is used in many image processing fields. In this paper, we propose an effective image fusion method that uses a deep learning framework to generate a single image containing all the features of the infrared and visible source images. First, the source images are decomposed into base parts and detail content. The base parts are fused by weighted averaging. For the detail content, we use a deep learning network to extract multi-layer features; using these features, an l_1-norm and weighted-average strategy generates several candidates for the fused detail content, and a max-selection strategy then chooses the final fused detail content. Finally, the fused image is reconstructed by combining the fused base part and detail content. The experimental results demonstrate that our proposed method achieves state-of-the-art performance in both objective assessment and visual quality. The code of our fusion method is available at https://github.com/hli1221/imagefusion_deeplearning
    Comment: 6 pages, 6 figures, 2 tables, ICPR 2018 (accepted)
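
    A simplified sketch of this pipeline is shown below: each source image is decomposed into a base part (low-pass) and detail content (residual), the bases are fused by weighted averaging, and the details by max selection. The deep multi-layer feature weighting of the actual method is replaced here by a plain max-absolute-value rule, and the filter size and base weight are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse(ir, vis, base_size=31, w=0.5):
            """ir, vis: 2-D float arrays in [0,1] of equal shape."""
            base_ir, base_vis = uniform_filter(ir, base_size), uniform_filter(vis, base_size)
            det_ir, det_vis = ir - base_ir, vis - base_vis

            fused_base = w * base_ir + (1 - w) * base_vis        # weighted average
            fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis),
                                    det_ir, det_vis)             # max selection

            return np.clip(fused_base + fused_detail, 0.0, 1.0)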