
    Multi-focus Image Fusion with Sparse Feature Based Pulse Coupled Neural Network

    In order to better extract the focused regions and effectively improve the quality of the fused image, a novel multi-focus image fusion scheme with a sparse-feature-based pulse coupled neural network (PCNN) is proposed. The registered source images are decomposed into principal matrices and sparse matrices by robust principal component analysis (RPCA). The salient features of the sparse matrices construct the sparse feature space of the source images. The sparse features are used to motivate the PCNN neurons. The focused regions of the source images are detected from the output of the PCNN and integrated to construct the final fused image. Experimental results show that the proposed scheme works better in extracting the focused regions and improving the fusion quality compared to other existing fusion methods in both the spatial and transform domains.
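The decompose-then-select pipeline above can be sketched in a few lines of NumPy. This is not the authors' implementation: a crude low-rank/sparse split via truncated SVD stands in for the full RPCA optimization, and a simple per-pixel comparison of sparse-feature magnitude stands in for the PCNN firing map; the function names are illustrative.

```python
import numpy as np

def sparse_component(img, rank=1):
    # Crude stand-in for RPCA: take the low-rank part from a truncated
    # SVD and treat the residual as the sparse component (true RPCA
    # instead solves a convex nuclear-norm/L1 program).
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return img - low_rank

def fuse(img_a, img_b):
    # Salient feature: magnitude of the sparse component. Stand-in for
    # the PCNN decision: per pixel, keep the source whose sparse
    # feature is stronger (focused regions carry more detail energy).
    feat_a = np.abs(sparse_component(img_a))
    feat_b = np.abs(sparse_component(img_b))
    return np.where(feat_a >= feat_b, img_a, img_b)
```

Every fused pixel is copied from one of the two sources, which is the defining property of this kind of spatial selection rule.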

    Spectral feature fusion networks with dual attention for hyperspectral image classification

    Recent progress in spectral classification is largely attributed to the use of convolutional neural networks (CNNs). While a variety of successful architectures have been proposed, they all extract spectral features from various portions of adjacent spectral bands. In this paper, we take a different approach and develop a deep spectral feature fusion method, which extracts both local and interlocal spectral features, thus also capturing the correlations among non-adjacent bands. To our knowledge, this is the first reported deep spectral feature fusion method. Our model is a two-stream architecture, where an intergroup and a groupwise spectral classifier operate in parallel. The interlocal spectral correlation feature extraction is achieved elegantly, by reshaping the input spectral vectors to form the so-called non-adjacent spectral matrices. We introduce the concept of groupwise band convolution to enable efficient extraction of discriminative local features with multiple kernels adapting to the local spectral content. Another important contribution of this work is a novel dual-channel attention mechanism to identify the most informative spectral features. The model is trained in an end-to-end fashion with a joint loss. Experimental results on real data sets demonstrate excellent performance compared to the current state of the art.
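The reshaping trick the abstract describes can be made concrete with a small sketch. Assuming a band count divisible by the number of groups (the function name and grouping choice are illustrative, not taken from the paper), a plain row-major reshape already yields the desired structure: rows keep runs of adjacent bands for the groupwise stream, while columns collect bands that are far apart in the spectrum for the interlocal stream.

```python
import numpy as np

def nonadjacent_matrix(spectrum, n_groups):
    # Reshape a length-B spectral vector into an (n_groups, B // n_groups)
    # matrix. Each ROW holds a run of adjacent bands (local, groupwise
    # features); each COLUMN holds bands spaced B // n_groups apart in
    # the original spectrum, so convolving down the columns mixes
    # non-adjacent bands and captures interlocal correlations.
    b = spectrum.shape[-1]
    assert b % n_groups == 0, "band count must divide evenly into groups"
    return spectrum.reshape(n_groups, b // n_groups)
```

For a 12-band spectrum split into 3 groups, column 0 contains bands 0, 4 and 8, i.e. exactly the non-adjacent triplet a column-wise kernel would correlate.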

    Medical Diagnosis with Multimodal Image Fusion Techniques

    Image fusion is an effective approach used to draw out all the significant information from the source images, which supports experts in evaluation and quick decision making. Multimodal medical image fusion produces a composite fused image from various sources to improve quality and extract complementary information. It is extremely challenging to gather every piece of information needed using just one imaging method; therefore, images obtained from different modalities are fused. Additional clinical information can be gleaned through the fusion of several types of medical image pairings. This study's main aim is to present a thorough review of medical image fusion techniques, covering the steps in the fusion process, the levels of fusion, various imaging modalities with their pros and cons, and the major scientific difficulties encountered in the area of medical image fusion. This paper also summarizes the fusion quality assessment metrics. The various approaches used by image fusion algorithms presently available in the literature are classified into four broad categories: (i) spatial fusion methods, (ii) multiscale decomposition based methods, (iii) neural network based methods, and (iv) fuzzy logic based methods. The benefits and pitfalls of the existing literature are explored, and future insights are suggested. Moreover, this study is anticipated to create a solid platform for the development of better fusion techniques in medical applications.
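The first category in the taxonomy, spatial fusion, is simple enough to show directly. The sketch below implements two textbook pixel-level rules (the function name and rule names are illustrative, not drawn from the survey): a maximum rule that keeps the brighter pixel from either modality, and an averaging rule that blends them equally.

```python
import numpy as np

def spatial_fuse(img_a, img_b, rule="max"):
    # Pixel-level spatial-domain fusion of two co-registered images.
    # "max" keeps the stronger response from either modality;
    # "average" blends them equally (and tends to reduce contrast).
    if rule == "max":
        return np.maximum(img_a, img_b)
    if rule == "average":
        return (img_a + img_b) / 2.0
    raise ValueError(f"unknown rule: {rule}")
```

The contrast loss of the averaging rule is one of the pitfalls that motivates the multiscale-decomposition and neural-network categories discussed in the survey.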

    Deep Learning based 3D Segmentation: A Survey

    3D object segmentation is a fundamental and challenging problem in computer vision, with applications in autonomous driving, robotics, augmented reality and medical image analysis. It has received significant attention from the computer vision, graphics and machine learning communities. Traditionally, 3D segmentation was performed with hand-crafted features and engineered methods, which failed to achieve acceptable accuracy and could not generalize to large-scale data. Driven by their great success in 2D computer vision, deep learning techniques have recently become the tool of choice for 3D segmentation tasks as well. This has led to an influx of a large number of methods in the literature that have been evaluated on different benchmark datasets. This paper provides a comprehensive survey of recent progress in deep learning based 3D segmentation, covering over 150 papers. It summarizes the most commonly used pipelines, discusses their highlights and shortcomings, and analyzes the competitive results of these segmentation methods. Based on the analysis, it also provides promising research directions for the future. Comment: under review at ACM Computing Surveys; 36 pages, 10 tables, 9 figures.

    Steganography Approach to Image Authentication Using Pulse Coupled Neural Network

    This paper introduces a model for the authentication of large-scale images. The crucial element of the proposed model is an optimized Pulse Coupled Neural Network. This neural network generates position matrices based on which the embedding of authentication data into cover images is applied. Emphasis is placed on minimizing the change in stego image entropy. The stego image entropy is then compared with the reference entropy of the cover image. The security of the suggested solution is ensured by neural network weights initialized with a steganographic key and by encryption of the accompanying steganographic data using the AES-256 algorithm. The integrity of the images is verified through the SHA-256 hash function. The integration of the accompanying and authentication data directly into the stego image and the authentication of large images are the main contributions of the work.
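Two of the checks the scheme relies on, histogram entropy comparison and SHA-256 integrity hashing, are standard and easy to sketch with NumPy and the Python standard library. This is a minimal illustration of those two building blocks only, not the paper's embedding algorithm; the function names are illustrative.

```python
import hashlib
import numpy as np

def image_entropy(img):
    # Shannon entropy (bits/pixel) of an 8-bit image's gray-level
    # histogram -- the quantity the scheme keeps nearly unchanged
    # between the cover image and the stego image.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def integrity_digest(img):
    # SHA-256 over the raw pixel bytes; any single-bit tampering with
    # the authenticated image changes the digest.
    return hashlib.sha256(img.tobytes()).hexdigest()
```

A constant image has entropy 0, a balanced two-level image has entropy exactly 1 bit/pixel, and flipping one pixel bit changes the SHA-256 digest, which is precisely what makes it usable for tamper detection.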

    Encoding Enhanced Complex CNN for Accurate and Highly Accelerated MRI

    Magnetic resonance imaging (MRI) using hyperpolarized noble gases provides a way to visualize the structure and function of the human lung, but the long imaging time limits its broad research and clinical applications. Deep learning has demonstrated great potential for accelerating MRI by reconstructing images from undersampled data. However, most existing deep convolutional neural networks (CNNs) directly apply square convolution to k-space data without considering the inherent properties of k-space sampling, limiting k-space learning efficiency and image reconstruction quality. In this work, we propose an encoding enhanced (EN2) complex CNN for highly undersampled pulmonary MRI reconstruction. EN2 employs convolution along either the frequency- or phase-encoding direction, resembling the mechanisms of k-space sampling, to maximize the utilization of the encoding correlation and integrity within a row or column of k-space. We also employ complex convolution to learn rich representations from the complex k-space data. In addition, we develop a feature-strengthened modularized unit to further boost the reconstruction performance. Experiments demonstrate that our approach can accurately reconstruct hyperpolarized 129Xe and 1H lung MRI from 6-fold undersampled k-space data and provide lung function measurements with minimal bias compared with fully sampled images. These results demonstrate the effectiveness of the proposed algorithmic components and indicate that the proposed approach could be used for accelerated pulmonary MRI in research and clinical lung disease patient care.
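The core idea, convolving complex k-space data along a single encoding direction instead of with a square 2-D kernel, can be illustrated with a toy NumPy sketch. This is an assumption-laden simplification, not the EN2 network: a learned complex kernel is replaced by an arbitrary complex vector, and only one axis (here axis 1, standing in for the frequency-encoding direction) is convolved.

```python
import numpy as np

def encode_direction_conv(kspace, kernel):
    # Convolve complex k-space data along ONE encoding direction only
    # (axis 1, the stand-in for frequency encoding), so each output
    # sample mixes information within a single k-space row, mirroring
    # how that row was acquired. np.convolve supports complex inputs
    # natively, so real and imaginary parts are handled together.
    return np.stack([np.convolve(row, kernel, mode="same")
                     for row in kspace])
```

With a length-1 identity kernel the data pass through unchanged, which is a convenient sanity check that the directional convolution preserves shape and complex dtype.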