14 research outputs found

    BiTCAN: An emotion recognition network based on saliency in brain cognition

    In recent years, with the continuous development of artificial intelligence and brain-computer interfaces, emotion recognition based on electroencephalogram (EEG) signals has become a flourishing research direction. Motivated by saliency in brain cognition, we construct a new spatio-temporal convolutional attention network for emotion recognition named BiTCAN. First, the original EEG signals are de-baselined, and a sequence of two-dimensional mapping matrices is constructed from the EEG signals by incorporating the electrode positions. Second, from this mapping sequence, saliency-related features of brain cognition are extracted by a bi-hemisphere discrepancy module, and the spatio-temporal features of the EEG signals are captured by a 3-D convolution module. Finally, the saliency and spatio-temporal features are fused in an attention module to further capture the internal spatial relationships between brain regions, and the result is fed into a classifier for emotion recognition. Extensive experiments on two public datasets, DEAP and SEED, show that the proposed algorithm achieves accuracies above 97% on both, outperforming most existing emotion recognition algorithms.
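    For a concrete sense of the pipeline described above, the following PyTorch sketch wires together a bi-hemisphere difference branch, a 3-D convolution branch, a simple channel-attention block, and a classifier. The layer sizes, the 9x9 electrode grid, and the class name BiTCANSketch are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BiTCANSketch(nn.Module):
    """Toy pipeline: bi-hemisphere difference branch + 3-D convolution branch,
    fused with simple channel attention and fed to a linear classifier."""
    def __init__(self, n_classes=2, t_frames=8):
        super().__init__()
        # spatio-temporal branch: 3-D convolutions over (time, height, width)
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # saliency branch: 2-D convolution on the left/right hemisphere difference
        self.hemi = nn.Sequential(
            nn.Conv2d(t_frames, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # channel attention over the fused 64-channel feature map
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 64), nn.Sigmoid(),
        )
        self.cls = nn.Linear(64, n_classes)

    def forward(self, x):                         # x: (B, 1, T, H, W) de-baselined 2-D maps
        B, _, T, H, W = x.shape
        spatio_temporal = self.conv3d(x).mean(dim=2)           # (B, 32, H, W)
        left = x[..., : W // 2]                                # left-hemisphere columns
        right = x[..., -(W // 2):].flip(-1)                    # mirrored right hemisphere
        saliency = self.hemi((left - right).squeeze(1))        # (B, 32, H, W//2)
        saliency = nn.functional.interpolate(saliency, size=(H, W))
        fused = torch.cat([spatio_temporal, saliency], dim=1)  # (B, 64, H, W)
        weights = self.att(fused)                              # (B, 64) channel weights
        return self.cls(fused.mean(dim=(2, 3)) * weights)

model = BiTCANSketch()
logits = model(torch.randn(4, 1, 8, 9, 9))   # 4 samples, 8 time frames of 9x9 electrode maps
print(logits.shape)                          # torch.Size([4, 2])
```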

    A Boosting SAR Image Despeckling Method Based on Non-Local Weighted Group Low-Rank Representation

    In this paper, we propose a boosting synthetic aperture radar (SAR) image despeckling method based on non-local weighted group low-rank representation (WGLRR). The spatial structure of SAR images makes patches similar to one another, and the data matrix formed by grouping similar patches of the noise-free SAR image is often low-rank. Based on this, we use low-rank representation (LRR) to recover the noise-free group data matrix. To maintain the fidelity of the recovered image, we integrate the corrupted probability of each pixel into the group LRR model as a weight that constrains the fidelity of the recovered noise-free patches. Since a single patch may belong to several groups, the different estimates of each patch are aggregated by weighted averaging. Because denoising is imperfect, the residual image still contains signal leftovers, so we strengthen the signal by leveraging the denoised image to suppress noise further. Experimental results on simulated and real SAR images show the superior performance of the proposed method in terms of both objective indicators and perceived image quality.
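    The sketch below illustrates the non-local group-and-recover idea with NumPy: block matching gathers similar patches, each group matrix receives a low-rank estimate, and overlapping estimates are averaged. Plain soft singular-value thresholding stands in for the paper's weighted group LRR solver, and the function name, patch sizes, and threshold tau are assumptions for illustration only.

```python
import numpy as np

def despeckle_wglrr_sketch(img, patch=8, search=20, n_sim=16, tau=0.5, step=4):
    """Group similar patches, shrink singular values of each group matrix (a
    stand-in for the paper's weighted group LRR solver), then aggregate the
    overlapping patch estimates by averaging."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            ref = img[i:i+patch, j:j+patch]
            # block matching inside a local search window
            cands = []
            for di in range(max(0, i-search), min(H-patch, i+search)+1, step):
                for dj in range(max(0, j-search), min(W-patch, j+search)+1, step):
                    p = img[di:di+patch, dj:dj+patch]
                    cands.append((np.sum((p - ref) ** 2), di, dj))
            cands.sort(key=lambda c: c[0])
            group = cands[:n_sim]
            # columns of M are the vectorised similar patches
            M = np.stack([img[di:di+patch, dj:dj+patch].ravel()
                          for _, di, dj in group], axis=1)
            # low-rank recovery by soft singular-value thresholding
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s = np.maximum(s - tau, 0.0)
            M_hat = (U * s) @ Vt
            # aggregate the recovered patches back into the image
            for k, (_, di, dj) in enumerate(group):
                out[di:di+patch, dj:dj+patch] += M_hat[:, k].reshape(patch, patch)
                weight[di:di+patch, dj:dj+patch] += 1.0
    return out / np.maximum(weight, 1e-8)

noisy = np.random.gamma(shape=4.0, scale=0.25, size=(64, 64))  # toy speckled image
clean_est = despeckle_wglrr_sketch(noisy)
```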

    SAR Image De-Noising Based on Shift Invariant K-SVD and Guided Filter

    Effectively suppressing speckle in SAR images is of great significance. K-means singular value decomposition (K-SVD) has shown great potential for SAR image de-noising. However, traditional K-SVD is sensitive to the position and phase of features in the image, and images de-noised by K-SVD lose some of the detail of the original. In this paper, we present a new SAR image de-noising method based on shift-invariant K-SVD and a guided filter. The method consists of two steps: first, the noisy image is processed with shift-invariant K-SVD to obtain an initial de-noised image; second, the initial de-noised image is guided-filtered to recover the final de-noised image. Experimental results show that our method not only achieves better visual quality and objective evaluation scores but also preserves more detail, such as image edges and texture, when de-noising SAR images. The presented shift-invariant K-SVD can also be widely used in other image processing tasks, such as image fusion, edge detection, and super-resolution reconstruction.
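    As a rough illustration of the two-step pipeline, the sketch below implements the standard guided filter (He et al.) with NumPy/SciPy and applies it as the second step; a plain box smoother stands in for the shift-invariant K-SVD stage so the example stays self-contained. The function names and parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-2):
    """Guided filter (He et al.): edge-preserving smoothing of p guided by I."""
    box = lambda x: uniform_filter(x, size=2 * r + 1, mode='reflect')
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def denoise_pipeline(noisy):
    # Step 1: sparse-coding denoising. The paper uses shift-invariant K-SVD here;
    # a plain box smoother stands in only to keep the sketch self-contained.
    initial = uniform_filter(noisy, size=3)
    # Step 2: guided filtering of the initial estimate (guided by itself) to
    # restore edges and texture while keeping the smoothed regions.
    return guided_filter(initial, initial, r=4, eps=0.01)

noisy = np.random.gamma(4.0, 0.25, size=(128, 128))  # toy speckled image
result = denoise_pipeline(noisy)
```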

    Multi-Focus Image Fusion Based on Multi-Scale Generative Adversarial Network

    Methods based on convolutional neural networks have demonstrated powerful information-integration abilities in image fusion. However, most existing neural-network-based methods are applied to only part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed, which makes full use of image features by combining multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on synthetic images and the Lytro dataset demonstrate the effectiveness and superiority of the proposed MsGAN over state-of-the-art multi-focus image fusion methods.
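    The following PyTorch sketch shows one plausible shape of such a network: a small multi-scale generator that encodes the concatenated source pair at two resolutions and decodes a fused image, plus a patch discriminator. The architecture, channel counts, and class names are illustrative assumptions rather than the MsGAN described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MsGenerator(nn.Module):
    """Toy multi-scale generator: encode the concatenated source pair at two
    scales, merge the scales, and decode a single fused image."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc_full = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        self.enc_half = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, a, b):                       # a, b: (B, 1, H, W) source images
        x = torch.cat([a, b], dim=1)
        f_full = self.enc_full(x)                  # full-resolution features
        f_half = self.enc_half(F.avg_pool2d(x, 2)) # half-resolution features
        f_half = F.interpolate(f_half, size=f_full.shape[-2:],
                               mode='bilinear', align_corners=False)
        return self.dec(torch.cat([f_full, f_half], dim=1))

class PatchDiscriminator(nn.Module):
    """Small patch discriminator judging whether a fused image looks real/sharp."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D = MsGenerator(), PatchDiscriminator()
a, b = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
fused = G(a, b)       # (2, 1, 64, 64) fused image
score = D(fused)      # patch-level real/fake logits
```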

    Remote Sensing Image Fusion Based on Sparse Representation and Guided Filtering

    In this paper, a remote sensing image fusion method based on sparse representation (SR) is presented, since SR has been widely used in image processing and especially in image fusion. First, an adaptive dictionary is learned from the source images, and sparse coefficients are obtained by sparsely coding the source images over this dictionary. Then, these sparse coefficients are fused with the help of an improved hyperbolic tangent (tanh) function and the l0-max rule, yielding an initial fused image based on SR. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) is obtained at the same time. Finally, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms several state-of-the-art methods in both visual and quantitative evaluations.
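    The coefficient-fusion step can be sketched as follows: for each patch, the l0 norm of its sparse coefficient vector measures activity, and a tanh of the activity gap softly interpolates between averaging and l0-max selection. This is one plausible reading of the tanh + l0-max rule, with the NumPy function name and the sharpness parameter k introduced purely for illustration.

```python
import numpy as np

def fuse_sparse_coeffs(Ca, Cb, k=1.0):
    """Fuse two sparse-coefficient matrices column by column (one column per patch).
    The l0 norm of each column measures its activity; tanh of the activity gap
    gives a soft weight between l0-max selection and plain averaging.
    A plausible reading of the paper's tanh + l0-max rule, not its exact form."""
    act_a = np.count_nonzero(Ca, axis=0).astype(float)   # l0 activity per patch
    act_b = np.count_nonzero(Cb, axis=0).astype(float)
    w = 0.5 * (1.0 + np.tanh(k * (act_a - act_b)))       # w -> 1 when A is clearly more active
    return w * Ca + (1.0 - w) * Cb

# toy coefficients: 64 dictionary atoms x 100 patches, mostly zeros
rng = np.random.default_rng(0)
Ca = rng.normal(size=(64, 100)) * (rng.random((64, 100)) < 0.1)
Cb = rng.normal(size=(64, 100)) * (rng.random((64, 100)) < 0.1)
fused = fuse_sparse_coeffs(Ca, Cb)
```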

    Multi-focus image fusion based on block matching in 3D transform domain


    A Collaborative Despeckling Method for SAR Images Based on Texture Classification

    Speckle is an unavoidable noise-like phenomenon in synthetic aperture radar (SAR) imaging. Many despeckling methods have been proposed over the past three decades to remove it, including spatial-domain methods, transform-domain methods, and non-local filtering methods. However, SAR images usually contain many different types of regions, both homogeneous and heterogeneous. Some filters despeckle homogeneous regions effectively but fail to preserve structures in heterogeneous regions, while others preserve structures well but do not suppress speckle effectively. Motivated by this observation, we design a combination of two state-of-the-art despeckling tools that overcomes their respective shortcomings. To select the best filter output for each area of the image, clustering and Gray Level Co-Occurrence Matrices (GLCM) are used for image classification and weighting, respectively. The clustering and GLCM operate on optical images co-registered with the SAR images, because their structural information is consistent and the optical images are much cleaner. Experimental results on synthetic and real-world SAR images show that the proposed method yields better objective performance indices under strong noise, and subjective visual inspection shows that it has great potential for preserving structural details while suppressing speckle.
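    The sketch below shows the combination step in NumPy/SciPy: a texture map computed on the co-registered optical image drives a two-class k-means split into homogeneous and heterogeneous pixels, and the two filter outputs are blended accordingly. Local variance stands in for the paper's GLCM features, and the function name and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.cluster.vq import kmeans2

def combine_despecklers(out_smooth, out_detail, optical, win=7):
    """Blend two despeckling results using a texture map computed on the
    co-registered optical image. Local variance stands in for the paper's GLCM
    features; k-means splits pixels into homogeneous vs. heterogeneous classes."""
    mean = uniform_filter(optical, win)
    var = uniform_filter(optical ** 2, win) - mean ** 2   # local variance = texture proxy
    _, labels = kmeans2(var.reshape(-1, 1), 2, minit='++')
    labels = labels.reshape(var.shape)
    # make label 1 the high-texture (heterogeneous) class
    if var[labels == 0].mean() > var[labels == 1].mean():
        labels = 1 - labels
    w = labels.astype(float)            # 1 in heterogeneous regions, 0 elsewhere
    w = uniform_filter(w, win)          # soften the weight so outputs blend smoothly
    return (1.0 - w) * out_smooth + w * out_detail

optical = np.random.rand(128, 128)                 # toy co-registered optical image
sar_a = np.random.gamma(4.0, 0.25, (128, 128))     # stand-ins for the two filter outputs
sar_b = np.random.gamma(4.0, 0.25, (128, 128))
combined = combine_despecklers(sar_a, sar_b, optical)
```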

    A SAR Image-Despeckling Method Based on HOSVD Using Tensor Patches

    Coherent imaging systems such as synthetic aperture radar (SAR) suffer from granular speckle noise inherent to the imaging mechanism, which can make interpretation challenging. Although numerous despeckling methods have been proposed over the past three decades, SAR image despeckling remains a challenging task. With the extensive use of non-local self-similarity, despeckling methods within the non-local framework have become increasingly mature, yet effectively exploiting patch similarity remains a key problem. This paper proposes a three-dimensional (3D) SAR image despeckling method that searches for similar patches and applies high-order singular value decomposition (HOSVD) to better exploit the high-dimensional information those patches carry. Specifically, the proposed method extends despeckling from two-dimensional (2D) patches to 3D tensor patches: a new non-local similar-patch-searching criterion is used to classify patches, similar patches are stacked into 3D tensors, and an iterative adaptive weighted tensor cyclic approximation based on HOSVD is applied for despeckling. Experimental results demonstrate that the proposed method not only effectively reduces speckle noise but also preserves fine details.
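    For the HOSVD step, the NumPy sketch below denoises one group of similar patches stacked as a 3D tensor: factor matrices come from SVDs of the mode unfoldings, small core-tensor coefficients are shrunk, and the tensor is transformed back. The hard threshold tau and the toy patch group are assumptions standing in for the paper's iterative adaptive weighted cyclic approximation.

```python
import numpy as np

def hosvd_denoise_group(T, tau=0.5):
    """Denoise a 3D stack of similar patches via thresholded HOSVD: factor
    matrices from the mode unfoldings, hard-threshold the core tensor, and
    transform back to the patch domain."""
    factors = []
    for mode in range(3):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U)
    # core = T x_0 U0^T x_1 U1^T x_2 U2^T
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    core[np.abs(core) < tau] = 0.0                 # shrink small coefficients
    # reconstruct: core x_0 U0 x_1 U1 x_2 U2
    return np.einsum('abc,ia,jb,kc->ijk', core, *factors)

# toy group: 16 similar 8x8 patches stacked into an 8x8x16 tensor
rng = np.random.default_rng(1)
clean = np.outer(np.hanning(8), np.hanning(8))
group = np.stack([clean * rng.gamma(4.0, 0.25) + 0.05 * rng.normal(size=(8, 8))
                  for _ in range(16)], axis=2)
denoised = hosvd_denoise_group(group, tau=0.3)
```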