19 research outputs found

    LP-BFGS attack: An adversarial attack based on the Hessian with limited pixels

    Full text link
    Deep neural networks are vulnerable to adversarial attacks. Most white-box attacks are based on the gradient of the model with respect to the input. Because of their computation and memory cost, adversarial attacks based on Hessian information have received little attention. In this work, we study the attack performance and computation cost of a Hessian-based attack under a limited perturbation-pixel budget. Specifically, we propose the Limited Pixel BFGS (LP-BFGS) attack, which incorporates the BFGS algorithm: pixels selected by the Integrated Gradient algorithm serve as the optimization variables of the attack. Experimental results across different networks and datasets with various perturbation-pixel numbers demonstrate that our approach achieves attack performance comparable to existing solutions at an acceptable computation cost. Comment: 5 pages, 4 figures
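
    The following is a minimal, hypothetical sketch of the LP-BFGS idea on a toy linear model, not the authors' code: pixels are ranked with an integrated-gradients approximation, then only the top-k pixels are optimized with SciPy's limited-memory BFGS. The model, the budget k, and the step counts are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 784)) * 0.1    # toy linear "network": 10 classes, 28x28 input
    x0 = rng.uniform(size=784)              # flattened source image in [0, 1]
    y = int((W @ x0).argmax())              # attack the model's clean prediction

    def ce_loss(x):
        z = W @ x
        z = z - z.max()                     # stabilized log-sum-exp
        return np.log(np.exp(z).sum()) - z[y]

    def ce_grad(x):
        z = W @ x
        p = np.exp(z - z.max()); p /= p.sum()
        p[y] -= 1.0                         # d(cross-entropy)/dz = softmax - one_hot
        return W.T @ p

    # Integrated gradients from a black baseline (Riemann approximation).
    baseline, steps = np.zeros_like(x0), 16
    ig = np.zeros_like(x0)
    for a in np.linspace(0.0, 1.0, steps):
        ig += ce_grad(baseline + a * (x0 - baseline))
    ig = (x0 - baseline) * ig / steps

    k = 40                                  # perturbation-pixel budget (assumed)
    idx = np.argsort(-np.abs(ig))[:k]       # top-k attributed pixels

    def attack_obj(delta):
        x = x0.copy()
        x[idx] = np.clip(x0[idx] + delta, 0.0, 1.0)
        return -ce_loss(x)                  # minimizing this maximizes the loss

    res = minimize(attack_obj, np.zeros(k), method="L-BFGS-B",
                   bounds=[(-0.5, 0.5)] * k)
    x_adv = x0.copy()
    x_adv[idx] = np.clip(x0[idx] + res.x, 0.0, 1.0)
    print("clean/adv predictions:", (W @ x0).argmax(), (W @ x_adv).argmax())
    ```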

    EDAfuse: An encoder–decoder with atrous spatial pyramid network for infrared and visible image fusion

    No full text
    Abstract Infrared and visible images come from different sensors, each with its own advantages and disadvantages. To make the fused image contain as much salient information as possible, a practical fusion method, termed EDAfuse, is proposed in this paper. In EDAfuse, the authors introduce an encoder–decoder with an atrous spatial pyramid network for infrared and visible image fusion. An encoding network with three convolutional neural network (CNN) layers extracts deep features from the input images. The proposed atrous spatial pyramid model then produces five features at different scales. Same-scale features from the two source images are fused by a fusion strategy built on an attention model and an information-quantity model. Finally, the decoding network reconstructs the fused image. During training, the authors introduce a loss function with a saliency term to improve the model's ability to extract salient features from the source images. In the experiments, the authors use the average values of seven metrics over 21 fused images to compare the proposed method with seven existing methods; the proposed method achieves four best and three second-best values. Subjective assessment also shows that it outperforms state-of-the-art fusion methods.
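
    A rough sketch, assuming PyTorch and illustrative channel sizes, of the kind of atrous spatial pyramid described above: parallel dilated convolutions over the encoder features yield five same-resolution feature maps with different receptive fields. This is an illustration, not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn

    class AtrousSpatialPyramid(nn.Module):
        def __init__(self, channels=64, rates=(1, 2, 4, 8, 16)):
            super().__init__()
            # One branch per dilation rate; padding=rate keeps the spatial size.
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                    nn.ReLU(inplace=True),
                )
                for r in rates
            )

        def forward(self, feats):
            # Returns five same-resolution feature maps at different scales.
            return [branch(feats) for branch in self.branches]

    feats = torch.randn(1, 64, 128, 128)   # stand-in encoder output for one image
    scales = AtrousSpatialPyramid()(feats)
    print([s.shape for s in scales])       # five (1, 64, 128, 128) tensors
    ```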

    Medical image fusion based on variational and nonlinear structure tensor

    No full text
    Medical image fusion plays an important role in the detection and treatment of disease. Although numerous medical image fusion methods have been proposed, most of them decrease contrast and lose image information. In this paper, a novel MRI and CT image fusion method is proposed that combines the rolling guidance filter, the structure tensor, and the nonsubsampled shearlet transform (NSST). First, the rolling guidance filter and the sum-modified Laplacian (SML) operator construct weight maps in the nonlinear domain. A pre-fused gradient is then obtained by a new weighted structure-tensor fusion method, and a pre-fused image is obtained in the NSST domain. Finally, a new energy functional constrains the gradient and pixel information of the final fused image to stay close to the pre-fused gradient and the pre-fused image. Experimental results show that the proposed method retains the edge information of the source images effectively and avoids the loss of contrast.
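
    As a pointer to the structure-tensor ingredient, here is a small SciPy sketch with a hypothetical smoothing scale: per-pixel gradient outer products are smoothed with a Gaussian and an edge-strength map is read off the tensor's larger eigenvalue. The paper's variational fusion and NSST steps are not reproduced here.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def structure_tensor(img, sigma=2.0):
        ix = sobel(img, axis=1, mode="reflect")   # horizontal gradient
        iy = sobel(img, axis=0, mode="reflect")   # vertical gradient
        # Smooth the outer-product entries; sigma controls the averaging scale.
        jxx = gaussian_filter(ix * ix, sigma)
        jxy = gaussian_filter(ix * iy, sigma)
        jyy = gaussian_filter(iy * iy, sigma)
        return jxx, jxy, jyy

    img = np.random.rand(64, 64)                  # stand-in for an MRI/CT slice
    jxx, jxy, jyy = structure_tensor(img)
    # Edge strength = larger eigenvalue of the 2x2 tensor at each pixel.
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    edge_strength = 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0)))
    ```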

    Infrared and Visible Image Fusion Combining Interesting Region Detection and Nonsubsampled Contourlet Transform

    No full text
    The fundamental purpose of infrared (IR) and visible (VI) image fusion is to integrate the useful information from both sources into a new image with higher reliability and understandability for human or computer vision. To better preserve the interesting region and its corresponding detail information, a novel multiscale fusion scheme based on interesting-region detection is proposed in this paper. Firstly, MeanShift is used to detect the interesting region containing the salient objects and the background region of the IR and VI images. The interesting regions are then processed by a guided filter. Next, the nonsubsampled contourlet transform (NSCT) decomposes the background regions of IR and VI into a low-frequency layer and a series of high-frequency layers. An improved per-pixel weighted average method fuses the low-frequency layer, and a pulse-coupled neural network (PCNN) fuses each high-frequency layer. Finally, the fused image is obtained by combining the fused interesting region and the fused background region. Experimental results demonstrate that the proposed algorithm integrates more background detail while highlighting the interesting region with the salient objects, and that it is superior to conventional methods in both objective quality evaluation and visual inspection.
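
    A toy sketch of a per-pixel weighted average for the low-frequency layers, with a hypothetical local-energy weighting rather than the authors' exact rule: the locally more active source dominates pixel by pixel.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_low_frequency(lf_ir, lf_vi, win=7, eps=1e-8):
        # Local energy in a win x win window as a simple activity measure.
        e_ir = uniform_filter(lf_ir ** 2, win)
        e_vi = uniform_filter(lf_vi ** 2, win)
        w = e_ir / (e_ir + e_vi + eps)     # per-pixel weight for the IR layer
        return w * lf_ir + (1.0 - w) * lf_vi

    lf_ir = np.random.rand(128, 128)       # stand-ins for NSCT low-frequency layers
    lf_vi = np.random.rand(128, 128)
    fused_lf = fuse_low_frequency(lf_ir, lf_vi)
    ```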

    Predictions of Apoptosis Proteins by Integrating Different Features Based on Improving Pseudo-Position-Specific Scoring Matrix

    No full text
    Apoptosis proteins are strongly related to many diseases and play an indispensable role in maintaining the dynamic balance between cell death and division in vivo. Obtaining localization information on apoptosis proteins is necessary for understanding their function. To date, few researchers have addressed the imbalance of apoptosis data before classification, even though this imbalance is prone to causing misclassification. In this work, we introduce a method to resolve this problem and enhance prediction accuracy. Firstly, the features of the protein sequence are captured by combining the Improving Pseudo-Position-Specific Scoring Matrix (IM-Psepssm) with the Bidirectional Correlation Coefficient (Bid-CC) algorithm, both derived from the position-specific scoring matrix. Secondly, different feature-fusion and resampling strategies are used to reduce the impact of imbalance in the apoptosis protein datasets. Finally, the feature vectors are fed into a Support Vector Machine (SVM) to train the classification model, and prediction accuracy is evaluated by jackknife cross-validation. The experimental results indicate that, for the same feature vector, resampling remarkably boosts many significant indicators relative to no resampling when predicting the localization of apoptosis proteins in the ZD98, ZW225, and CL317 databases. We also provide user-friendly local software; the code and software can be freely accessed at https://github.com/ruanxiaoli/Im-Psepssm.
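
    A schematic sketch, assuming scikit-learn and synthetic data, of the resampling-plus-SVM stage: the minority class is naively oversampled and the classifier is scored with leave-one-out (jackknife) cross-validation. The IM-Psepssm and Bid-CC feature extraction is not reproduced, and in practice resampling should be done inside each training fold to avoid leakage.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(90, 20))          # stand-in protein feature vectors
    y = np.r_[np.zeros(70), np.ones(20)]   # imbalanced localization labels

    # Naive random oversampling of the minority class to balance the set.
    # (Schematic only: duplicating before CV lets copies leak across folds.)
    minority = np.flatnonzero(y == 1)
    extra = rng.choice(minority, size=50, replace=True)
    Xb, yb = np.vstack([X, X[extra]]), np.r_[y, y[extra]]

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    acc = cross_val_score(clf, Xb, yb, cv=LeaveOneOut()).mean()
    print(f"jackknife accuracy: {acc:.3f}")
    ```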

    MCNN: Conditional focus probability learning to multi-focus image fusion via mutually coupled neural network

    No full text
    Abstract In this paper, a novel conditional focus probability learning model, termed MCNN, is proposed for multi-focus image fusion (MFIF). Given a pair of source images, their conditional focus probabilities are generated by the well-trained MCNN and then converted into binary focus masks that directly produce an all-focus image with no post-processing. To this end, a fully convolutional encoder is designed with two mutually coupled Siamese branches; coupling blocks bridge the two branches at different layers to provide conditional information to each other, so that the encoder extracts stronger conditional focus features and the decoder is encouraged to give more robust pixel-wise conditional focus probabilities. Moreover, a hybrid loss combining a structural sparse fidelity loss and a structural similarity loss forces the network to learn more accurate conditional focus probabilities. In particular, a convolutional norm with good structural group sparsity is proposed to construct the structural sparse fidelity loss. Simulation results substantiate the superiority of MCNN over other state-of-the-art methods in terms of both visual perception and quantitative evaluation.
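
    A minimal sketch, with hypothetical arrays, of the final composition step the abstract describes: threshold the predicted conditional focus probability into a binary mask and compose the all-focus image directly, with no post-processing.

    ```python
    import numpy as np

    prob_a = np.random.rand(256, 256)   # stand-in: P(pixel of image A is in focus)
    img_a = np.random.rand(256, 256)    # stand-in source images
    img_b = np.random.rand(256, 256)

    mask = (prob_a > 0.5).astype(img_a.dtype)    # binary focus mask
    fused = mask * img_a + (1.0 - mask) * img_b  # all-focus composite
    ```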

    DPAFNet: A Multistage Dense-Parallel Attention Fusion Network for Pansharpening

    No full text
    Pansharpening is the technique of fusing a low-spatial-resolution multispectral (MS) image with its associated full-resolution panchromatic (PAN) image. However, previous methods suffer from insufficient feature expression and do not explore both the intrinsic features of the images and the correlation between them, which may limit the amount of valuable information integrated into the pansharpening results. To this end, we propose a novel multistage dense-parallel attention fusion network (DPAFNet). The proposed parallel attention residual dense block (PARDB) module focuses on the intrinsic features of the MS and PAN images while exploring the correlation between the source images. To fuse as much complementary information as possible, the features extracted from each PARDB are fused at multiple stages, which allows the network to better focus on and exploit different information. Additionally, we propose a new loss that computes the L2-norm between the pansharpening results and the PAN images to constrain the spatial structures. Experiments were conducted on simulated and real datasets, and the evaluation results verify the superiority of DPAFNet.
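
    A hedged sketch, assuming PyTorch, of an L2 spatial-constraint loss of the kind described: the channel-mean intensity of the pansharpened bands is an assumption made here so that shapes match the single-band PAN image, not necessarily the authors' mapping.

    ```python
    import torch

    def spatial_l2_loss(fused_ms, pan):
        # fused_ms: (B, C, H, W) pansharpened bands; pan: (B, 1, H, W).
        intensity = fused_ms.mean(dim=1, keepdim=True)  # assumed intensity map
        # Root-mean-square L2 distance between intensity and PAN.
        return torch.linalg.vector_norm(intensity - pan) / pan.numel() ** 0.5

    fused = torch.rand(2, 4, 64, 64)
    pan = torch.rand(2, 1, 64, 64)
    print(spatial_l2_loss(fused, pan))
    ```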

    Brain Medical Image Fusion Based on Dual-Branch CNNs in NSST Domain

    No full text
    Computed tomography (CT) images show structural features, while magnetic resonance imaging (MRI) images represent brain-tissue anatomy but do not contain any functional information. How to effectively combine images from the two modalities remains a research challenge. In this paper, a new framework for medical image fusion is proposed that combines convolutional neural networks (CNNs) with the non-subsampled shearlet transform (NSST) to exploit the advantages of both. The method effectively retains the functional information of the CT image and reduces the loss of structural information and spatial distortion of the MRI image. In our fusion framework, initial weights integrating the pixel-activity information of the two source images are generated by a dual-branch convolutional network and decomposed by NSST. Firstly, NSST is applied to the source images and the initial weights to obtain their low-frequency and high-frequency coefficients. The first component of the low-frequency coefficients is then fused by a novel strategy that simultaneously copes with two key issues in fusion processing, namely energy preservation and detail extraction. The second component of the low-frequency coefficients is fused by a strategy designed according to the spatial frequency of the weight map. The high-frequency coefficients are fused using the high-frequency components of the initial weights. Finally, the fused image is reconstructed by the inverse NSST. The effectiveness of the proposed method is verified on pairs of multimodality images, and extensive experiments indicate that our method performs especially well for medical image fusion.
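
    A small sketch of the standard spatial-frequency measure that drives one of the low-frequency fusion rules above: the root-mean-square of horizontal and vertical first differences. The input here is a hypothetical stand-in for the weight map.

    ```python
    import numpy as np

    def spatial_frequency(img):
        rf = np.diff(img, axis=0)      # row (vertical) first differences
        cf = np.diff(img, axis=1)      # column (horizontal) first differences
        return np.sqrt((rf ** 2).mean() + (cf ** 2).mean())

    w = np.random.rand(128, 128)       # stand-in weight map from the dual-branch CNN
    print(spatial_frequency(w))
    ```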