
    Signed Laplacian Deep Learning with Adversarial Augmentation for Improved Mammography Diagnosis

    Computer-aided breast cancer diagnosis in mammography is limited by inadequate data and the similarity between benign and cancerous masses. To address this, we propose a signed-graph-regularized deep neural network with adversarial augmentation, named \textsc{DiagNet}. First, we use adversarial learning to generate positive and negative mass-containing mammograms for each mass class. A signed similarity graph is then built on the expanded data to further highlight the discrimination. Finally, a deep convolutional neural network is trained by jointly optimizing the signed-graph regularization and the classification loss. Experiments show that the \textsc{DiagNet} framework outperforms the state of the art in breast mass diagnosis in mammography.

    Comment: To appear in MICCAI, October 201
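    The joint objective described above — a classification loss plus a signed-graph regularizer — can be sketched with NumPy. This is an illustrative toy, not the paper's implementation: the function names, the choice of signed Laplacian $\bar{L} = \bar{D} - W$ (with $\bar{D}$ built from absolute degrees), and the weighting factor `lam` are assumptions for demonstration.

    ```python
    import numpy as np

    def signed_laplacian(W):
        # Signed Laplacian: L = D_bar - W, where D_bar is the diagonal
        # matrix of row-wise sums of |W| (absolute degrees).
        D_bar = np.diag(np.abs(W).sum(axis=1))
        return D_bar - W

    def graph_regularizer(Z, W):
        # trace(Z^T L Z): small when embeddings joined by a positive edge
        # agree, and when embeddings joined by a negative edge are mirrored
        # apart -- this is what "highlights the discrimination".
        L = signed_laplacian(W)
        return np.trace(Z.T @ L @ Z)

    def total_loss(classification_loss, Z, W, lam=0.1):
        # Joint objective: classification loss plus weighted signed-graph
        # regularization, optimized together during training.
        return classification_loss + lam * graph_regularizer(Z, W)
    ```

    With a positive edge between two samples, identical embeddings give a zero regularizer; with a negative edge, mirrored embeddings do — so minimizing the joint loss pushes same-class samples together and opposite-class samples apart.
    
    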

    COIN: Contrastive Identifier Network for Breast Mass Diagnosis in Mammography

    Computer-aided breast cancer diagnosis in mammography is a challenging problem, stemming from mammographic data scarcity and data entanglement. In particular, data scarcity is attributed to privacy concerns and the expense of annotation, while data entanglement is due to the high similarity between benign and malignant masses, whose manifolds reside in a lower-dimensional space with a very small margin. To address these two challenges, we propose a deep learning framework, named Contrastive Identifier Network (\textsc{COIN}), which integrates adversarial augmentation and manifold-based contrastive learning. First, we employ adversarial learning to create both on- and off-distribution mass-containing ROIs. We then propose a novel contrastive loss built on a signed graph. Finally, the neural network is optimized in a contrastive-learning manner, with the purpose of improving the deep model's discriminability on the extended dataset. In particular, under COIN, data samples from the same category are pulled close together, whereas those with different labels are pushed further apart in the deep latent space. COIN outperforms state-of-the-art algorithms for breast cancer diagnosis by a considerable margin, achieving 93.4\% accuracy and a 95.0\% AUC score. The code will be released at ***
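    The pull-together / push-apart behavior described in the abstract is the classic pairwise contrastive objective. The sketch below is a generic margin-based version, assumed for illustration only — COIN's actual loss is built on a signed graph and may differ in form.

    ```python
    import numpy as np

    def pairwise_contrastive_loss(Z, y, margin=1.0):
        # Pull same-label embeddings together (squared distance) and push
        # different-label embeddings at least `margin` apart (squared hinge).
        n = len(y)
        loss, pairs = 0.0, 0
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(Z[i] - Z[j])
                if y[i] == y[j]:
                    loss += d ** 2          # same class: any distance is penalized
                else:
                    loss += max(0.0, margin - d) ** 2  # different class: penalize only if too close
                pairs += 1
        return loss / pairs
    ```

    Minimizing this over the deep latent space yields exactly the geometry the abstract describes: same-category samples collapse toward each other, and differently labeled samples are driven beyond the margin.
    
    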

    Deep-FS: a feature selection algorithm for deep Boltzmann machines

    A Deep Boltzmann Machine is a Deep Neural Network model formed from multiple layers of neurons with nonlinear activation functions. The structure of a Deep Boltzmann Machine enables it to learn very complex relationships between features and facilitates advanced performance in learning high-level representations of features, compared to conventional Artificial Neural Networks. Feature selection at the input level of Deep Neural Networks has not been well studied, despite its importance in reducing the input features processed by the deep learning model, which facilitates understanding of the data. This paper proposes a novel algorithm, Deep Feature Selection (Deep-FS), which is capable of removing irrelevant features from large datasets in order to reduce the number of inputs which are modelled during the learning process. The proposed Deep-FS algorithm utilizes a Deep Boltzmann Machine, and uses knowledge acquired during training to remove features at the beginning of the learning process. Reducing inputs is important because it prevents the network from learning associations between irrelevant features, which negatively impact the network's acquired knowledge about the overall distribution of the data. The Deep-FS method embeds feature selection in a Restricted Boltzmann Machine which is used for training a Deep Boltzmann Machine. The generative property of the Restricted Boltzmann Machine is used to reconstruct eliminated features and calculate reconstruction errors, in order to evaluate the impact of eliminating each feature. The performance of the proposed approach was evaluated with experiments conducted on the MNIST, MIR-Flickr, GISETTE, MADELON and PANCAN datasets. The results revealed that the proposed Deep-FS method enables improved feature selection without loss of accuracy on the MIR-Flickr dataset, where Deep-FS removed 775 input features without any reduction in performance.
    With regard to the MNIST dataset, Deep-FS reduced the number of input features by more than 45%; it reduced the network error from 0.97% to 0.90%, and also reduced processing and classification time by more than 5.5%. Additionally, when compared to classical feature selection methods, Deep-FS returned higher accuracy. The experimental results on GISETTE, MADELON and PANCAN showed that Deep-FS reduced the number of input features by 81%, 57% and 77%, respectively. Moreover, the proposed feature selection method reduced classifier training time by 82%, 70% and 85% on the GISETTE, MADELON and PANCAN datasets, respectively. Experiments with various datasets, comprising large numbers of features and samples, revealed that the proposed Deep-FS algorithm overcomes the main limitations of classical feature selection algorithms. More specifically, most classical methods require a pre-specified number of features to retain as a prerequisite, whereas Deep-FS identifies this number automatically. Deep-FS performs the feature selection task faster than classical feature selection algorithms, which makes it suitable for deep learning tasks. In addition, Deep-FS is suitable for finding features in large datasets which are normally stored in data batches for faster and more efficient processing.
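    The core Deep-FS mechanism — eliminate a feature, reconstruct it with the generative model, and score the feature by the growth in reconstruction error — can be sketched generically. This is a minimal illustration under assumptions: `reconstruct` stands in for the trained RBM's encode/decode pass, elimination is modeled by zeroing a column, and the threshold `tol` is a hypothetical parameter, none of which come from the paper.

    ```python
    import numpy as np

    def feature_impact(X, reconstruct, j):
        # Zero out feature j, reconstruct with the generative model, and
        # measure how much reconstruction error grows versus the full input.
        base_err = np.mean((reconstruct(X) - X) ** 2)
        X_drop = X.copy()
        X_drop[:, j] = 0.0
        drop_err = np.mean((reconstruct(X_drop) - X) ** 2)
        return drop_err - base_err

    def select_features(X, reconstruct, tol=1e-3):
        # Keep only features whose elimination raises reconstruction error
        # beyond tol; the rest are deemed irrelevant and removed, so no
        # feature count needs to be pre-specified.
        return [j for j in range(X.shape[1]) if feature_impact(X, reconstruct, j) > tol]
    ```

    A feature whose removal the model can compensate for (near-zero error growth) carries little information about the data distribution and is dropped; this is how the retained feature count emerges automatically rather than being fixed in advance.
    
    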