
    Noise-Tolerant Deep Learning for Histopathological Image Segmentation

    Developing an effective algorithm based on handcrafted features from histological images (histo-images) is difficult due to the complexity of histo-images. Deep network models have achieved promising performance, as they are capable of capturing high-level features. However, a major hurdle hindering the application of deep learning to histo-image segmentation is obtaining large ground-truth data sets for training. Taking the segmentations produced by simple off-the-shelf algorithms as training data offers a new way to address this hurdle. The output of the off-the-shelf segmentations is considered noisy data, which requires a new learning scheme for deep learning segmentation. Existing work on noisy-label deep learning is largely for image classification. In this thesis, we study whether and how integrating imperfect or noisy “ground truth” from off-the-shelf segmentation algorithms may help achieve better performance, so that deep learning can be applied to histo-image segmentation with manageable effort. Two noise-tolerant deep learning architectures are proposed in this thesis. One is based on the Noisy at Random (NAR) model, and the other on the Noisy Not at Random (NNAR) model. The main difference between the two is that the NNAR-based architecture assumes the label noise depends on the features of the image. Unlike most existing work, we study how to integrate multiple types of noisy data into one model. The proposed method has broad application when segmentations from multiple off-the-shelf algorithms are available. The implementation of the NNAR-based architecture demonstrates its effectiveness and superiority over off-the-shelf and other existing deep-learning-based image segmentation algorithms.
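The NAR assumption described in the abstract can be illustrated with a small sketch: under Noisy at Random, the observed label depends only on the true label, so the noisy-label distribution is the clean prediction passed through a fixed transition matrix. The function name, class labels, and numbers below are illustrative assumptions, not taken from the thesis.

```python
# Illustrative sketch of a Noisy-at-Random (NAR) label channel:
# the probability of observing noisy label j given clean label i is a
# fixed transition matrix T[i][j], independent of the image features
# (the NNAR model would instead let T depend on those features).

def nar_noisy_distribution(clean_probs, transition):
    """Return p(noisy label) as the clean distribution pushed through T."""
    k = len(transition[0])
    return [sum(clean_probs[i] * transition[i][j]
                for i in range(len(clean_probs)))
            for j in range(k)]

# Hypothetical example: two classes (background / nucleus); the
# off-the-shelf segmenter flips "nucleus" to "background" 30% of the time.
T = [[0.9, 0.1],   # true background -> observed background / nucleus
     [0.3, 0.7]]   # true nucleus    -> observed background / nucleus

clean = [0.2, 0.8]                       # clean-label prediction for one pixel
noisy = nar_noisy_distribution(clean, T)
print(noisy)                             # ≈ [0.42, 0.58]
```

Training against the noisy labels would then compare `noisy`, not `clean`, with the off-the-shelf segmentation output, so the network can learn the clean labels through the noise model.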

    Sparse Matrix-based Random Projection for Classification

    As a typical dimensionality reduction technique, random projection can be simply implemented with linear projection while maintaining the pairwise distances of high-dimensional data with high probability. Since this technique is mainly exploited for classification, this paper studies the construction of the random matrix from the viewpoint of feature selection rather than of traditional distance preservation. This yields a somewhat surprising theoretical result: a sparse random matrix with exactly one nonzero element per column can achieve better feature selection performance than denser matrices if the projection dimension is sufficiently large (namely, not much smaller than the number of feature elements); otherwise, it performs comparably to them. For random projection, this theoretical result implies considerable improvement in both complexity and performance, which is widely confirmed by classification experiments on both synthetic and real data.
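The construction the abstract describes, a random matrix with exactly one nonzero (±1) entry per column, can be sketched as follows. This resembles the well-known CountSketch / feature-hashing construction; the code is an illustrative reading of the idea, not the paper's reference implementation.

```python
import random

def sparse_projection_matrix(d, k, seed=0):
    """k x d random matrix with exactly one nonzero (+1 or -1) per column.

    Column j is represented implicitly by (rows[j], signs[j]): the single
    nonzero entry sits at row rows[j] with value signs[j].
    """
    rng = random.Random(seed)
    rows = [rng.randrange(k) for _ in range(d)]       # output dim hit by feature j
    signs = [rng.choice((-1, 1)) for _ in range(d)]   # sign of the nonzero entry
    return rows, signs

def project(x, rows, signs, k):
    """Apply the sparse projection: y[rows[j]] += signs[j] * x[j]."""
    y = [0.0] * k
    for j, v in enumerate(x):
        y[rows[j]] += signs[j] * v
    return y

x = [1.0, 2.0, 3.0, 4.0]
rows, signs = sparse_projection_matrix(d=4, k=2, seed=0)
y = project(x, rows, signs, k=2)
print(len(y))  # -> 2
```

Because each feature touches exactly one output dimension, the projection costs O(d) rather than O(dk), which is the complexity improvement the abstract refers to.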

    Two-axis-twisting spin squeezing by multi-pass quantum erasure

    Many-body entangled states are key elements in quantum information science and quantum metrology. One important problem in establishing a high degree of many-body entanglement using optical techniques is the leakage of the system's information via the light that creates such entanglement. We propose an all-optical interference-based approach to erase this information. Unwanted atom-light entanglement can be removed by destructive interference of three or more successive atom-light interactions, leaving only the desired effective atom-atom interaction. This quantum erasure protocol allows implementation of Heisenberg-limited spin squeezing using coherent light and a cold or warm atomic ensemble. Calculations show that a significant improvement in the squeezing, exceeding 10 dB, is obtained compared with previous methods, and substantial spin squeezing is attainable even under moderate experimental conditions. Our method enables the efficient creation of many-body entangled states with simple setups, and is thus promising for advancing technologies in quantum metrology and quantum information processing.

    Design and evaluation of advanced collusion attacks on collaborative intrusion detection networks in practice

    Joint 15th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, 10th IEEE International Conference on Big Data Science and Engineering, and 14th IEEE International Symposium on Parallel and Distributed Processing with Applications, IEEE TrustCom/BigDataSE/ISPA 2016, Tianjin, China, 23-26 August 2016

    To encourage collaboration among single intrusion detection systems (IDSs), collaborative intrusion detection networks (CIDNs) have been developed to enable different IDS nodes to communicate information with each other. This distributed network infrastructure aims to improve the detection performance over a single IDS, but may suffer from various insider attacks such as collusion attacks, where several malicious nodes collaborate to perform adversarial actions. To defend against insider threats, challenge-based trust mechanisms have been proposed in the literature and shown to be robust against collusion attacks. However, we identify that such mechanisms depend heavily on an assumption about malicious nodes that is unlikely to be realistic and may lead to a weak threat model in practical scenarios. In this paper, we analyze the robustness of challenge-based CIDNs in real-world applications and present an advanced collusion attack, called the random poisoning attack, which derives from existing attacks. In the evaluation, we investigate the attack performance in both simulated and real CIDN environments. Experimental results demonstrate that our attack enables a malicious node to send untruthful information without greatly decreasing its trust value. Our research aims to stimulate further work on designing robust CIDN frameworks in practice.
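A toy model can make the attack idea concrete: a challenge-based mechanism only adjusts trust based on challenge responses, so a node that answers challenges honestly but randomly poisons its ordinary alert rankings keeps its trust value high. All parameters and names below are illustrative assumptions, not values from the paper.

```python
import random

def simulate_trust(rounds, poison_prob, seed=0):
    """Toy challenge-based trust model under a random poisoning attack.

    The malicious node always answers challenges truthfully, so its trust
    only rises; with probability `poison_prob` it poisons ordinary alert
    rankings, which this mechanism never inspects.
    """
    rng = random.Random(seed)
    trust = 0.5          # illustrative starting trust value
    poisoned = 0
    for _ in range(rounds):
        if rng.random() < 0.1:                 # 10% of messages are challenges
            trust = min(1.0, trust + 0.01)     # truthful answer -> trust rises
        elif rng.random() < poison_prob:
            poisoned += 1                      # untruthful alert, undetected
    return trust, poisoned

trust, poisoned = simulate_trust(rounds=1000, poison_prob=0.5)
print(trust >= 0.5, poisoned > 0)  # trust never drops; poisoning still happens
```

The sketch shows why the assumption matters: as long as challenges are the only trust signal, poisoning non-challenge traffic is invisible to the mechanism.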


    Investigating the influence of special on-off attacks on challenge-based collaborative intrusion detection networks

    Intrusions are becoming more complicated with the recent development of adversarial techniques. To boost the detection accuracy of a single intrusion detector, the collaborative intrusion detection network (CIDN) has been developed, allowing intrusion detection system (IDS) nodes to exchange data with each other. Insider attacks are a great threat to such collaborative networks, where an attacker has authorized access within the network. In the literature, challenge-based trust mechanisms are effective at identifying malicious nodes by sending challenges. However, such mechanisms depend heavily on two assumptions, which leave CIDNs vulnerable to advanced insider attacks in practice. In this work, we investigate the influence of advanced on-off attacks on challenge-based CIDNs, in which a malicious node can respond truthfully to one IDS node but behave maliciously toward another. To evaluate the attack performance, we conducted two experiments in a simulated and a real CIDN environment. The obtained results demonstrate that our designed attack is able to compromise the robustness of challenge-based CIDNs in practice; that is, some malicious nodes can behave untruthfully without timely detection.
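The per-target nature of the on-off attack can be sketched in a few lines: the attacker partitions its peers into a set it keeps happy and a set it deceives, so each evaluator sees a different trust picture. The function and scoring rule below are my simplification, not the paper's trust model.

```python
def on_off_responses(evaluators, trusted_set, rounds):
    """Toy special on-off attack: answer some peers truthfully, others falsely.

    Each evaluator scores the attacker by the fraction of truthful answers it
    observed, a crude stand-in for a challenge-based trust value.
    """
    truthful_counts = {e: 0 for e in evaluators}
    for _ in range(rounds):
        for e in evaluators:
            if e in trusted_set:          # keep this evaluator's trust high
                truthful_counts[e] += 1
            # else: send an untruthful response (scores 0 with this evaluator)
    return {e: truthful_counts[e] / rounds for e in evaluators}

trust = on_off_responses(["A", "B"], trusted_set={"A"}, rounds=100)
print(trust)  # -> {'A': 1.0, 'B': 0.0}
```

Since node A never observes the misbehavior directed at node B, A continues to trust the attacker, which is exactly the gap between the mechanism's assumptions and this threat model.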

    MIAEC: Missing data imputation based on the evidence Chain

    © 2013 IEEE. Missing or incorrect data caused by improper operations can seriously compromise security investigation. Missing data can not only damage the integrity of the information but also bias data mining and analysis. Therefore, it is necessary to impute missing values in the data preprocessing phase to reduce the impact of data missing as a result of human error and operations. The performance of existing missing-value imputation approaches cannot satisfy analysis requirements due to their low accuracy and poor stability, especially the rapidly decreasing imputation accuracy as the rate of missing data increases. In this paper, we propose a novel missing-value imputation algorithm based on the evidence chain (MIAEC), which first mines all relevant evidence of missing values in each data tuple and then combines this evidence to build an evidence chain for further estimation of the missing values. To extend MIAEC to large-scale data processing, we apply the map-reduce programming model to distribute and parallelize MIAEC. Experimental results show that the proposed approach provides higher imputation accuracy than the missing-data imputation algorithm based on naive Bayes, the mode imputation algorithm, and a missing-data imputation algorithm based on K-nearest neighbors. MIAEC's imputation accuracy also remains stable as the rate of missing values increases or the positions of missing values change. MIAEC is also shown to be suitable for distributed computing platforms and achieves an ideal speedup ratio.
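The evidence-chain construction itself is not reproduced in the abstract. As a loose illustration of the underlying idea of estimating a missing value from related attribute values acting as evidence, the toy function below imputes a missing categorical value by majority vote over complete tuples that agree on the remaining attributes. This is a deliberate simplification for illustration, not the MIAEC algorithm.

```python
from collections import Counter

def impute_by_evidence(rows, target_idx):
    """Fill None at target_idx using complete tuples matching on other fields."""
    filled = []
    for row in rows:
        if row[target_idx] is not None:
            filled.append(list(row))
            continue
        # Evidence: complete tuples that agree with this row everywhere else.
        votes = Counter(
            r[target_idx] for r in rows
            if r[target_idx] is not None
            and all(r[i] == row[i] for i in range(len(row)) if i != target_idx)
        )
        new_row = list(row)
        new_row[target_idx] = votes.most_common(1)[0][0] if votes else None
        filled.append(new_row)
    return filled

# Hypothetical security-log tuples: (protocol, port, action).
data = [
    ["http", 80, "allow"],
    ["http", 80, "allow"],
    ["ssh",  22, "deny"],
    ["http", 80, None],      # missing value to impute
]
print(impute_by_evidence(data, 2)[3][2])  # -> allow
```

MIAEC goes further by chaining multiple pieces of partial evidence instead of requiring exact agreement on all other attributes, which is what keeps its accuracy stable at high missing-data rates.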