
    Recovering Localized Adversarial Attacks

    Deep convolutional neural networks have achieved great successes over recent years, particularly in the domain of computer vision. They are fast, convenient, and -- thanks to mature frameworks -- relatively easy to implement and deploy. However, their reasoning is hidden inside a black box, in spite of a number of proposed approaches that try to provide human-understandable explanations for the predictions of neural networks. It is still a matter of debate which of these explainers are best suited for which situations, and how to quantitatively evaluate and compare them. In this contribution, we focus on the capabilities of explainers for convolutional deep neural networks in an extreme situation: a setting in which humans and networks fundamentally disagree. Deep neural networks are susceptible to adversarial attacks that deliberately modify input samples to mislead a neural network's classification, without affecting how a human observer interprets the input. Our goal with this contribution is to evaluate explainers by investigating whether they can identify adversarially attacked regions of an image. In particular, we quantitatively and qualitatively investigate the capability of three popular explainers of classifications -- classic saliency, guided backpropagation, and LIME -- with respect to their ability to identify regions of attack as the explanatory regions for the (incorrect) prediction in representative examples from image classification. We find that LIME outperforms the other explainers.
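    The evaluation idea described above (does an explainer's heat map land on the attacked pixels?) can be made concrete with a small sketch. The snippet below is only an illustration of that measurement, not the paper's protocol: it computes a classic gradient-saliency map for a torchvision ResNet-18 and scores its overlap (IoU) with a hypothetical binary mask of attacked pixels; the model, input, mask, and the 5% threshold are all assumptions.

```python
# Minimal sketch, assuming a trained torchvision model, a 3x224x224 input,
# and a known binary mask of adversarially perturbed pixels (all hypothetical).
import torch
import torchvision.models as models

def gradient_saliency(model, image, target_class):
    """Classic saliency: absolute input gradient w.r.t. the target class score."""
    image = image.detach().clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values  # collapse channels -> HxW map

def attack_localization_iou(saliency, attack_mask, top_fraction=0.05):
    """Overlap between the top-k% most salient pixels and the attacked region."""
    k = int(top_fraction * saliency.numel())
    threshold = saliency.flatten().topk(k).values.min()
    explained = saliency >= threshold
    inter = (explained & attack_mask).sum().float()
    union = (explained | attack_mask).sum().float()
    return (inter / union).item()

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(3, 224, 224)                       # stand-in for an attacked image
attack_mask = torch.zeros(224, 224, dtype=torch.bool)
attack_mask[80:140, 80:140] = True                    # hypothetical attacked patch
pred = model(image.unsqueeze(0)).argmax().item()
sal = gradient_saliency(model, image, pred)
print(f"IoU with attacked region: {attack_localization_iou(sal, attack_mask):.3f}")
```

    The same IoU score could be computed for guided backpropagation or LIME by swapping out the saliency function, which is the kind of side-by-side comparison the abstract describes.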

    Large scale image classification and object detection

    Dissertation supervisor: Dr. Tony X. Han. Significant advances in image classification and object detection research have been achieved in the past decade. Deep convolutional neural networks have exhibited superior performance in many visual recognition tasks, including image classification, object detection, and scene labeling, owing to their large learning capacity and resistance to overfitting. However, learning a robust deep CNN model for object recognition remains challenging because image classification and object detection are severely unbalanced, large-scale problems. In this dissertation, we aim to improve the performance of image classification and object detection algorithms built on deep convolutional neural networks through the following strategies. We introduce the Deep Neural Pattern, a local feature densely extracted from an image of arbitrary resolution using a well-trained deep convolutional neural network. We propose a latent CNN framework that automatically selects the most discriminative region in the image to reduce the effect of irrelevant regions. We also develop a new combination scheme for multiple CNNs, the Latent Model Ensemble, to overcome the local-minima problem of CNNs. In addition, a weakly supervised CNN framework, referred to as Multiple Instance Learning Convolutional Neural Networks, is developed to relax strict label requirements. Finally, a novel residual-network architecture, Residual Networks of Residual Networks, is constructed to improve the optimization of very deep convolutional neural networks. All the proposed algorithms are validated by thorough experiments and show solid accuracy on large-scale object detection and recognition benchmarks.
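    The "Residual Networks of Residual Networks" idea mentioned above adds an extra level of shortcut connections on top of ordinary residual blocks. The sketch below is only an illustration of that two-level shortcut structure; the block sizes, depth, and projection choices are assumptions and not the dissertation's exact architecture.

```python
# Illustrative sketch of a two-level shortcut in the spirit of
# Residual Networks of Residual Networks (RoR); details are assumed.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Standard residual block: two 3x3 convs plus a block-level identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)    # first-level (block) shortcut

class RoRGroup(nn.Module):
    """A stack of residual blocks wrapped by one extra, group-level shortcut."""
    def __init__(self, channels, num_blocks=3):
        super().__init__()
        self.blocks = nn.Sequential(*[BasicBlock(channels) for _ in range(num_blocks)])

    def forward(self, x):
        return torch.relu(self.blocks(x) + x)  # second-level (group) shortcut

x = torch.rand(1, 64, 32, 32)
print(RoRGroup(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```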

    An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation

    Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) we give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images; 2) we propose a deep dual-path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high-resolution outputs; 3) we show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.
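    The core mechanism described above is sampling training patches preferentially from regions where the current error map is high. The snippet below is a minimal sketch of that idea, not the paper's exact sampling rule: it draws patch centers with probability proportional to a (hypothetical) per-voxel error map; the map, patch count, and normalization are assumptions.

```python
# Minimal NumPy sketch of error-guided patch sampling; the exact rule, patch
# geometry, and error-map construction used in the paper are not given here.
import numpy as np

def sample_patch_centers(error_map, num_patches, rng=None):
    """Draw patch centers with probability proportional to the current error map."""
    rng = rng or np.random.default_rng()
    probs = error_map.flatten().astype(np.float64)
    probs /= probs.sum()                               # normalize to a distribution
    flat_idx = rng.choice(error_map.size, size=num_patches, p=probs)
    return np.stack(np.unravel_index(flat_idx, error_map.shape), axis=1)  # (N, ndim)

# Toy 2D example: errors concentrated in one corner attract most of the samples.
error_map = np.ones((128, 128))
error_map[:32, :32] = 10.0      # pretend the network currently struggles here
centers = sample_patch_centers(error_map, num_patches=5)
print(centers)
```

    In a training loop, the error map would be refreshed periodically from validation or training residuals so that sampling keeps tracking the regions the network currently gets wrong.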