885 research outputs found

    Inverting the Feature Visualization Process for Feedforward Neural Networks

    This work sheds light on the invertibility of feature visualization in neural networks. Since the input generated by feature visualization using activation maximization does not, in general, yield the feature objective it was optimized for, we investigate optimizing for the feature objective that yields this input. Given the objective function used in activation maximization, which measures how closely a given input resembles the feature objective, we exploit the fact that the gradient of this function with respect to the inputs is, up to a scaling factor, linear in the objective. This observation is used to find the optimal feature objective by computing a closed-form solution that minimizes the gradient. By means of Inverse Feature Visualization, we intend to provide an alternative view on a network's sensitivity to certain inputs that considers feature objectives rather than activations.
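    As a point of reference for what this work inverts, the sketch below shows the forward direction, standard activation maximization for a single unit. It is only an illustrative Python/PyTorch stand-in; the model, layer, unit index, and hyperparameters are assumptions, not the paper's setup.

```python
# Illustrative activation-maximization sketch (forward direction only):
# optimize an input so a chosen unit responds strongly.  All choices
# (model, layer, unit, steps, learning rate) are assumptions for the example.
import torch
import torchvision.models as models

model = models.alexnet(weights=None).eval()
unit = 42                                   # hypothetical channel index

x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    act = model.features(x)                 # (1, C, H, W) feature maps
    loss = -act[0, unit].mean()             # push the chosen channel's activation up
    loss.backward()
    opt.step()

# The paper asks the inverse question: given such an x, which feature objective
# best explains it, exploiting that the gradient is (up to scale) linear in the
# objective and admits a closed-form minimizer.
```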

    Understanding Autoencoders with Information Theoretic Concepts

    Despite their great success in practical applications, there is still a lack of theoretical and systematic methods to analyze deep neural networks. In this paper, we illustrate an advanced information-theoretic methodology to understand the dynamics of learning and the design of autoencoders, a special type of deep learning architecture that resembles a communication channel. By generalizing the information plane to any cost function, and inspecting the roles and dynamics of different layers using layer-wise information quantities, we emphasize the role that mutual information plays in quantifying learning from data. We further suggest, and also experimentally validate for mean square error training, three fundamental properties regarding the layer-wise flow of information and the intrinsic dimensionality of the bottleneck layer, using respectively the data processing inequality and the identification of a bifurcation point in the information plane that is controlled by the given data. Our observations have a direct impact on the optimal design of autoencoders, the design of alternative feedforward training methods, and even on the problem of generalization. Comment: Paper accepted by Neural Networks. Code for estimating information quantities and drawing the information plane is available from https://drive.google.com/drive/folders/1e5sIywZfmWp4Dn0WEesb6fqQRM0DIGxZ?usp=sharin
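    For readers who want a concrete picture of the layer-wise quantities involved, the following is a minimal sketch of a simple binning-based mutual-information estimator. It is not the estimator used in the paper (the authors distribute their own code); the variable names and bin count are illustrative assumptions.

```python
# Minimal binning-based mutual-information estimator, as an illustrative stand-in
# for the information-plane quantities discussed above (not the paper's estimator).
import numpy as np

def mutual_information(x, z, bins=30):
    """Estimate I(X; Z) from 1-D samples of an input summary X and a layer code Z."""
    joint, _, _ = np.histogram2d(x, z, bins=bins)
    pxz = joint / joint.sum()                       # joint distribution estimate
    px = pxz.sum(axis=1, keepdims=True)             # marginal of X
    pz = pxz.sum(axis=0, keepdims=True)             # marginal of Z
    nz = pxz > 0
    return float(np.sum(pxz[nz] * np.log(pxz[nz] / (px @ pz)[nz])))

# Tracking I(X; Z_l) for successive layers l over training epochs traces curves on
# the information plane; the data processing inequality predicts the layer-wise
# ordering I(X; Z_1) >= I(X; Z_2) >= ... along the encoder.
```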

    Visualizing Deep Convolutional Neural Networks Using Natural Pre-Images

    Image representations, from SIFT and bag of visual words to Convolutional Neural Networks (CNNs), are a crucial component of almost all computer vision systems. However, our understanding of them remains limited. In this paper we study several landmark representations, both shallow and deep, by a number of complementary visualization techniques. These visualizations are based on the concept of the "natural pre-image", namely a natural-looking image whose representation has some notable property. We study in particular three such visualizations: inversion, in which the aim is to reconstruct an image from its representation; activation maximization, in which we search for patterns that maximally stimulate a representation component; and caricaturization, in which the visual patterns that a representation detects in an image are exaggerated. We pose these as a regularized energy-minimization framework and demonstrate its generality and effectiveness. In particular, we show that this method can invert representations such as HOG more accurately than recent alternatives while also being applicable to CNNs. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance. Comment: A substantially extended version of http://www.robots.ox.ac.uk/~vedaldi/assets/pubs/mahendran15understanding.pdf. arXiv admin note: text overlap with arXiv:1412.003
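    The inversion visualization can be summarized as a small optimization problem; the sketch below is a hedged PyTorch rendering of the general regularized energy-minimization recipe, with an arbitrary representation, a total-variation prior, and weights that are not the paper's settings.

```python
# Illustrative inversion-as-energy-minimization sketch: recover a natural-looking
# pre-image whose representation matches a target code.  Representation, priors,
# and weights are assumptions for the example, not the paper's configuration.
import torch
import torch.nn.functional as F
import torchvision.models as models

cnn = models.vgg16(weights=None).features[:16].eval()     # representation Phi
target = cnn(torch.randn(1, 3, 224, 224)).detach()        # Phi(x0) for some image x0

x = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

def total_variation(img):
    return ((img[..., 1:, :] - img[..., :-1, :]).abs().mean()
            + (img[..., :, 1:] - img[..., :, :-1]).abs().mean())

for _ in range(300):
    opt.zero_grad()
    loss = (F.mse_loss(cnn(x), target)        # match the target representation
            + 1e-1 * total_variation(x)       # natural-image (smoothness) prior
            + 1e-4 * x.norm())                # keep pixel values bounded
    loss.backward()
    opt.step()
```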

    Visualizing and Comparing Convolutional Neural Networks

    Convolutional Neural Networks (CNNs) have achieved error rates comparable to those of well-trained humans on the ILSVRC2014 image classification task. To achieve better performance, the complexity of CNNs is continually increasing with deeper and bigger architectures. Though CNNs exhibit promising external classification behavior, understanding of their internal working mechanism is still limited. In this work, we attempt to understand the internal working mechanism of CNNs by probing their internal representations in two complementary ways: visualizing patches in the representation spaces constructed by different layers, and visualizing the visual information kept in each layer. We further compare CNNs with different depths and show the advantages brought by deeper architectures. Comment: 9 pages and 7 figures, submitted to ICLR201
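    One common way to "visualize patches in a representation space" is to retrieve nearest neighbours in a layer's feature space; the sketch below illustrates that probe under assumed choices of model, truncation point, and patch size, and is not the authors' exact procedure.

```python
# Illustrative probe: compare image patches by distance in one layer's feature space.
# Model, truncation point, and patch tensor are assumptions for the example.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
trunk = torch.nn.Sequential(*list(model.children())[:-2])   # drop avgpool and fc

patches = torch.randn(64, 3, 64, 64)              # stand-in for cropped image patches
with torch.no_grad():
    feats = trunk(patches).flatten(1)             # one feature vector per patch

query = feats[0].unsqueeze(0)
dists = torch.cdist(query, feats).squeeze(0)      # distances in representation space
print(dists.topk(5, largest=False).indices)       # patches that look alike to this layer
```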

    How convolutional neural network see the world - A survey of convolutional neural network visualization methods

    Nowadays, Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition, and image retrieval. These achievements benefit from the CNNs' outstanding capability to learn input features through deep layers of neurons and an iterative training process. However, the learned features are hard to identify and interpret from a human vision perspective, causing a lack of understanding of the CNNs' internal working mechanism. To improve CNN interpretability, CNN visualization is widely utilized as a qualitative analysis method that translates the internal features into visually perceptible patterns, and many CNN visualization works have been proposed in the literature to interpret CNNs from the perspectives of network structure, operation, and semantic concept. In this paper, we provide a comprehensive survey of several representative CNN visualization methods, including Activation Maximization, Network Inversion, Deconvolutional Neural Networks (DeconvNet), and Network Dissection-based visualization. These methods are presented in terms of motivations, algorithms, and experimental results. Based on these visualization methods, we also discuss their practical applications to demonstrate the significance of CNN interpretability in areas such as network design, optimization, and security enhancement. Comment: 32 pages, 21 figures. Mathematical Foundations of Computin

    Confidential Inference via Ternary Model Partitioning

    Today's cloud vendors are competing to provide various offerings that simplify and accelerate AI service deployment. However, cloud users always have concerns about the confidentiality of their runtime data, which are processed on third parties' compute infrastructures. Information disclosure of user-supplied data may jeopardize users' privacy and breach increasingly stringent data protection regulations. In this paper, we systematically investigate the life cycles of inference inputs in deep learning image classification pipelines and identify how the information could be leaked. Based on the discovered insights, we develop a Ternary Model Partitioning mechanism and employ trusted execution environments to mitigate the identified information leakages. Our research prototype consists of two cooperative components: (1) Model Assessment Framework, a local model evaluation and partitioning tool that assists cloud users in deployment preparation; and (2) Infenclave, an enclave-based model serving system for online confidential inference in the cloud. We have conducted a comprehensive security and performance evaluation on three representative ImageNet-level deep learning models with different network depths and architectural complexity. Our results demonstrate the feasibility of launching confidential inference services in the cloud with maximized confidentiality guarantees and low performance costs.
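    Setting the trusted-execution machinery aside, the partitioning idea itself can be sketched as splitting one network into three sequential segments; the split points and model below are illustrative assumptions, not the output of the paper's Model Assessment Framework, and no enclave code is shown.

```python
# Illustrative ternary split of a sequential CNN into three stages, so that the
# privacy-critical stages could be served inside a trusted execution environment
# while the middle stage runs on untrusted hardware.  Split points are arbitrary here.
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()
layers = list(model.features) + [torch.nn.Flatten()] + list(model.classifier)

front  = torch.nn.Sequential(*layers[:5])      # sees raw input        -> trusted
middle = torch.nn.Sequential(*layers[5:20])    # intermediate features -> untrusted
back   = torch.nn.Sequential(*layers[20:])     # produces predictions  -> trusted

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    y = back(middle(front(x)))                 # same forward pass, just split in three
print(y.shape)                                 # (1, 1000) class scores
```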

    Measuring and Understanding Sensory Representations within Deep Networks Using a Numerical Optimization Framework

    A central challenge in sensory neuroscience is describing how the activity of populations of neurons can represent useful features of the external environment. However, while neurophysiologists have long been able to record the responses of neurons in awake, behaving animals, it is another matter entirely to say what a given neuron does. A key problem is that in many sensory domains, the space of all possible stimuli that one might encounter is effectively infinite; in vision, for instance, natural scenes are combinatorially complex, and an organism will only encounter a tiny fraction of possible stimuli. As a result, even describing the response properties of sensory neurons is difficult, and investigations of neuronal function are almost always critically limited by the number of stimuli that can be considered. In this paper, we propose a closed-loop, optimization-based experimental framework for characterizing the response properties of sensory neurons, building on past efforts in closed-loop experimental methods and leveraging recent advances in artificial neural networks to serve as a proving ground for our techniques. Specifically, using deep convolutional neural networks, we asked whether modern black-box optimization techniques can be used to interrogate the "tuning landscape" of an artificial neuron in a deep, nonlinear system, without imposing significant constraints on the space of stimuli under consideration. We introduce a series of measures to quantify the tuning landscapes, and show how these relate to the performance of the networks in an object recognition task. To the extent that deep convolutional neural networks increasingly serve as de facto working hypotheses for biological vision, we argue that developing a unified approach for studying both artificial and biological systems holds great potential to advance both fields together.
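    As a flavor of the black-box, closed-loop probing described above, the sketch below hill-climbs a stimulus to raise one artificial unit's response without using gradients; the unit, model, and the plain random-mutation search are stand-ins for the modern optimizers the paper actually evaluates.

```python
# Illustrative gradient-free probe of one unit's "tuning landscape": propose a
# perturbed stimulus, record the unit's response, keep it if the response improved.
# Model, unit index, and the naive hill-climber are assumptions for the example.
import torch
import torchvision.models as models

model = models.alexnet(weights=None).eval()

@torch.no_grad()
def probe(img):
    return model.features(img)[0, 10].mean().item()   # response of a hypothetical unit

best = torch.rand(1, 3, 224, 224)
best_score = probe(best)
for _ in range(500):
    candidate = (best + 0.05 * torch.randn_like(best)).clamp(0, 1)   # mutate stimulus
    score = probe(candidate)                                         # closed-loop measurement
    if score > best_score:
        best, best_score = candidate, score
print(best_score)   # one sampled point on the unit's tuning landscape
```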

    Privacy-Preserving Deep Inference for Rich User Data on The Cloud

    Deep neural networks are increasingly being used in a variety of machine learning applications applied to rich user data on the cloud. However, this approach introduces a number of privacy and efficiency challenges, as the cloud operator can perform secondary inferences on the available data. Recently, advances in edge processing have paved the way for more efficient, and private, data processing at the source for simple tasks and lighter models, though larger and more complicated models remain a challenge. In this paper, we present a hybrid approach for breaking down large, complex deep models for cooperative, privacy-preserving analytics. We do this by breaking down the popular deep architectures and fine-tuning them in a particular way. We then evaluate the privacy benefits of this approach based on the information exposed to the cloud service. We also assess the local inference cost of different layers on a modern handset for mobile applications. Our evaluations show that by using certain kinds of fine-tuning and embedding techniques, and at a small processing cost, we can greatly reduce the level of information available to unintended tasks applied to the data features on the cloud, hence achieving the desired trade-off between privacy and performance. Comment: arXiv admin note: substantial text overlap with arXiv:1703.0295
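    The per-layer cost measurement mentioned above can be approximated with a simple timing loop over a split model; the sketch below uses an assumed mobile-friendly architecture and wall-clock timing on whatever machine runs it, which only stands in for the handset measurements in the paper.

```python
# Illustrative per-layer inference-cost measurement: time each block of a split model
# and report the size of the intermediate features it would hand to the next stage.
# The architecture and single-run timing are assumptions for the example.
import time
import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for i, block in enumerate(model.features):
        start = time.perf_counter()
        x = block(x)
        elapsed_ms = 1000 * (time.perf_counter() - start)
        print(f"block {i:2d}: {elapsed_ms:6.2f} ms, feature shape {tuple(x.shape)}")
```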

    Towards Better Analysis of Deep Convolutional Neural Networks

    Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial and error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce the visual clutter caused by the large number of connections between neurons. We evaluated our method on a set of CNNs, and the results are generally favorable. Comment: Submitted to VIS 201
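    The directed-acyclic-graph formulation mentioned above is easy to make concrete; the sketch below builds a layer-level DAG for a plain sequential CNN with networkx, an illustrative simplification of the neuron-cluster graph the visual analytics system actually works with.

```python
# Illustrative DAG formulation: layers as nodes, the data flow between consecutive
# layers as directed edges.  networkx and the sequential model are example choices.
import networkx as nx
import torchvision.models as models

model = models.alexnet(weights=None)
dag = nx.DiGraph()

prev = "input"
dag.add_node(prev)
for name, module in model.features.named_children():
    node = f"features.{name} ({type(module).__name__})"
    dag.add_edge(prev, node)        # activations flow from prev to this layer
    prev = node

print(nx.is_directed_acyclic_graph(dag), dag.number_of_nodes())
```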

    On Interpretability of Artificial Neural Networks: A Survey

    Deep learning, as represented by deep artificial neural networks (DNNs), has achieved great success in many important areas that deal with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide acceptance in mission-critical applications such as medical diagnosis and therapy. Due to the huge potential of deep learning, interpreting neural networks has recently attracted much research attention. In this paper, based on our comprehensive taxonomy, we systematically review recent studies on understanding the mechanisms of neural networks, describe applications of interpretability, especially in medicine, and discuss future directions of interpretability research, such as its relation to fuzzy logic and brain science.