207 research outputs found

    Diffusion of Context and Credit Information in Markovian Models

    This paper studies the ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and shows how it makes the task of learning to represent long-term context for sequential data very difficult. This phenomenon hurts the forward propagation of long-term context information, as well as the learning of a hidden state representation of long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this problem of diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., when the transition probability matrices are sparse and the model is essentially deterministic. The results found in this paper apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm.
    Comment: See http://www.jair.org/ for any accompanying file
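    The diffusion phenomenon can be illustrated numerically: repeatedly multiplying a state distribution by a well-mixed ergodic transition matrix washes out information about the initial state, while a near-deterministic matrix (probabilities close to 0 or 1) preserves it. A minimal sketch; the matrices are illustrative, not taken from the paper:

```python
import numpy as np

def propagate(T, p0, steps):
    """Propagate a state distribution p0 through `steps` applications
    of the row-stochastic transition matrix T (rows sum to 1)."""
    p = p0.copy()
    for _ in range(steps):
        p = p @ T
    return p

# Well-mixed ergodic chain: context about the initial state diffuses away.
T_mixed = np.array([[0.6, 0.4],
                    [0.4, 0.6]])
# Near-deterministic chain: transition probabilities close to 0 or 1.
T_sparse = np.array([[0.99, 0.01],
                     [0.01, 0.99]])

p0 = np.array([1.0, 0.0])  # start with certainty in state 0
print(propagate(T_mixed, p0, 50))   # ~uniform: initial state forgotten
print(propagate(T_sparse, p0, 50))  # still concentrated on state 0
```

    After 50 steps the mixed chain is indistinguishable from its stationary distribution, while the sparse chain still carries most of the initial-state information: the second eigenvalue of the transition matrix (0.2 versus 0.98) controls how fast context decays.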

    Adversarial training to improve robustness of adversarial deep neural classifiers in the NOvA experiment

    The NOvA experiment is a long-baseline neutrino oscillation experiment consisting of two functionally identical detectors situated off-axis in Fermilab’s NuMI neutrino beam. The Near Detector observes the unoscillated beam at Fermilab, while the Far Detector observes the oscillated beam 810 km away. This allows for measurements of the oscillation probabilities for multiple oscillation channels, ν_µ → ν_µ, anti-ν_µ → anti-ν_µ, ν_µ → ν_e and anti-ν_µ → anti-ν_e, leading to measurements of the neutrino oscillation parameters sin²θ_23, ∆m²_32 and δ_CP. These measurements are produced from an extensive analysis of the recorded data. Deep neural networks are deployed at multiple stages of this analysis. The Event CVN network is deployed for the purpose of identifying and classifying the interaction types of selected neutrino events. The effects of systematic uncertainties present in the measurements on the network performance are investigated and are found to cause negligible variations. The robustness of these network trainings is therefore demonstrated, which further justifies their current usage in the analysis beyond the standard validation. The effects on the network performance of larger systematic alterations to the training datasets beyond the systematic uncertainties, such as an exchange of the neutrino event generators, are investigated. The differences in network performance corresponding to the introduced variations are found to be minimal. Domain adaptation techniques are implemented in the AdCVN framework. These methods are deployed for the purpose of improving the Event CVN robustness for scenarios with systematic variations in the underlying data.
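    In the standard two-flavor approximation (a textbook simplification, not the full three-flavor fit used in the NOvA analysis), the ν_µ survival probability is P(ν_µ → ν_µ) = 1 − sin²(2θ) sin²(1.267 ∆m² L / E), with ∆m² in eV², L in km and E in GeV. A sketch with illustrative parameter values near the NOvA configuration:

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor nu_mu survival probability. The constant 1.267
    converts eV^2 * km / GeV into the oscillation phase in radians."""
    phase = 1.267 * dm2_ev2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative values: 810 km baseline, ~2 GeV beam energy,
# atmospheric mass splitting ~2.4e-3 eV^2, maximal mixing.
p = survival_probability(sin2_2theta=1.0, dm2_ev2=2.4e-3, L_km=810, E_GeV=2.0)
print(p)
```

    With these values the phase is close to π/2, so the 810 km baseline sits near the first oscillation maximum and most ν_µ have oscillated away, which is exactly what makes the Far Detector measurement sensitive to the oscillation parameters.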

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    A deep learning theory for neural networks grounded in physics

    In the last decade, deep learning has become a major component of artificial intelligence, leading to a series of breakthroughs across a wide variety of domains. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer from speed and energy inefficiency issues, due to the separation of memory and processing in these architectures. To solve these problems, the field of neuromorphic computing aims at implementing neural networks on hardware architectures that merge memory and processing, just like brains do. In this thesis, we argue that building large, fast and efficient neural networks on neuromorphic architectures also requires rethinking the algorithms to implement and train them. We present an alternative mathematical framework, also compatible with SGD, which offers the possibility to design neural networks in substrates that directly exploit the laws of physics. Our framework applies to a very broad class of models, namely those whose state or dynamics are described by variational equations. This includes physical systems whose equilibrium state minimizes an energy function, and physical systems whose trajectory minimizes an action functional (principle of least action).
We present a simple procedure to compute the loss gradients in such systems, called equilibrium propagation (EqProp), which requires solely locally available information for each trainable parameter. Since many models in physics and engineering can be described by variational principles, our framework has the potential to be applied to a broad variety of physical systems, whose applications extend to various fields of engineering, beyond neuromorphic computing.
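    The gradient estimate at the heart of EqProp can be checked on a toy model. Below is an illustrative scalar system (not one of the thesis's physical substrates): energy E(s) = s²/2 − w·x·s, whose free equilibrium is s* = w·x, and cost C(s) = (s − y)²/2. Nudging the equilibrium by minimizing E + β·C and contrasting the two equilibria recovers the loss gradient as β → 0:

```python
def free_equilibrium(w, x):
    # Minimizes E(s) = s**2 / 2 - w * x * s, giving s* = w * x.
    return w * x

def nudged_equilibrium(w, x, y, beta):
    # Minimizes E(s) + beta * C(s) with C(s) = (s - y)**2 / 2.
    return (w * x + beta * y) / (1.0 + beta)

def eqprop_gradient(w, x, y, beta):
    """EqProp estimate: (1/beta) * (dE/dw at nudged - dE/dw at free),
    where dE/dw = -x * s. Only local quantities (x and s) are used."""
    s_free = free_equilibrium(w, x)
    s_nudged = nudged_equilibrium(w, x, y, beta)
    return (-x * s_nudged + x * s_free) / beta

w, x, y = 0.7, 2.0, 1.0
true_grad = (w * x - y) * x   # d/dw of C(s*(w)) = (w*x - y)**2 / 2
estimate = eqprop_gradient(w, x, y, beta=1e-4)
print(true_grad, estimate)
```

    In this toy system the estimate equals the true gradient divided by (1 + β), so the two agree to within a factor of order β, illustrating why a small nudging strength gives an accurate local gradient.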

    Dynamic Mathematics for Automated Machine Learning Techniques

    Machine Learning and Neural Networks have been gaining popularity and are widely considered the driving force of the Fourth Industrial Revolution. However, modern machine learning techniques such as backpropagation training were firmly established in 1986, while computer vision was revolutionised in 2012 with the introduction of AlexNet. Given all these accomplishments, why are neural networks still not an integral part of our society? ``Because they are difficult to implement in practice.'' ``I'd like to use machine learning, but I can't invest much time.'' The concept of Automated Machine Learning (AutoML) was first proposed by Professor Frank Hutter of the University of Freiburg. Machine learning is not simple; it requires a practitioner to have a thorough understanding of the attributes of their data and the components their model entails. AutoML is the effort to automate all tedious aspects of machine learning to form a clean data-analysis pipeline. This thesis is our effort to develop and understand ways to automate machine learning. Specifically, we focus on Recurrent Neural Networks (RNNs), Meta-Learning, and Continual Learning. We studied continual learning to enable a network to sequentially acquire skills in a dynamic environment; we studied meta-learning to understand how a network can be configured efficiently; and we studied RNNs to understand the consequences of consecutive actions. Our RNN study focused on mathematical interpretability: we described a large variety of RNNs as one mathematical class in order to understand their core network mechanism. This enabled us to extend meta-learning beyond network configuration to network pruning and continual learning. It also provided insights into how a single network should be consecutively configured, and led us to the creation of a simple generic patch that is compatible with several existing continual learning archetypes.
This patch enhanced the robustness of continual learning techniques and allowed them to generalise better. By and large, this thesis presents a series of extensions that make AutoML simple, efficient, and robust. More importantly, all of our methods are motivated by mathematical understanding through the lens of dynamical systems; thus, we also increase the interpretability of AutoML concepts.

    Towards Efficient Lifelong Machine Learning in Deep Neural Networks

    Humans continually learn and adapt to new knowledge and environments throughout their lifetimes. Rarely does learning new information cause humans to catastrophically forget previous knowledge. While deep neural networks (DNNs) now rival human performance on several supervised machine perception tasks, when updated on changing data distributions, they catastrophically forget previous knowledge. Enabling DNNs to learn new information over time opens the door for new applications such as self-driving cars that adapt to seasonal changes or smartphones that adapt to changing user preferences. In this dissertation, we propose new methods and experimental paradigms for efficiently training continual DNNs without forgetting. We then apply these methods to several visual and multi-modal perception tasks including image classification, visual question answering, analogical reasoning, and attribute and relationship prediction in visual scenes.
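    One widely used family of methods for training without forgetting is rehearsal: retaining a small memory of past examples and mixing them into each update. The dissertation's specific methods are not reproduced here; the following is a generic sketch of a reservoir-sampled replay buffer, with all names illustrative:

```python
import random

class ReplayBuffer:
    """Fixed-size episodic memory filled by reservoir sampling, so every
    example seen so far has an equal chance of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep the new example with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw a rehearsal mini-batch to mix with the current task's data."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for task_id in range(5):          # a stream of 5 sequential tasks
    for i in range(1000):         # 1000 examples per task
        buf.add((task_id, i))
# The buffer stays small but spans the whole stream, not just the last task.
print(len(buf.buffer), len({t for t, _ in buf.buffer}))
```

    Because every stored slot is uniform over the whole stream, each of the 5 tasks keeps roughly 20 of the 100 slots even though the buffer never grows, which is what lets replayed gradients counteract forgetting of earlier tasks.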

    Diabetic Retinopathy Classification and Interpretation using Deep Learning Techniques

    Diabetic Retinopathy is a chronic disease and one of the main causes of blindness and visual impairment for diabetic patients. Eye screening through retinal images is used by physicians to detect the lesions related to this disease. In this thesis, we explore different novel methods for the automatic classification of diabetic retinopathy disease grade using retina fundus images. For this purpose, we explore methods based on automatic feature extraction and classification with deep neural networks.
Furthermore, as the results reported by these models are difficult to interpret, we design a new method for interpreting the results. The model is designed in a modular manner in order to generalize its possible application to other networks and classification domains. We experimentally demonstrate that our interpretation model is able to detect retina lesions in the image solely from the classification information. Additionally, we propose a method for compressing the model's feature-space information. The method is based on an independent component analysis over the disentangled feature-space information generated by the model for each image, and also serves to identify the mathematically independent elements causing the disease. Using our previously mentioned interpretation method, it is also possible to visualize such components on the image. Finally, we present an experimental application of our best model for classifying retina images of a different population, specifically from the Hospital de Reus. The proposed methods achieve ophthalmologist-level performance and are able to identify in great detail the lesions present in the images, inferred only from image classification information.
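    The compression step rests on independent component analysis. The thesis's exact pipeline is not reproduced here; the core of ICA can be sketched with a NumPy-only FastICA (deflation scheme, tanh contrast) applied to synthetic mixed signals rather than network features:

```python
import numpy as np

rng = np.random.default_rng(0)

def whiten(X):
    """Center the data (features x samples) and whiten its covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

def fastica(X, n_components, n_iter=200, tol=1e-8):
    """FastICA with deflation and the tanh contrast function."""
    Z = whiten(X)
    W = np.zeros((n_components, Z.shape[0]))
    for i in range(n_components):
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            g = np.tanh(w @ Z)
            w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # decorrelate from found ones
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z  # recovered independent components

# Two independent sources, linearly mixed.
t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.5, 1.0]])  # mixing matrix
recovered = fastica(A @ S, n_components=2)
```

    Each recovered row matches one of the original sources up to sign and permutation, the usual ICA ambiguities; in the thesis's setting the "sources" are instead the independent factors underlying the network's internal feature vectors.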