When the state of the art is ahead of the state of understanding: unintuitive properties of deep neural networks
Deep learning is an undeniably hot topic, not only within academia and industry but also among the general public and the media. The reasons for its surge in popularity are manifold: unprecedented availability of data and computing power, some innovative methodologies, minor but significant technical tricks, etc. Interestingly, however, the current success and practice of deep learning seem to be uncorrelated with its theoretical, more formal understanding. As a result, deep learning's state of the art presents a number of unintuitive properties and situations. In this note, I highlight some of these unintuitive properties, point to relevant recent work, and argue for the need to gain insight into them, by either formal or more empirical means.
Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting
In lifelong learning systems, especially those based on artificial neural
networks, one of the biggest obstacles is the severe inability to retain old
knowledge as new information is encountered. This phenomenon is known as
catastrophic forgetting. In this article, we propose a new kind of
connectionist architecture, the Sequential Neural Coding Network, that is
robust to forgetting when learning from streams of data points and, unlike
networks of today, does not learn via the immensely popular back-propagation of
errors. Grounded in the neurocognitive theory of predictive processing, our
model adapts its synapses in a biologically-plausible fashion, while another,
complementary neural system rapidly learns to direct and control this
cortex-like structure by mimicking the task-executive control functionality of
the basal ganglia. In our experiments, we demonstrate that our self-organizing
system experiences significantly less forgetting as compared to standard neural
models and outperforms a wide swath of previously proposed methods even though
it is trained across task datasets in a stream-like fashion. The promising
performance of our complementary system on benchmarks, e.g., SplitMNIST, Split
Fashion MNIST, and Split NotMNIST, offers evidence that by incorporating
mechanisms prominent in real neuronal systems, such as competition, sparse
activation patterns, and iterative input processing, a new possibility for
tackling the grand challenge of lifelong machine learning opens up.
Comment: Key updates including results on standard benchmarks, e.g., split
MNIST/Fashion MNIST/NotMNIST. The task-selection/basal ganglia model has been
integrated.
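The abstract's core claim is that synapses can adapt without back-propagation, using only local prediction errors. The sketch below illustrates that generic predictive-coding idea on a single layer: a latent state is iteratively settled to reduce the prediction error, and weights are then updated with a purely local, Hebbian-like rule. This is not the paper's Sequential Neural Coding Network; the layer sizes, step sizes, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One predictive-coding layer: a latent state z predicts the input x
# through weights W. Inference settles z to reduce the prediction error;
# learning then uses a local (error x activity) update, with no
# back-propagation of errors through a deep computation graph.
n_in, n_latent = 8, 4
W = rng.normal(scale=0.1, size=(n_in, n_latent))

def settle(x, W, steps=20, lr_z=0.1):
    """Iteratively infer the latent state that best predicts x."""
    z = np.zeros(n_latent)
    for _ in range(steps):
        err = x - W @ z          # prediction error at the input layer
        z += lr_z * (W.T @ err)  # move z to reduce that error
    return z

def learn(x, W, lr_w=0.05):
    """Local weight update from the settled state and residual error."""
    z = settle(x, W)
    err = x - W @ z
    return W + lr_w * np.outer(err, z)  # Hebbian-like: error x activity

x = rng.normal(size=n_in)
for _ in range(50):
    W = learn(x, W)
print(np.linalg.norm(x - W @ settle(x, W)))  # reconstruction error shrinks
```

Note that both `settle` and `learn` use only quantities available at the layer itself (its input, its prediction, its error), which is the biological-plausibility argument predictive-processing models make against back-propagation.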
Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting
We introduce the Kronecker factored online Laplace approximation for
overcoming catastrophic forgetting in neural networks. The method is grounded
in a Bayesian online learning framework, where we recursively approximate the
posterior after every task with a Gaussian, leading to a quadratic penalty on
changes to the weights. The Laplace approximation requires calculating the
Hessian around a mode, which is typically intractable for modern architectures.
In order to make our method scalable, we leverage recent block-diagonal
Kronecker factored approximations to the curvature. Our algorithm achieves over
90% test accuracy across a sequence of 50 instantiations of the permuted MNIST
dataset, substantially outperforming related methods for overcoming
catastrophic forgetting.
Comment: 13 pages, 6 figures
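The recursive Gaussian posterior approximation described above can be sketched with a *diagonal* curvature approximation (the paper itself uses richer block-diagonal Kronecker-factored approximations; the toy quadratic tasks and all names below are illustrative assumptions). After each task, the posterior mean becomes the new anchor and the accumulated curvature becomes the precision of a quadratic penalty on weight changes:

```python
import numpy as np

def train_task(theta, grad_fn, hess_diag_fn, mean, precision, lr=0.1, steps=200):
    """Minimise task loss + 0.5 * (theta - mean)^T diag(precision) (theta - mean)."""
    for _ in range(steps):
        g = grad_fn(theta) + precision * (theta - mean)
        theta = theta - lr * g
    # Recursive online Laplace update: new mean, accumulated precision.
    return theta, theta.copy(), precision + hess_diag_fn(theta)

# Toy 2-parameter example: task A prefers theta ~ (1, 0), task B ~ (0, 1),
# but A's loss is much more curved in the first coordinate, so the penalty
# keeps theta[0] near 1 while task B is free to move theta[1].
A = np.array([10.0, 0.1])   # per-coordinate curvatures of task A's quadratic loss
B = np.array([0.1, 10.0])
tA, tB = np.array([1.0, 0.0]), np.array([0.0, 1.0])

theta = np.zeros(2)
mean, precision = np.zeros(2), np.zeros(2)  # flat prior before the first task
theta, mean, precision = train_task(
    theta, lambda t: A * (t - tA), lambda t: A, mean, precision)
theta, mean, precision = train_task(
    theta, lambda t: B * (t - tB), lambda t: B, mean, precision)
print(theta)  # both coordinates settle near their curvature-weighted optima
```

The key property the sketch shows is recursion: the precision of the penalty grows with every task, so directions that mattered for earlier tasks become progressively harder to overwrite.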
DSS: A Diverse Sample Selection Method to Preserve Knowledge in Class-Incremental Learning
Rehearsal-based techniques are commonly used to mitigate catastrophic
forgetting (CF) in incremental learning (IL). The quality of the selected
exemplars is important for this purpose, yet most methods do not ensure
appropriate diversity among them. We propose a new technique,
"DSS" -- Diverse Selection of Samples from the input data stream, in the
class-incremental learning (CIL) setup under both disjoint and fuzzy task
boundary scenarios. Our method outperforms state-of-the-art methods and is much
simpler to understand and implement.
- …