46,075 research outputs found
HyperAdam: A Learnable Task-Adaptive Adam for Network Training
Deep neural networks are traditionally trained using human-designed
stochastic optimization algorithms, such as SGD and Adam. Recently, the
approach of learning to optimize network parameters has emerged as a promising
research topic. However, these learned black-box optimizers sometimes do not
fully exploit the experience embodied in human-designed optimizers and
therefore have limited generalization ability. In this paper, a new optimizer,
dubbed HyperAdam, is proposed that combines the idea of "learning to
optimize" with the traditional Adam optimizer. Given a network for training, its
parameter update in each iteration generated by HyperAdam is an adaptive
combination of multiple updates generated by Adam with varying decay rates. The
combination weights and decay rates in HyperAdam are adaptively learned
depending on the task. HyperAdam is modeled as a recurrent neural network with
AdamCell, WeightCell and StateCell. It is shown to achieve state-of-the-art
performance for training various networks, such as multilayer perceptrons, CNNs and LSTMs.
Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. The majority of these
models assumes that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
Comment: 22 pages, 11 figures
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured and changing
environments, constantly facing novel challenges. Therefore, continuous online
adaptation for lifelong learning and sample-efficient mechanisms for adapting
to changes in the environment, the constraints, the tasks, or the robot
itself are crucial. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation based on a
bio-inspired stochastic recurrent neural network. By using learning signals
which mimic the intrinsic motivation signal of cognitive dissonance, together
with a mental replay strategy to intensify experiences, the stochastic
recurrent network can learn from a few physical interactions and adapt to novel
environments in seconds. We evaluate our online planning and adaptation
framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is
shown by learning unknown workspace constraints sample-efficiently from few
physical interactions while following given waypoints.
Comment: accepted in Neural Networks