Gated networks: an inventory
Gated networks are networks that contain gating connections, in which the
outputs of at least two neurons are multiplied. Initially, gated networks were
used to learn relationships between two input sources, such as pixels from two
images. More recently, they have been applied to learning activity recognition
or multi-modal representations. The aims of this paper are threefold: 1) to
explain the basic computations in gated networks to the non-expert, while
adopting a standpoint that insists on their symmetric nature; 2) to serve as a
quick reference guide to the recent literature, by providing an inventory of
applications of these networks, as well as recent extensions to the basic
architecture; and 3) to suggest future research directions and applications.
Comment: Unpublished manuscript, 17 pages
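A minimal sketch of the multiplicative interaction described above, written as
a factored gated layer in NumPy; the factor sizes, weight shapes, and variable
names are illustrative assumptions, not the paper's exact formulation:

    import numpy as np

    def gated_layer(x, y, W_x, W_y, W_h):
        """Multiplicative gating: project both inputs onto factors,
        multiply the factors element-wise, then map to hidden units."""
        f_x = x @ W_x             # factors from the first input (e.g., image 1)
        f_y = y @ W_y             # factors from the second input (e.g., image 2)
        return (f_x * f_y) @ W_h  # element-wise product -> mapping units

    # Toy usage: relate two 64-pixel inputs via 32 factors and 16 mapping units
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(64), rng.standard_normal(64)
    W_x, W_y = rng.standard_normal((64, 32)), rng.standard_normal((64, 32))
    W_h = rng.standard_normal((32, 16))
    print(gated_layer(x, y, W_x, W_y, W_h).shape)  # (16,)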
A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Information in neural networks is represented as weighted connections, or
synapses, between neurons. This poses a problem, as the primary computational
bottleneck for neural networks is the vector-matrix multiplication of inputs
with the network weights. Conventional processing architectures
are not well suited for simulating neural networks, often requiring large
amounts of energy and time. Additionally, synapses in biological neural
networks are not binary connections, but exhibit a nonlinear response function
as neurotransmitters are emitted and diffuse between neurons. Inspired by
neuroscience principles, we present a digital neuromorphic architecture, the
Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex
synaptic response functions without requiring additional hardware components.
We consider the paradigm of spiking neurons with temporally coded information
as opposed to non-spiking rate coded neurons used in most neural networks. In
this paradigm we examine liquid state machines applied to speech recognition
and show how a liquid state machine with temporal dynamics maps onto the
STPU, demonstrating the flexibility and efficiency of the STPU for
instantiating neural algorithms.
Comment: 8 pages, 4 Figures, Preprint of 2017 IJCN
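As a rough illustration of the kind of nonlinear synaptic response function
the STPU is designed to model (a generic alpha-kernel sketch, not the STPU's
actual hardware mechanism; the kernel shape and time constants are
assumptions):

    import numpy as np

    def alpha_kernel(t, tau=5.0):
        """Alpha-shaped response: rises, then decays, mimicking
        neurotransmitter release and diffusion (t and tau in ms)."""
        return (t / tau) * np.exp(1.0 - t / tau) * (t >= 0)

    def synaptic_current(spike_times, t_grid, weight=1.0, tau=5.0):
        """Superpose one kernel per presynaptic spike."""
        return weight * sum(alpha_kernel(t_grid - s, tau) for s in spike_times)

    t = np.arange(0.0, 100.0, 0.1)                   # 100 ms, 0.1 ms steps
    i_syn = synaptic_current([10.0, 12.0, 40.0], t)  # spikes at 10, 12, 40 ms
    print(round(float(i_syn.max()), 3))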
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and stochasticity. Furthermore, such plasticity rules
generally show much higher performance than that of classical
Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent
developments in learning with spiking neural network models and their possible
implementation with memristor-based hardware.
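For reference, the classical pair-based STDP rule that the chapter contrasts
with multifactor plasticity can be sketched as below; the exponential windows
and parameter values are standard textbook choices, not the chapter's specific
learning rule:

    import numpy as np

    def stdp_update(delta_t, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP: potentiate when the presynaptic spike precedes
        the postsynaptic one (delta_t = t_post - t_pre > 0, in ms),
        depress otherwise."""
        delta_t = np.asarray(delta_t, dtype=float)
        ltp = a_plus * np.exp(-delta_t / tau_plus)    # pre before post
        ltd = -a_minus * np.exp(delta_t / tau_minus)  # post before pre
        return np.where(delta_t >= 0, ltp, ltd)

    print(stdp_update([5.0, -5.0]))  # small potentiation, small depression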
Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
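To make the variable-binding question concrete, here is a minimal sketch of
one well-known scheme, tensor-product binding over distributed
representations; the dimensionality, the role/filler names, and the
cosine-based retrieval step are illustrative assumptions rather than the
mechanism of any particular surveyed system:

    import numpy as np

    rng = np.random.default_rng(1)
    dim = 256

    # Roles (variables) and fillers (values) are random distributed vectors;
    # a binding is their outer product, and bindings superpose by addition.
    role_agent, role_patient = rng.standard_normal(dim), rng.standard_normal(dim)
    filler_john, filler_mary = rng.standard_normal(dim), rng.standard_normal(dim)
    memory = (np.outer(role_agent, filler_john)
              + np.outer(role_patient, filler_mary))

    # Unbinding: project the memory through a role vector and compare the
    # result with candidate fillers by cosine similarity.
    retrieved = role_agent @ memory / (role_agent @ role_agent)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(cos(retrieved, filler_john), cos(retrieved, filler_mary))  # high vs. ~0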
Deep Learning: Our Miraculous Year 1990-1991
In 2020, we will celebrate that many of the basic ideas behind the deep
learning revolution were published three decades ago within fewer than 12
months in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich.
Back then, few people were interested, but a quarter century later, neural
networks based on these ideas were on over 3 billion devices such as
smartphones, and used many billions of times per day, consuming a significant
fraction of the world's compute.
Comment: 37 pages, 188 references, based on work of 4 Oct 201