1,560 research outputs found
A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks
In this paper, we address the stability of a broad class of discrete-time
hypercomplex-valued Hopfield-type neural networks. To ensure that the neural
networks belonging to this class always settle down at a stationary state, we
introduce novel hypercomplex number systems referred to as real-part
associative hypercomplex number systems. Real-part associative hypercomplex
number systems generalize the well-known Cayley-Dickson algebras and real
Clifford algebras and include the systems of real numbers, complex numbers,
dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as
particular instances. Apart from the novel hypercomplex number systems, we
introduce a family of hypercomplex-valued activation functions called
$\mathcal{B}$-projection functions. Broadly speaking, a $\mathcal{B}$-projection
function projects the activation potential onto the set $\mathcal{B}$ of all
possible states of a hypercomplex-valued neuron. Using the theory
presented in this paper, we confirm the stability analysis of several
discrete-time hypercomplex-valued Hopfield-type neural networks from the
literature. Moreover, we introduce and provide the stability analysis of a
general class of Hopfield-type neural networks on Cayley-Dickson algebras.
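As a rough illustration of the projection idea, the sketch below implements only the complex-valued special case, taking the state set $\mathcal{B}$ to be K equally spaced points on the unit circle and projecting the activation potential onto the nearest one. The function names and the choice of state set are illustrative assumptions, not the paper's general construction.

```python
import numpy as np

def csign(u, K):
    """Project a complex activation potential u onto the nearest of K
    equally spaced unit-circle states (illustrative projection-style
    activation; the finite state set plays the role of B)."""
    states = np.exp(2j * np.pi * np.arange(K) / K)   # candidate states in B
    # the nearest state maximizes Re(u * conj(state))
    return states[np.argmax((u * states.conj()).real)]

def hopfield_sweep(W, x, K):
    """One asynchronous update sweep of a complex-valued Hopfield-type
    network using the projection activation above."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = csign(W[i] @ x, K)   # project the potential back onto B
    return x
```

Iterating hopfield_sweep until the state stops changing mimics the convergence to a stationary state that the paper's conditions are designed to guarantee.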
Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation
We introduce Equilibrium Propagation, a learning framework for energy-based
models. It involves only one kind of neural computation, performed in both the
first phase (when the prediction is made) and the second phase of training
(after the target or prediction error is revealed). Although this algorithm
computes the gradient of an objective function just like Backpropagation, it
does not need a special computation or circuit for the second phase, where
errors are implicitly propagated. Equilibrium Propagation shares similarities
with Contrastive Hebbian Learning and Contrastive Divergence while solving the
theoretical issues of both algorithms: our algorithm computes the gradient of a
well-defined objective function. Because the objective function is defined in
terms of local perturbations, the second phase of Equilibrium Propagation
corresponds to only nudging the prediction (fixed point, or stationary
distribution) towards a configuration that reduces prediction error. In the
case of a recurrent multi-layer supervised network, the output units are
slightly nudged towards their target in the second phase, and the perturbation
introduced at the output layer propagates backward in the hidden layers. We
show that the signal 'back-propagated' during this second phase corresponds to
the propagation of error derivatives and encodes the gradient of the objective
function, when the synaptic update corresponds to a standard form of
spike-timing dependent plasticity. This work makes it more plausible that a
mechanism similar to Backpropagation could be implemented by brains, since
leaky integrator neural computation performs both inference and error
back-propagation in our model. The only local difference between the two phases
is whether synaptic changes are allowed or not.
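A minimal sketch of the two-phase procedure, assuming a Hopfield-like energy with symmetric weights and a hard-sigmoid nonlinearity; input clamping and most hyperparameters are omitted, and the function names are ours, so this illustrates the mechanism rather than reproducing the paper's implementation.

```python
import numpy as np

def rho(s):
    """Hard-sigmoid nonlinearity."""
    return np.clip(s, 0.0, 1.0)

def drho(s):
    """Derivative of the hard sigmoid."""
    return ((s > 0.0) & (s < 1.0)).astype(float)

def relax(W, s, out=None, y=None, beta=0.0, steps=200, eps=0.05):
    """Settle the state to a fixed point of a Hopfield-like energy.
    With beta > 0, output units are weakly nudged toward the target y
    (the 'second phase' of Equilibrium Propagation)."""
    for _ in range(steps):
        grad = drho(s) * (W @ rho(s)) - s          # -dE/ds with symmetric W
        if beta > 0.0:
            grad[out] += beta * (y - s[out])       # nudge output units only
        s = s + eps * grad
    return s

def eqprop_update(W, s_free, s_nudged, beta, lr=0.01):
    """Contrastive update: the difference of unit correlations between the
    nudged and free fixed points estimates the objective's gradient."""
    r0, r1 = rho(s_free), rho(s_nudged)
    return W + lr * (np.outer(r1, r1) - np.outer(r0, r0)) / beta
```

Both phases call the same relax routine; only the nudging strength beta distinguishes them, which is the locality property the abstract emphasizes.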
Training issues and learning algorithms for feedforward and recurrent neural networks
Ph.D. thesis (Doctor of Philosophy)
HpGAN: Sequence Search with Generative Adversarial Networks
Sequences play an important role in many engineering applications and
systems. Searching sequences with desired properties has long been an
interesting but also challenging research topic. This article proposes a novel
method, called HpGAN, to search desired sequences algorithmically using
generative adversarial networks (GANs). HpGAN is based on the idea of a
zero-sum game to train a generative model that can generate sequences with
characteristics similar to the training sequences. In HpGAN, we design the
Hopfield network as an encoder to avoid the limitations of GANs in generating
discrete data. Compared with traditional sequence construction by algebraic
tools, HpGAN is particularly suitable for intractable problems with complex
objectives which prevent mathematical analysis. We demonstrate the search
capabilities of HpGAN in two applications: 1) HpGAN successfully found many
different mutually orthogonal complementary code sets (MOCCSs) and optimal
odd-length binary Z-complementary pairs (OB-ZCPs) that are not part of the
training set; in the literature, both MOCCSs and OB-ZCPs have found wide
applications in wireless communications. 2) HpGAN found new sequences that
achieve a fourfold increase in the signal-to-interference ratio of a mismatched
filter (MMF) estimator in pulse compression radar systems, benchmarked against
the well-known Legendre sequence. These sequences outperform those found by
AlphaSeq.
Comment: 12 pages, 16 figures
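One plausible reading of the encoder idea, sketched below under loudly stated assumptions: training sequences are stored as Hopfield attractors via a simple Hebbian rule, and running the dynamics on a continuous generator output snaps it to a discrete +/-1 sequence, sidestepping the discrete-data limitation of GANs. The function names are hypothetical and the paper's actual encoder architecture may differ.

```python
import numpy as np

def hebbian_weights(patterns):
    """Store +/-1 training sequences as attractors of a Hopfield network
    (illustrative Hebbian rule, not necessarily the paper's)."""
    P = np.asarray(patterns, dtype=float)      # shape: (num_patterns, n)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)                   # no self-connections
    return W

def snap_to_sequence(W, z, steps=50):
    """Run Hopfield dynamics on a continuous generator output z until it
    settles on a discrete +/-1 sequence."""
    x = np.where(z >= 0, 1.0, -1.0)
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1.0, -1.0)
        if np.array_equal(x_new, x):           # reached an attractor
            break
        x = x_new
    return x
```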
Proceedings, MSVSCC 2018
Proceedings of the 12th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 19, 2018 at VMASC in Suffolk, Virginia. 155 pp.
- …