A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize the results, and
point to relevant references in the literature.
On the possible Computational Power of the Human Mind
The aim of this paper is to address the question: Can an artificial neural
network (ANN) model be used as a possible characterization of the power of the
human mind? We will discuss what might be the relationship between such a model
and its natural counterpart. A possible characterization of the mind's different
power capabilities is suggested in terms of the information contained in it (its
computational complexity) or achievable by it. This characterization takes
advantage of recent results on natural neural networks (NNN) and on the
computational power of arbitrary artificial neural networks (ANN). The possible
acceptance of neural networks as the model of the human mind's operation makes
the aforementioned question quite relevant.
Comment: Complexity, Science and Society Conference, 2005, University of Liverpool, UK. 23 pages
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their performance metrics. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook on
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures
Effect of dilution in asymmetric recurrent neural networks
We study with numerical simulation the possible limit behaviors of
synchronous discrete-time deterministic recurrent neural networks composed of N
binary neurons as a function of a network's level of dilution and asymmetry.
The network dilution measures the fraction of neuron couples that are
connected, and the network asymmetry measures to what extent the underlying
connectivity matrix is asymmetric. For each given neural network, we study the
dynamical evolution of all the different initial conditions, thus
characterizing the full dynamical landscape without imposing any learning rule.
Because of the deterministic dynamics, each trajectory converges to an
attractor, which can be either a fixed point or a limit cycle. These attractors
form the set of all the possible limit behaviors of the neural network. For
each network, we then determine the convergence times, the limit cycles'
lengths, the number of attractors, and the sizes of the attractors' basins. We
show that there are two network structures that maximize the number of possible
limit behaviors. The first optimal network structure is fully-connected and
symmetric. By contrast, the second optimal network structure is highly
sparse and asymmetric. The latter optimum is similar to what is observed in
different biological neuronal circuits. These observations lead us to
hypothesize that, independently of any given learning model, an efficient and
effective biological network that stores a number of limit behaviors close to
its maximum capacity tends to develop a connectivity structure similar to one
of the optimal networks we found.
Comment: 31 pages, 5 figures
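To make the setup above concrete, here is a minimal sketch, not the authors' code, of the kind of experiment the abstract describes: a synchronous deterministic network of N binary neurons updated as s(t+1) = sign(W s(t)), with couplings removed at a tunable dilution rate and made one-directional at a tunable asymmetry rate, and attractors enumerated over all initial conditions. Function names, parameter values, and the tie-breaking rule sign(0) = +1 are illustrative assumptions.

```python
# Sketch: attractors of a synchronous deterministic binary recurrent network
# as a function of dilution and asymmetry (illustrative, not the paper's code).
import numpy as np
from itertools import product

def random_coupling(n, dilution, asymmetry, rng):
    """Couplings in {-1, 0, +1}: `dilution` is the probability that a neuron
    pair is disconnected, `asymmetry` the probability that a connected pair
    keeps only one direction."""
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < dilution:
                continue                                  # pair not connected
            w[i, j] = w[j, i] = rng.choice([-1.0, 1.0])
            if rng.random() < asymmetry:                  # drop one direction
                if rng.random() < 0.5:
                    w[i, j] = 0.0
                else:
                    w[j, i] = 0.0
    return w

def attractor_from(w, s0):
    """Iterate s(t+1) = sign(W s(t)) until a state repeats; return the time
    to enter the cycle and a canonical (sorted) tuple of the cycle's states."""
    seen, s = {}, tuple(s0)
    while s not in seen:
        seen[s] = len(seen)
        s = tuple(np.where(w @ np.array(s) >= 0, 1, -1))
    t_entry = seen[s]
    cycle = tuple(sorted(k for k, v in seen.items() if v >= t_entry))
    return t_entry, cycle

n, rng = 8, np.random.default_rng(0)
w = random_coupling(n, dilution=0.7, asymmetry=0.8, rng=rng)
attractors = {attractor_from(w, s)[1] for s in product([-1, 1], repeat=n)}
print(f"{len(attractors)} distinct attractors for N = {n}")
```

Sweeping `dilution` and `asymmetry` over a grid and averaging the attractor count over many coupling draws would give the kind of landscape analysis the abstract reports, at least for small N where all 2^N initial conditions can be enumerated.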
State-Dependent Computation Using Coupled Recurrent Networks
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how
networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent
networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
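As a rough illustration of the mechanism described above, the sketch below couples two rate-based soft winner-take-all maps through a few one-to-one excitatory cross-connections; the cross-coupling lets the winning unit (the embedded state) persist after the selecting input is withdrawn. The rate model, gain values, and coupling strength are assumptions made for this example, not the authors' parameters.

```python
# Sketch: two coupled sWTA maps holding a state after the input is removed.
import numpy as np

def swta_step(x, inp, w_self=1.3, w_inh=0.6, dt=0.1):
    """One Euler step of a rectified, saturating soft winner-take-all map:
    local self-excitation minus shared inhibition from the population sum."""
    rate = np.clip(w_self * x - w_inh * x.sum() + inp, 0.0, 1.0)
    return np.clip(x + dt * (-x + rate), 0.0, 1.0)

n_states = 3
x1 = np.zeros(n_states)              # sWTA map 1
x2 = np.zeros(n_states)              # sWTA map 2
w_cross = 0.5 * np.eye(n_states)     # sparse one-to-one cross-connections

pulse = np.array([1.0, 0.0, 0.0])    # transient input selecting "state 0"
for t in range(400):
    inp = pulse if t < 100 else np.zeros(n_states)
    x1, x2 = (swta_step(x1, inp + w_cross @ x2),
              swta_step(x2, w_cross @ x1))

print("state expressed after the input is withdrawn:", int(np.argmax(x1)))
```

Setting `w_cross` to zero makes the activity decay once the pulse ends; in this toy model it is the sparse cross-connections between the two maps that hold the state, which mirrors the role the abstract assigns to them.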
Multilayer optical learning networks
A new approach to learning in a multilayer optical neural network based on holographically interconnected nonlinear devices is presented. The proposed network can learn the interconnections that form a distributed representation of a desired pattern transformation operation. The interconnections are formed in an adaptive and self-aligning fashion as volume holographic gratings in photorefractive crystals. Parallel arrays of globally space-integrated inner products diffracted by the interconnecting hologram illuminate arrays of nonlinear Fabry-Perot etalons for fast thresholding of the transformed patterns. A phase-conjugated reference wave interferes with a backward-propagating error signal to form holographic interference patterns which are time-integrated in the volume of a photorefractive crystal to slowly modify and learn the appropriate self-aligning interconnections. This multilayer system performs an approximate implementation of the backpropagation learning procedure in a massively parallel high-speed nonlinear optical network.
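For reference, this is a minimal numerical sketch of the two-layer backpropagation update that the optical system approximates: forward inner products, a backward-propagating error signal, and outer-product weight changes. It is purely illustrative and does not model the holographic or photorefractive physics; the network sizes, learning rate, and sigmoid nonlinearity are assumptions.

```python
# Sketch: the two-layer backpropagation procedure the optics approximates.
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((4, 8))                  # 4 input patterns, 8 features
t = rng.random((4, 3))                  # desired 3-element output patterns
w1 = 0.1 * rng.standard_normal((8, 5))  # first interconnection layer
w2 = 0.1 * rng.standard_normal((5, 3))  # second interconnection layer
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

for _ in range(2000):
    h = sigmoid(x @ w1)                 # hidden-layer "thresholded" signals
    y = sigmoid(h @ w2)                 # output-layer signals
    e2 = (y - t) * y * (1 - y)          # output error, propagated backward
    e1 = (e2 @ w2.T) * h * (1 - h)      # error reaching the hidden layer
    w2 -= 0.5 * h.T @ e2                # outer-product weight updates,
    w1 -= 0.5 * x.T @ e1                # analogous to the slow holographic change

print("mean squared error:", float(np.mean((y - t) ** 2)))
```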
HpGAN: Sequence Search with Generative Adversarial Networks
Sequences play an important role in many engineering applications and
systems. Searching sequences with desired properties has long been an
interesting but also challenging research topic. This article proposes a novel
method, called HpGAN, to search desired sequences algorithmically using
generative adversarial networks (GAN). HpGAN is based on the idea of a zero-sum
game to train a generative model, which can generate sequences with
characteristics similar to the training sequences. In HpGAN, we design the
Hopfield network as an encoder to avoid the limitations of GAN in generating
discrete data. Compared with traditional sequence construction by algebraic
tools, HpGAN is particularly suitable for intractable problems with complex
objectives which prevent mathematical analysis. We demonstrate the search
capabilities of HpGAN in two applications: 1) HpGAN successfully found many
different mutually orthogonal complementary code sets (MOCCSs) and optimal
odd-length binary Z-complementary pairs (OB-ZCPs) which are not part of the
training set. In the literature, both MOCCSs and OB-ZCPs have found wide
applications in wireless communications. 2) HpGAN found new sequences which
achieve a four-fold increase in the signal-to-interference ratio of a mismatched
filter (MMF) estimator in pulse compression radar systems, benchmarked against
the well-known Legendre sequence. These sequences outperform those found by
AlphaSeq.
Comment: 12 pages, 16 figures