Computational Capabilities of Analog and Evolving Neural Networks over Infinite Input Streams
Analog and evolving recurrent neural networks are super-Turing powerful. Here, we consider analog and evolving neural nets over infinite input streams. We then characterize the topological complexity of their ω-languages as a function of the specific analog or evolving weights that they employ. As a consequence, two infinite hierarchies of classes of analog and evolving neural networks, based on the complexity of their underlying weights, can be derived. These results constitute an optimal refinement of the super-Turing expressive power of analog and evolving neural networks. They show that analog and evolving neural nets represent natural models for oracle-based infinite computation.
Deep learning for video game playing
In this article, we review recent Deep Learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and sparse rewards.
An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that assesses their computational power in terms of the significance of their attractor dynamics. This measurement is obtained by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata into the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractor dynamics is obtained. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractor properties. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bring new foundational elements to the understanding of the complexity of real brain circuits.
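As a rough illustration (not taken from the paper), the attractor dynamics discussed above can be made concrete with a minimal sketch: a Boolean threshold network whose state is iterated until a previously visited state recurs, at which point the periodic attractor is reported. The weight matrix, thresholds, and network size below are toy assumptions chosen only to exhibit a cyclic attractor.

```python
import numpy as np

def boolean_rnn_attractor(W, theta, x0, max_steps=256):
    """Iterate the Boolean threshold dynamics x_{t+1} = [W x_t >= theta]
    and return the states of the periodic attractor the trajectory enters."""
    seen = {}                              # state -> time step of first visit
    x = tuple(int(v) for v in x0)
    trajectory = [x]
    for _ in range(max_steps):
        if x in seen:                      # state revisited: cycle closed
            return trajectory[seen[x]:-1]  # the attractor cycle
        seen[x] = len(trajectory) - 1
        x = tuple(int(v) for v in (W @ np.array(x) >= theta))
        trajectory.append(x)
    return []  # unreachable for small nets: the state space is finite

# Toy 3-unit network: a directed ring of activation yields a period-3 attractor.
W = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
theta = np.array([1, 1, 1])
cycle = boolean_rnn_attractor(W, theta, x0=(1, 0, 0))
```

Since the state space of an n-unit Boolean network is finite (2^n states), every trajectory must eventually revisit a state, so the cycle detection above always terminates for sufficiently many steps.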
A modular architecture for transparent computation in recurrent neural networks
Published in Neural Networks (Elsevier, © 2016 Elsevier Ltd.): http://dx.doi.org/10.1016/j.neunet.2016.09.001
Dimensions of Timescales in Neuromorphic Computing Systems
This article is a public deliverable of the EU project "Memory technologies
with multi-scale time constants for neuromorphic architectures" (MeMScales,
https://memscales.eu, Call ICT-06-2019 Unconventional Nanoelectronics, project
number 871371). This arXiv version is a verbatim copy of the deliverable
report, with administrative information stripped. It collects a wide and varied
assortment of phenomena, models, research themes and algorithmic techniques
that are connected with timescale phenomena in the fields of computational
neuroscience, mathematics, machine learning and computer science, with a bias
toward aspects that are relevant for neuromorphic engineering. It turns out
that this theme is very rich indeed and spreads out in many directions which
defy a unified treatment. We collected several dozen sub-themes, each of
which has been investigated in specialized settings (in the neurosciences,
mathematics, computer science and machine learning) and has been documented in
its own body of literature. The more we dived into this diversity, the more it
became clear that our first effort to compose a survey must remain sketchy and
partial. We conclude with a list of insights distilled from this survey which
give general guidelines for the design of future neuromorphic systems.
Neuroevolution in Games: State of the Art and Open Challenges
This paper surveys research on applying neuroevolution (NE) to games. In
neuroevolution, artificial neural networks are trained through evolutionary
algorithms, taking inspiration from the way biological brains evolved. We
analyse the application of NE in games along five different axes, which are the
role NE is chosen to play in a game, the different types of neural networks
used, the way these networks are evolved, how the fitness is determined and
what type of input the network receives. The article also highlights important
open research challenges in the field.
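To make the neuroevolution idea above concrete, here is a minimal sketch (not from the survey) of training a tiny network through an evolutionary algorithm rather than gradient descent: a simple (μ+λ) evolution strategy mutates the flattened weights of a 2-2-1 network, with fitness defined on the XOR task. The architecture, population size, and mutation scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: evolve a 2-2-1 tanh network toward XOR (fitness = -squared error).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    """Decode a flat 9-weight genome into the network and run it."""
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output in (0, 1)

def fitness(w):
    return -np.sum((forward(w, X) - y) ** 2)

# (mu + lambda) evolution strategy over the 9 weights: keep the fittest,
# refill the population with Gaussian-mutated copies of them.
pop = rng.normal(0, 1, size=(20, 9))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-5:]]                    # elitist selection
    children = np.repeat(parents, 4, axis=0) + rng.normal(0, 0.3, (20, 9))
    pop = np.vstack([parents, children])[:20]                 # mu + lambda

best = max(pop, key=fitness)
```

Because the parents survive unchanged each generation, the best fitness in the population is monotonically non-decreasing, which is the usual elitism argument for this style of evolution strategy.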