Empirical Analysis of the Necessary and Sufficient Conditions of the Echo State Property
The Echo State Network (ESN) is a specific type of recurrent neural network
that has gained popularity in recent years. The model contains a recurrent
network, called the reservoir, that is fixed during the learning process. The
reservoir is used to project the input space into a larger space. A fundamental
property with a direct impact on model accuracy is the Echo State Property
(ESP). There are two main theoretical results related to the ESP. First, a
sufficient condition for the ESP that involves the singular values of the
reservoir matrix. Second, a necessary condition for the ESP, involving the
spectral radius of the reservoir matrix: when the spectral radius exceeds one,
the ESP can be violated. There is a theoretical gap between these necessary and
sufficient conditions. This article presents an empirical analysis of the
accuracy and the projections of reservoirs that fall within this theoretical
gap, and gives some insights into the generation of the reservoir matrix. From
previous work, it is already known that optimal accuracy is obtained near the
edge of stability of the dynamics. According to our empirical results, this
border appears to be closer to the sufficient condition than to the necessary
condition of the ESP.
Comment: 23 pages, 14 figures, accepted paper for the IEEE IJCNN, 201
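As a concrete illustration of the two bounds discussed above, the following Python sketch (an illustration of the standard conditions, not the paper's code; the sparsity level and target spectral radius are assumed values) generates a random reservoir matrix and checks the sufficient condition, largest singular value below one, against the necessary condition, spectral radius below one. Matrices satisfying the latter but not the former fall in the theoretical gap the paper studies.

import numpy as np

def esp_conditions(W):
    """Check the classical ESP bounds for a reservoir matrix W.

    Sufficient condition: largest singular value sigma_max(W) < 1.
    Necessary condition:  spectral radius rho(W) < 1 (for zero input,
    the ESP is violated when rho(W) > 1).
    """
    sigma_max = np.linalg.svd(W, compute_uv=False)[0]   # largest singular value
    rho = np.max(np.abs(np.linalg.eigvals(W)))          # spectral radius
    return sigma_max, rho

# Hypothetical example: a sparse random reservoir rescaled to a target
# spectral radius of 0.95, a common heuristic in the ESN literature.
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(100, 100))
W *= (rng.random((100, 100)) < 0.1)                     # ~10% connectivity
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))        # set rho(W) = 0.95

sigma_max, rho = esp_conditions(W)
print(f"sigma_max = {sigma_max:.3f}, rho = {rho:.3f}")
# Typically sigma_max > 1 > rho here: the matrix satisfies the necessary
# condition but not the sufficient one, i.e. it lies in the theoretical gap.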
Computational physics of the mind
In the 19th century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance not only to answer the original psychophysical questions but also to create models of mind. In this paper several approaches relevant to the modeling of mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neuroscience, to the mind, or cognitive science, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.
Grid Cell Hexagonal Patterns Formed by Fast Self-Organized Learning within Entorhinal Cortex
Grid cells in the dorsal segment of the medial entorhinal cortex (dMEC) show remarkable hexagonal activity patterns, at multiple spatial scales, during spatial navigation. How these hexagonal patterns arise has excited intense interest. It has previously been shown how a self-organizing map can convert firing patterns across entorhinal grid cells into hippocampal place cells that are capable of representing much larger spatial scales. Can grid cell firing fields also arise during navigation through learning within a self-organizing map? A neural model is proposed that converts path integration signals into hexagonal grid cell patterns of multiple scales. This GRID model creates only grid cell patterns with the observed hexagonal structure, predicts how these hexagonal patterns can be learned from experience, and can process biologically plausible neural input and output signals during navigation. These results support a unified computational framework for explaining how entorhinal-hippocampal interactions support spatial navigation.
CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
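For background on the hexagonal structure mentioned above, grid cell firing fields are commonly parametrized as a rectified sum of three plane waves whose wave vectors are 60 degrees apart. The Python sketch below shows this standard parametrization only, not the self-organizing GRID model itself; the function name, lattice-spacing formula usage, and arena size are illustrative assumptions.

import numpy as np

def hexagonal_grid_field(x, y, scale=1.0, phase=(0.0, 0.0)):
    """Three-plane-wave parametrization of a grid cell firing field:
    three cosines with wave vectors 60 degrees apart produce the
    hexagonal (triangular-lattice) pattern observed in dMEC."""
    k = 4 * np.pi / (np.sqrt(3) * scale)           # wave number for lattice spacing `scale`
    rate = np.zeros_like(x, dtype=float)
    for angle in (0.0, np.pi / 3, 2 * np.pi / 3):  # three directions, 60 degrees apart
        kx, ky = k * np.cos(angle), k * np.sin(angle)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return np.maximum(rate, 0.0)                   # rectify to a nonnegative firing rate

# Evaluate the field on a small square arena; a larger `scale` mimics the
# larger spatial scales found along the dorsoventral axis of the dMEC.
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
field = hexagonal_grid_field(xs, ys, scale=0.5)
print(field.shape, field.max())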
Optimisation in ‘Self-modelling’ Complex Adaptive Systems
When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints, or opposing forces, between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower-energy configurations more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely ‘recalling’ low-energy states that have been previously visited but ‘predicting’ their location by generalising over local attractor states that have already been visited. This ‘self-modelling’ framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally-mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.
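A minimal sketch of the mechanism described in this abstract, with assumed details (network size, Hebbian rate, random symmetric couplings): a Hopfield-style network is repeatedly relaxed from random states, a small Hebbian update is applied after each relaxation, and the relaxed states of the augmented system are then scored against the original energy function.

import numpy as np

rng = np.random.default_rng(1)
N, eta = 50, 0.002                       # network size and Hebbian rate (assumed values)

# Random symmetric constraint network with zero self-coupling.
W = rng.normal(size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def relax(W, s, steps=2000):
    """Asynchronous relaxation to a local energy minimum (point attractor)."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(W, s):
    return -0.5 * s @ W @ s

W0 = W.copy()                            # keep the original constraints for evaluation
for epoch in range(200):
    s = relax(W, rng.choice([-1, 1], size=N))
    W += eta * np.outer(s, s)            # Hebbian update on the visited attractor
    np.fill_diagonal(W, 0.0)

# Evaluate on the ORIGINAL energy function: relaxation of the augmented
# system should now tend to reach lower-energy (better-resolved) configurations.
before = np.mean([energy(W0, relax(W0, rng.choice([-1, 1], size=N))) for _ in range(20)])
after = np.mean([energy(W0, relax(W, rng.choice([-1, 1], size=N))) for _ in range(20)])
print(f"mean original-energy: before={before:.2f}, after={after:.2f}")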
The Parameter-Less Self-Organizing Map algorithm
The Parameter-Less Self-Organizing Map (PLSOM) is a new neural network
algorithm based on the Self-Organizing Map (SOM). It eliminates the need for a
learning rate and for annealing schemes for the learning rate and neighbourhood
size. We discuss the relative performance of the PLSOM and the SOM and
demonstrate some tasks in which the SOM fails but the PLSOM performs
satisfactorily. Finally, we discuss some example applications of the PLSOM and
present a proof of ordering under certain limited conditions.
Comment: 29 pages, 27 figures. Based on publication in IEEE Trans. on Neural Network
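The core PLSOM idea summarized above can be sketched as follows: the hand-tuned learning rate and neighbourhood annealing are replaced by a scaling variable, the current quantization error normalized by the largest error observed so far. The Python class below is a hedged reconstruction of that idea; the exact neighbourhood function and constants in the paper may differ.

import numpy as np

class PLSOM:
    """Sketch of a parameter-less SOM: the update magnitude and the
    neighbourhood width are driven by the normalized fitting error
    instead of hand-tuned annealing schedules."""

    def __init__(self, grid_w, grid_h, dim, beta=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((grid_w * grid_h, dim))
        self.coords = np.array([(i, j) for i in range(grid_w)
                                for j in range(grid_h)], dtype=float)
        self.beta = beta       # neighbourhood range constant (the only free parameter)
        self.r = 1e-12         # running maximum of the squared quantization error

    def update(self, x):
        d2 = ((self.weights - x) ** 2).sum(axis=1)
        c = np.argmin(d2)                       # best-matching unit
        self.r = max(self.r, d2[c])
        eps = d2[c] / self.r                    # normalized error in [0, 1]
        width = self.beta * eps + 1e-12         # neighbourhood shrinks as the map fits
        g2 = ((self.coords - self.coords[c]) ** 2).sum(axis=1)
        h = np.exp(-g2 / width ** 2)            # Gaussian neighbourhood on the grid
        self.weights += eps * h[:, None] * (x - self.weights)

# Toy usage: organize a 10x10 map over uniform data on the unit square.
som = PLSOM(10, 10, dim=2)
rng = np.random.default_rng(1)
for x in rng.random((5000, 2)):
    som.update(x)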
Computational neural learning formalisms for manipulator inverse kinematics
An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors, a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, the synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, the joint-space configurations required to follow arbitrary end-effector trajectories can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
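Terminal attractors, the construct this abstract builds on, are equilibria at which the Lipschitz condition of the dynamics is violated, so trajectories reach them in finite time rather than only asymptotically. The following Python sketch integrates the classic one-dimensional example dx/dt = -k (x - x*)^(1/3); it illustrates the construct only, not the paper's manipulator network, and the gain and step size are assumed values.

import numpy as np

def terminal_attractor_step(x, target, k=1.0, dt=1e-3):
    """One Euler step of dx/dt = -k * sign(e) * |e|**(1/3), with e = x - target.
    The cube-root nonlinearity violates the Lipschitz condition at e = 0,
    which is what gives terminal attractors finite-time convergence; an
    ordinary linear attractor dx/dt = -k*e only converges asymptotically."""
    e = x - target
    step = dt * k * np.abs(e) ** (1.0 / 3.0)
    return x - np.sign(e) * min(step, abs(e))   # clamp to avoid Euler overshoot

x, target, t = 1.0, 0.0, 0.0
while abs(x - target) > 1e-9:
    x = terminal_attractor_step(x, target)
    t += 1e-3
# Analytically, convergence time is (3/2) * |e0|**(2/3) / k = 1.5 here.
print(f"reached the attractor at t ~ {t:.3f} (finite time)")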