
    A Theory of Cortical Neural Processing.

    This dissertation puts forth an original theory of cortical neural processing, unique in its view of the interplay of chaotic and stable oscillatory neurodynamics, and is meant to stimulate new ideas in artificial neural network modeling. Our theory is the first to suggest two new purposes for chaotic neurodynamics: (i) as a natural means of representing the uncertainty in the outcome of performed tasks, such as memory retrieval or classification, and (ii) as an automatic way of producing an economical representation of distributed information. To better understand how the cerebral cortex processes information, we developed new models, and these led to our theory. Common to these models is a neuron interaction function that alternates between excitatory and inhibitory neighborhoods. Our theory allows characteristics of the input environment to influence the structural development of the cortex. We view low-intensity chaotic activity as the a priori uncertain base condition of the cortex, resulting from the interaction of a multitude of stronger potential responses. Data distinguishing one response from the others drives bifurcations toward less complex (stable) behavior. Stability appears as temporary bubble-like clusters within the boundaries of cortical columns and begins to propagate through frequency-sensitive and non-specific neurons, but this propagation is limited by destabilizing long-path connections. An original model of the post-natal development of ocular dominance columns in the striate cortex is presented and compared to autoradiographic images from the literature, with good agreement. Finally, experiments show that a computed update order outperforms traditional approaches in the pattern completion process.
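A common concrete choice for an interaction function that alternates between excitatory and inhibitory neighborhoods is a difference of Gaussians ("Mexican hat"), net excitatory at short range and net inhibitory at intermediate range. A minimal sketch; the functional form and all parameter values here are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np

def interaction(d, a_e=1.0, sigma_e=1.0, a_i=0.5, sigma_i=3.0):
    """Difference-of-Gaussians lateral interaction as a function of
    distance d: net excitatory for nearby neurons, net inhibitory at
    intermediate distances. Amplitudes and widths are illustrative."""
    return (a_e * np.exp(-d**2 / (2 * sigma_e**2))
            - a_i * np.exp(-d**2 / (2 * sigma_i**2)))

d = np.linspace(0.0, 10.0, 101)
w = interaction(d)
print(w[0] > 0, interaction(3.0) < 0)  # → True True
```

Because the broad inhibitory Gaussian dominates at intermediate distances, the sign of the interaction flips as distance grows, which is the alternation the models above rely on.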

    Nonlinear dynamics of pattern recognition and optimization

    We associate learning in living systems with the shaping of the velocity vector field of a dynamical system in response to external, generally random, stimuli. We consider various approaches to implementing a system that is able to adapt the whole vector field, rather than just parts of it - a drawback of the most common current learning systems: artificial neural networks. This leads us to propose the mathematical concept of self-shaping dynamical systems. Initially, the phase space is empty, with no attractors and thus a zero velocity vector field. Upon receiving the random stimulus, the vector field deforms and eventually becomes smooth and deterministic, despite the random nature of the applied force, while the phase space develops various geometrical objects. We consider the simplest of these - gradient self-shaping systems, whose vector field is the gradient of some energy function, which under certain conditions develops into the multi-dimensional probability density distribution of the input. We explain how self-shaping systems are relevant to artificial neural networks. Firstly, we show that they can potentially perform pattern recognition tasks typically implemented by Hopfield neural networks, but without any supervision, on-line, and without developing spurious minima in the phase space. Secondly, they can reconstruct the probability density distribution of input signals, like probabilistic neural networks, but without the need for new training patterns to enter the network as new hardware units. We therefore regard self-shaping systems as a generalisation of the neural network concept, achieved by abandoning the "rigid units - flexible couplings" paradigm and making the vector field fully flexible and amenable to external force. It is not clear how such systems could be implemented in hardware, and so this new concept presents an engineering challenge.
It could also become an alternative paradigm for the modelling of both living and learning systems. Mathematically, it is interesting to find how a self-shaping system could develop non-trivial objects in the phase space, such as periodic orbits or chaotic attractors. We investigate how a delayed vector field could form such objects, and show that this method produces chaos in a class of systems that have very simple dynamics in the non-delayed case. We also demonstrate the coexistence of bounded and unbounded solutions depending on the initial conditions and the value of the delay. Finally, we speculate about how such a method could be used in global optimization.
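A well-known illustration of delay-induced chaos (a standard textbook example, not the specific construction studied in this work) is the Mackey-Glass equation: without the delay its dynamics are very simple, settling to a fixed point, while with a sufficiently large delay the trajectory becomes chaotic. A minimal Euler-integration sketch with commonly used parameter values:

```python
import numpy as np

def mackey_glass(n_steps=5000, dt=0.1, tau=17.0, beta=0.2, gamma=0.1, n=10):
    """Euler integration of the Mackey-Glass delay equation
        dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t).
    For tau = 0 the system relaxes to a fixed point; for tau ~ 17 with
    these standard parameters the motion is chaotic."""
    delay = int(tau / dt)
    x = np.full(n_steps + delay, 1.2)   # constant initial history
    for t in range(delay, n_steps + delay - 1):
        xd = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * xd / (1 + xd**n) - gamma * x[t])
    return x[delay:]

traj = mackey_glass()   # bounded, aperiodic oscillation
```

The delayed term supplies the "memory" that lets a one-dimensional equation behave like an infinite-dimensional system, which is what makes non-trivial phase-space objects possible.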

    Analysis of observed chaotic data

    Thesis (Master)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2004. Includes bibliographical references (leaves: 86). Text in English; abstract in Turkish and English. xii, 89 leaves.
    In this thesis, the analysis of observed chaotic data is investigated. The purpose of analyzing time series is to classify signals observed from dynamical systems; the classifiers are invariants related to the dynamics. The correlation dimension, obtained after phase-space reconstruction, is used as the classifier. Accordingly, methods for finding the phase-space parameters, namely the time delay and the embedding dimension, are presented. Since observed time series are in practice contaminated by noise, the invariants of the dynamical system cannot be recovered without noise reduction; here, noise reduction is performed by a newly proposed rank-estimation method based on singular value decomposition. A further classification is realized by analyzing the time-frequency characteristics of the signals. The time-frequency distribution is investigated with the wavelet transform, since it supplies a flexible time-frequency window. Classification in the wavelet domain is performed by the wavelet entropy, expressed as the sum of relative wavelet energies in certain frequency bands. Another wavelet-based classification uses the wavelet ridges, where the energy is relatively maximal in the time-frequency domain. These newly proposed analysis methods are applied to electrical signals taken from healthy human brains, and the results are compared with other studies.
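The reconstruction step can be sketched as follows: the scalar series is embedded by time delays (Takens' theorem), and the correlation dimension is then estimated from how the correlation sum C(r) grows with r (the Grassberger-Procaccia approach). The test series, delay, and embedding dimension below are illustrative stand-ins, not the thesis data:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(X, r):
    """Fraction of distinct point pairs closer than r; the slope of
    log C(r) versus log r estimates the correlation dimension."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    mask = ~np.eye(len(X), dtype=bool)
    return (d[mask] < r).mean()

t = np.linspace(0.0, 20 * np.pi, 2000)
x = np.sin(t)                            # clean periodic test series
X = delay_embed(x, dim=2, tau=50)[::4]   # tau ~ quarter period -> near-circular orbit
print(correlation_sum(X, 0.5) < correlation_sum(X, 1.0))  # → True
```

For this periodic series the log-log slope of C(r) would be close to 1, the correlation dimension of a limit cycle; a chaotic series yields a non-integer slope.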

    A geometrical analysis of global stability in trained feedback networks

    Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved in recent years, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple, yet underexplored computational setup: a feedback architecture trained to associate a stationary output to a stationary input. As a starting point, we derive an approximate analytical description of global dynamics in trained networks which assumes uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions, which imply different stability properties. Different classes are characterized in terms of the geometrical arrangement of the readout with respect to the input vectors, defined in the high-dimensional space spanned by the network population. We find that such an approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and the global stability properties of the target solution, and thus predicts training performance.
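A minimal numerical sketch of this kind of setup (the network size, gains, and vector statistics are assumptions for illustration, not the paper's parameters): a random bulk plus a rank-one feedback loop m w^T, driven by a stationary input and integrated to its fixed point, with the stationary readout z = w . phi(x):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
g = 0.5                                            # subcritical bulk gain (assumed)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random bulk connectivity
I_vec = rng.standard_normal(N)                     # stationary input pattern
m = rng.standard_normal(N)                         # feedback pattern; feedback = m w^T
w = rng.standard_normal(N) / N                     # readout weights

def readout(u, steps=4000, dt=0.05):
    """Euler-integrate dx/dt = -x + J phi(x) + m (w . phi(x)) + I u,
    phi = tanh, until the state is stationary; return z = w . phi(x)."""
    x = np.zeros(N)
    for _ in range(steps):
        r = np.tanh(x)
        x += dt * (-x + J @ r + m * (w @ r) + I_vec * u)
    return w @ np.tanh(x)

z = readout(1.0)   # stationary readout associated with a stationary input
```

With the bulk gain below one and a weak feedback loop, the dynamics converge to a single stable fixed point; the geometrical arrangement of w, m, and the input vector is what the mean-field analysis above classifies.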

    Power-law statistics and universal scaling in the absence of criticality

    Critical states are sometimes identified experimentally through power-law statistics or universal scaling functions. We show here that such features naturally emerge from networks in self-sustained irregular regimes away from criticality. In these regimes, the statistical physics of large interacting systems predicts a regime where the nodes have independent and identically distributed dynamics. We thus investigated the statistics of a system in which units are replaced by independent stochastic surrogates, and found the same power-law statistics, indicating that these are not sufficient to establish criticality. We suggest instead that these are universal features of large-scale networks when considered macroscopically. These results urge caution in the interpretation of scaling laws found in nature.
    Comment: in press in Phys. Rev.
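The surrogate comparison can be sketched generically as follows (a shuffle-based surrogate on toy Gaussian data standing in for network activity): each unit's time series is shuffled independently, which preserves single-unit statistics while destroying cross-correlations, and the statistic of interest is then recomputed on the surrogate:

```python
import numpy as np

def independent_surrogates(data, rng):
    """Replace each unit's time series by an independently shuffled copy,
    keeping single-unit distributions but destroying cross-correlations."""
    surr = data.copy()
    for row in surr:
        rng.shuffle(row)   # in-place shuffle of one unit's series
    return surr

def mean_corr(x):
    """Mean pairwise correlation across units (upper triangle only)."""
    c = np.corrcoef(x)
    return c[np.triu_indices_from(c, 1)].mean()

rng = np.random.default_rng(1)
data = rng.standard_normal((50, 1000))        # 50 units x 1000 time steps
data += 0.5 * rng.standard_normal((1, 1000))  # shared drive -> correlations

surr = independent_surrogates(data, rng)
print(mean_corr(data) > mean_corr(surr))  # → True
```

If a statistic (such as an apparent power law) survives this surrogacy, it cannot by itself be evidence of critical interactions, which is the caution the abstract draws.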