    Computational physics of the mind

    In the nineteenth century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance not only to answer the original psychophysical questions but also to create models of mind. In this paper several approaches relevant to the modeling of mind are outlined. Since direct modeling of brain functions is rather limited by the complexity of such models, a number of approximations are introduced. The path from the brain (computational neuroscience) to the mind (cognitive science) is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.
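    The quantitative core of that psychophysical program is often summarized by the Weber-Fechner law; as an illustration (a standard textbook relation, not a formula taken from this paper), the perceived sensation S grows logarithmically with the stimulus intensity I:

        % Weber-Fechner law (illustrative; k and the threshold I_0
        % are empirical, modality-dependent constants)
        S = k \, \ln\!\left( \frac{I}{I_0} \right)

    Here I_0 is the detection threshold below which no sensation is reported, and k depends on the sensory modality.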

    Invariant set of weight of perceptron trained by perceptron training algorithm

    In this paper, an invariant set of the weights of a perceptron trained by the perceptron training algorithm is defined and characterized. The dynamic range of the steady-state values of the weights can be evaluated by finding the dynamic range of the weights inside the largest invariant set. Also, the necessary and sufficient condition for the forward dynamics of the weights to be injective, as well as the condition for the invariant set to be attractive, is derived.
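    For orientation, a minimal sketch of the classical perceptron training algorithm follows (Python; the variable names, margin convention, and convergence check are illustrative choices of ours, not taken from the paper):

        import numpy as np

        def perceptron_train(X, y, epochs=100):
            """Classical perceptron rule: for each misclassified sample,
            move the weight vector toward (label * sample).
            X: (n_samples, n_features); y: labels in {-1, +1}."""
            w = np.zeros(X.shape[1])
            for _ in range(epochs):
                updated = False
                for x_i, y_i in zip(X, y):
                    if y_i * np.dot(w, x_i) <= 0:  # misclassified or on the boundary
                        w += y_i * x_i             # perceptron update step
                        updated = True
                if not updated:                    # all samples classified correctly
                    break
            return w

    The sequence of weight vectors produced by this update is the forward dynamics whose invariant set the paper characterizes.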

    Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron

    In this article the framework of Parisi's spontaneous replica symmetry breaking is reviewed and subsequently applied to the statistical mechanical description of the storage properties of a McCulloch-Pitts neuron. The technical details are reviewed extensively, with regard to the wide range of systems where the method may be applied. Parisi's partial differential equation and related differential equations are discussed, and a Green function technique is introduced for the calculation of replica averages, the key to determining the averages of physical quantities. The ensuing graph rules involve only tree graphs, as appropriate for a mean-field-like model. The lowest-order Ward-Takahashi identity is recovered analytically and is shown to lead to the Goldstone modes in continuous replica symmetry breaking phases. The need for a replica symmetry breaking theory in the storage problem of the neuron arose from the thermodynamic instability of previously given solutions. Variational forms for the neuron's free energy are derived in terms of the order parameter function x(q), for different prior distributions of the synapses. Analytically in the high-temperature limit, and numerically in generic cases, various phases are identified, among them one similar to the Parisi phase in the Sherrington-Kirkpatrick model. Extensive quantities, like the error per pattern, change only slightly with respect to the known unstable solutions, but there is a significant difference in the distribution of non-extensive quantities, such as the synaptic overlaps and the pattern storage stability parameter. A simulation result is also reviewed and compared to the prediction of the theory.
    Comment: 103 LaTeX pages (with REVTeX 3.0), including 15 figures (ps, epsi, eepic); accepted for Physics Reports.
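    For orientation, Parisi's partial differential equation referred to above reads, in its standard form (shown here with the Sherrington-Kirkpatrick terminal condition; the storage problem of the neuron uses a different terminal condition, as detailed in the article):

        % Parisi PDE for the local free energy f(q, h), governed by the
        % order parameter function x(q); SK terminal condition shown.
        \frac{\partial f}{\partial q}
          = -\frac{1}{2} \left[ \frac{\partial^2 f}{\partial h^2}
            + x(q) \left( \frac{\partial f}{\partial h} \right)^{2} \right],
        \qquad f(1, h) = \log\!\big( 2 \cosh(\beta h) \big).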

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control research program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Statistical physics of neural systems

    The ability to process and store information is considered a characteristic trait of intelligent systems. In biological neural networks, learning is widely believed to take place at the synaptic level, through the modulation of synaptic efficacy. It can thus be interpreted as a collective phenomenon, emerging when neurons connect to each other and constitute a complex network of interactions. In this work, we represent learning as an optimization problem, implemented as a local search, in the synaptic space, for specific configurations, known as solutions, that make a neural network able to accomplish a series of different tasks. For instance, we may want the network to adapt the strengths of its synaptic connections so that it can classify a series of objects, assigning to each object its corresponding class label.
    Experiments have suggested that synapses may exploit only a small number of synaptic states for encoding information, a feature known to make learning in neural networks a challenging task. Extending the large deviation analysis performed in the extreme case of binary synaptic couplings, we prove the existence of regions of the phase space where solutions are organized in extremely dense clusters. This picture turns out to be invariant under the tuning of all the parameters of the model. Solutions within the clusters are more robust to noise, thus enhancing the learning performance. This has inspired the design of new learning algorithms and has clarified the effectiveness of previously proposed ones. We further provide quantitative evidence that the gain achievable by allowing a greater number of synaptic states for encoding information is appreciable only up to a few bits, in line with the experimental results mentioned above.
    Besides the challenge of low-precision synaptic connections, the neuronal environment is known to be extremely noisy. Whether stochasticity enhances or worsens learning performance is currently a matter of debate. In this work, we consider a neural network model in which the synaptic connections are random variables, sampled according to a parametrized probability distribution. We prove that this source of stochasticity naturally drives the system towards regions of the phase space with a high density of solutions. These regions are directly accessible by gradient descent over the parameters of the synaptic coupling distribution. We further set up a statistical physics analysis, through which we show that solutions in the dense regions are characterized by robustness and good generalization performance.
    Stochastic neural networks are also capable of building abstract representations of input stimuli and then generating new input samples according to the inferred statistics of the input signal. In this regard, we propose a new learning rule, called Delayed Correlation Matching (DCM), which, by relying on the matching of time-delayed activity correlations, makes a neural network able to store patterns of neuronal activity. When hidden neuronal states are included, the DCM learning rule is also able to train Restricted Boltzmann Machines as generative models. We further require the DCM learning rule to fulfil biological constraints such as locality, sparseness of the neural coding, and Dale's principle. While retaining all these biological requirements, the DCM rule proves effective for different network topologies, in on-line learning regimes, and in the presence of correlated patterns; it is also able to prevent the creation of spurious attractor states.
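    As a hedged illustration of gradient descent over the parameters of the synaptic coupling distribution (a minimal sketch with choices of our own: Gaussian couplings with learnable means and fixed width, a hinge loss, and reparameterized sampling; the thesis's models and analysis are considerably more involved):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: p random patterns labeled by a planted teacher perceptron.
        n, p = 50, 100
        X = rng.standard_normal((p, n))
        y = np.sign(X @ rng.standard_normal(n))

        mu = np.zeros(n)      # learnable means of the coupling distribution
        sigma = 0.5           # fixed width of the Gaussian couplings
        lr, epochs, n_draws = 0.05, 200, 10

        for _ in range(epochs):
            grad = np.zeros(n)
            for _ in range(n_draws):
                w = mu + sigma * rng.standard_normal(n)  # sample w ~ N(mu, sigma^2 I)
                margins = y * (X @ w) / np.sqrt(n)
                viol = margins < 1.0                     # margin-violating patterns
                # Hinge-loss gradient w.r.t. w equals the gradient w.r.t. mu,
                # since w = mu + sigma * eps (reparameterization).
                grad -= (y[viol][:, None] * X[viol]).sum(axis=0) / np.sqrt(n)
            mu -= lr * grad / n_draws

        # Error rate of a fresh draw from the learned coupling distribution.
        w = mu + sigma * rng.standard_normal(n)
        print("train error:", np.mean(np.sign(X @ w) != y))

    Averaging the loss over draws of the couplings is what, per the abstract, biases the search towards the dense, robust regions of the solution space.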

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation, and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the match between artificial and natural systems and inspire better computational algorithms for problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.