9 research outputs found

    Neural Computing in Quaternion Algebra

    University of Hyogo, 201

    Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos

    In neural information processing, an input modulates neural dynamics to generate a desired output. To unravel the dynamics and the underlying connectivity that enable such input-output associations, we propose an exactly solvable neural-network model whose connectivity matrix is constructed explicitly from the inputs and their required outputs. An analytic form of the response to an input is derived, and three distinct response types, including chaotic dynamics arising through bifurcations as the input strength grows, are obtained depending on the neural sensitivity and the number of inputs. Optimal performance is achieved at the onset of chaos, and the relevance of these results to cognitive dynamics is discussed.
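    As a concrete illustration, the kind of construction described here can be sketched in a few lines: a rate network whose connectivity is a sum of outer products of required outputs with inputs, driven at varying input strength. The sizes, gain, and exact form of the connectivity below are illustrative assumptions for this sketch, not the paper's specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 200, 5                      # neurons, input-output pairs (assumed sizes)
    beta = 2.0                         # neural sensitivity (gain)

    # Random binary input patterns xi and required output patterns eta.
    xi = rng.choice([-1.0, 1.0], size=(P, N))
    eta = rng.choice([-1.0, 1.0], size=(P, N))

    # Connectivity built explicitly from inputs and required outputs
    # (hetero-associative outer products; the paper's construction may differ).
    J = (eta.T @ xi) / N

    def overlap_after_input(gamma, mu=0, steps=2000, dt=0.1):
        """Drive the network with input pattern mu at strength gamma and
        return the overlap of the final activity with the required output."""
        x = 0.1 * rng.standard_normal(N)
        for _ in range(steps):
            x += dt * (-x + J @ np.tanh(beta * x) + gamma * xi[mu])
        return np.dot(np.tanh(beta * x), eta[mu]) / N

    # Sweeping the input strength probes the bifurcation structure the
    # abstract describes: recall quality changes qualitatively with gamma.
    for gamma in [0.0, 0.2, 0.5, 1.0, 2.0]:
        print(f"gamma={gamma:.1f}  overlap={overlap_after_input(gamma):+.3f}")
    ```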

    Visual hallucinations in dementia with Lewy bodies originate from necrosis of characteristic neurons and connections in three-module perception model

    Mathematical and computational approaches were used to investigate dementia with Lewy bodies (DLB), in which recurrent complex visual hallucinations (RCVH) are a highly characteristic symptom. Beginning with interpretative analyses of the pathological symptoms of patients with RCVH-DLB, compared against the veridical perceptions of normal subjects, we constructed a three-module scenario for the function that gives rise to perception. The three modules are the visual input module, the memory module, and the perceiving module. Each module interacts with the others, and veridical perception is regarded as convergence to one of the perceiving attractors sustained by self-consistent collective fields among the modules. Once a large but inhomogeneously distributed area of necrotic neurons and dysfunctional synaptic connections develops through network disease, causing irreversible damage, bottom-up information from the input module to both the memory and perceiving modules is severely impaired. These changes make the collective fields unstable and cause the transient emergence of mismatched perceiving attractors, which may explain why DLB patients see things that are not there. Using our computational model and experiments, the scenario was recreated, with complex bifurcation phenomena associated with the destabilization of collective-field dynamics in a very high-dimensional state space.
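    A heavily reduced caricature of the lesioning idea can be sketched in code: one attractor module storing percept templates, fed through a bottom-up pathway in which a fraction of units is silenced. This collapses the three-module collective-field model to a single Hopfield-style module purely for illustration; all parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 100, 3                                    # units per module, stored percepts

    percepts = rng.choice([-1.0, 1.0], size=(P, N))  # percept templates
    W = percepts.T @ percepts / N                    # attractor weights, perceiving module
    np.fill_diagonal(W, 0.0)

    def perceive(lesion_frac):
        """Relay a noisy percept through a lesioned bottom-up pathway and
        report the perceiving module's overlap with each stored percept."""
        mask = (rng.random(N) > lesion_frac).astype(float)  # necrotic units transmit nothing
        s = np.sign(percepts[0] * mask + 0.3 * rng.standard_normal(N))
        for _ in range(50):                                 # relax toward an attractor
            s = np.sign(W @ s + 1e-9)
        return percepts @ s / N

    # With growing lesions, convergence to the veridical percept degrades and a
    # mismatched attractor can win: the analogue of a hallucinated percept.
    for frac in [0.0, 0.3, 0.6]:
        print(f"lesion={frac:.1f}  overlaps: {np.round(perceive(frac), 2)}")
    ```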

    Electrophysiological evidence for memory schemas in the rat hippocampus

    According to Piaget and Bartlett, learning involves both the assimilation of new memories into networks of preexisting knowledge and the alteration of existing networks to accommodate new information into existing schemas. Recent evidence suggests that the hippocampus integrates related memories into schemas that link representations of separately acquired experiences. In this thesis, I first review models of how memories of individual experiences become consolidated into the structure of world knowledge. Disruption of consolidated memories can occur during related learning, which suggests that consolidation of new information is the reconsolidation of related memories. The accepted role of the hippocampus during memory consolidation and reconsolidation suggests that it is also involved in modifying the appropriate schemas during learning. To study schema development, I trained rats to retrieve rewards at different loci on a maze while recording hippocampal cells. About a quarter of the cells were active at multiple goal sites, though the ensemble as a whole distinguished goal loci from one another. When new goals were introduced, cells that had been active at old goal locations began firing at the new locations. This initial generalization decreased over the days after learning. Learning also caused changes in firing patterns at well-learned goal locations. These results suggest that learning was supported by modification of an active schema of spatially related reward loci. In a second experiment, I extended these findings to a schema of object-place associations. Ensemble activity was influenced by a hierarchy of task dimensions: the experimental context, the rat's spatial location, the reward potential, and the identity of sampled objects. As rats learned about new objects, cells that had previously fired for particular object-place conjunctions generalized their firing patterns to new conjunctions that similarly predicted reward. In both experiments, I observed highly structured representations of a set of related experiences. This organization of hippocampal activity counters key assumptions in standard models of hippocampal function, which predict relative independence between memory traces. Instead, these findings reveal neural mechanisms by which the hippocampus develops a relational organization of memories that could support novel, inferential judgments between indirectly related events.
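    The generalization result lends itself to a simple population-vector analysis. The sketch below, on synthetic firing rates rather than the thesis's recordings, shows the kind of computation involved: cosine similarity between ensemble rate vectors at old and new goals, declining across days as the new representation differentiates. The mixing schedule is an assumption made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_cells = 50

    def rates(mean, noise=0.5):
        """Noisy, non-negative firing-rate vector around a mean profile."""
        return np.clip(mean + noise * rng.standard_normal(n_cells), 0.0, None)

    old_goal = 5.0 * rng.random(n_cells)       # mean rates at a well-learned goal

    # Day 1: new-goal activity largely generalizes from the old goal;
    # over days the mixture shifts toward an independent rate profile.
    for day, mix in [(1, 0.8), (2, 0.5), (3, 0.2)]:
        new_goal = rates(mix * old_goal + (1 - mix) * 5.0 * rng.random(n_cells))
        old = rates(old_goal)
        sim = old @ new_goal / (np.linalg.norm(old) * np.linalg.norm(new_goal))
        print(f"day {day}: old-vs-new population-vector similarity = {sim:.2f}")
    ```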

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0, São Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience

    At first glance, artificial neural networks, with engineered learning algorithms and carefully chosen nonlinearities, are nothing like the complicated self-organized spiking neural networks studied by theoretical neuroscientists. Yet both adapt to their inputs, keep information from the past in their state space, and are capable of learning, implying that some information-processing principles should be common to both. In this thesis we study those principles by incorporating notions from systems theory, statistical physics, and graph theory into artificial neural networks and theoretical neuroscience models.

    The starting point for this thesis is reservoir computing (RC), a learning paradigm used both in machine learning (Jaeger, 2004) and in theoretical neuroscience (Maass, 2002). A neural network in RC consists of two parts: a reservoir, a directed and weighted network of neurons that projects the input time series onto a high-dimensional space, and a readout, which is trained to read the states of the neurons in the reservoir and combine them linearly to give the desired output. In classical RC, the reservoir is randomly initialized and left untrained, which alleviates the training costs in comparison to other recurrent neural networks. However, this lack of training implies that reservoirs are not adapted to specific tasks, and thus their performance is often lower than that of other neural networks. Our contribution has been to show how knowledge about a task can be integrated into the reservoir architecture, so that reservoirs can be tailored to specific problems without training. We do this by identifying two features that are useful for machine learning: the memory of the reservoir and its power spectrum. First, we show that correlations between neurons limit the capacity of the reservoir to retain traces of previous inputs, and demonstrate that those correlations are controlled by the moduli of the eigenvalues of the adjacency matrix of the reservoir. Second, we prove that when the reservoir resonates at the frequencies present in the desired output signal, the performance of the readout increases.

    Knowing which features of the reservoir dynamics we need, the next question is how to impose them. The simplest way to design a network that resonates at a certain frequency is to add cycles, which act as feedback loops, but this also induces correlations and hence modifies the memory. To disentangle the frequency design from the memory design, we studied how the addition of cycles modifies the eigenvalues of the adjacency matrix of the network. Surprisingly, the resulting eigenvalue distributions are quite beautiful (Aceituno, 2019) and can be characterized using tools from random matrix theory. Combining this knowledge with our result relating eigenvalues and correlations, we designed a heuristic that tailors reservoirs to specific tasks, and showed that it improves upon state-of-the-art RC in three different machine learning tasks. Although this idea works in the machine learning version of RC, there is one fundamental problem when we try to translate it to the world of theoretical neuroscience: the proposed frequency adaptation requires prior knowledge of the task, which might not be plausible in a biological neural network.
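    For reference, the classical RC pipeline described above fits in a few lines: a random, untrained reservoir scaled to a target spectral radius, and a linear readout trained by ridge regression on a delayed-recall task. All sizes and constants below are illustrative assumptions, not the thesis's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, T, washout = 300, 3000, 200

    # Reservoir: random, untrained, scaled to spectral radius 0.9.
    W = rng.standard_normal((N, N)) / np.sqrt(N)
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.standard_normal(N)

    u = np.sin(0.2 * np.arange(T)) + 0.5 * rng.standard_normal(T)  # input series
    y = np.roll(u, 3)                                   # task: recall the input 3 steps back

    # Drive the reservoir and collect its states.
    X = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])
        X[t] = x

    # The readout is the only trained part: ridge regression onto the target.
    A = X[washout:]
    w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y[washout:])
    print(f"memory-task MSE: {np.mean((A @ w_out - y[washout:])**2):.4f}")
    ```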
    The following questions are therefore whether those resonances can emerge through unsupervised learning, and what kind of learning rules would be required. Remarkably, these resonances can be induced by the well-known spike-timing-dependent plasticity (STDP) combined with homeostatic mechanisms. We show this by deriving two self-consistent equations: one in which the activity of every neuron can be calculated from its synaptic weights and its external inputs, and a second in which the synaptic weights can be obtained from the neural activity. By considering spatio-temporal symmetries in our inputs, we obtained two families of solutions to those equations in which a periodic input is enhanced by the neural network after STDP. This approach shows that periodic and quasiperiodic inputs can induce resonances that agree with the aforementioned RC theory. Those results, although rigorous, are expressed in the language of statistical physics and cannot easily be tested or verified on real, scarce data. To make them more accessible to the neuroscience community, we showed that latency reduction, a well-known effect of STDP (Song, 2000) that has been observed experimentally (Mehta, 2000), generates neural codes that agree with the self-consistency equations and their solutions. In particular, this analysis shows that metabolic efficiency, synchronization, and prediction can emerge from the same phenomenon of latency reduction, thus closing the loop with our original machine learning problem.

    To summarize, this thesis exposes principles for learning in recurrent neural networks that are consistent with adaptation in the nervous system and also improve current machine learning methods. This is done by leveraging features of the dynamics of recurrent neural networks, such as resonances and correlations, in machine learning problems, and then imposing the required dynamics on reservoir computing through control-theoretic notions such as feedback loops and spectral analysis. We then assessed the plausibility of such adaptation in biological networks, deriving solutions from self-organizing processes that are biologically plausible and align with the machine learning prescriptions. Finally, we relate those processes to learning rules in biological neurons, showing how small local adaptations of spike times can lead to neural codes that are efficient and can be interpreted in machine learning terms.
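    The latency-reduction effect invoked here can be reproduced with a deliberately minimal sketch: one leaky unit, one spike per afferent per pattern, and a slightly depression-dominated pair-based STDP rule (all constants assumed for illustration, not taken from the thesis). Weights onto the earliest afferents are potentiated, so the postsynaptic response creeps earlier over repetitions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_in, T_pat = 50, 100                         # afferents, pattern length (ms)
    spike_times = rng.integers(0, T_pat, n_in)    # each afferent fires once per pattern
    w = np.full(n_in, 0.5)
    A_plus, A_minus, tau = 0.02, 0.021, 20.0      # depression-dominated STDP
    threshold = 4.0

    def response_latency(w):
        """Time at which the leaky unit first crosses threshold, or None."""
        v = 0.0
        for t in range(T_pat):
            v = v * np.exp(-1.0 / tau) + w[spike_times == t].sum()
            if v >= threshold:
                return t
        return None

    for epoch in range(200):
        t_post = response_latency(w)
        if t_post is None:
            continue
        dt = t_post - spike_times                 # post-minus-pre timing
        pre_first = dt >= 0
        w[pre_first] += A_plus * np.exp(-dt[pre_first] / tau)    # causal pairs: LTP
        w[~pre_first] -= A_minus * np.exp(dt[~pre_first] / tau)  # acausal pairs: LTD
        np.clip(w, 0.0, 1.0, out=w)
        if epoch % 50 == 0:
            print(f"epoch {epoch:3d}: latency = {t_post} ms")
    ```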

    Short-Term Plasticity at the Schaffer Collateral: A New Model with Implications for Hippocampal Processing

    A new mathematical model of short-term synaptic plasticity (STP) at the Schaffer collateral is introduced. Like other models of STP, the new model relates short-term synaptic plasticity to an interaction between facilitative and depressive dynamic influences. Unlike previous models, the new model successfully simulates facilitative and depressive dynamics within the framework of the synaptic vesicle cycle. The novelty of the model lies in its description of a competitive interaction between calcium-sensitive proteins for binding sites on the vesicle-release machinery. By attributing specific molecular causes to observable presynaptic effects, the new model of STP can predict the effects of specific alterations to the presynaptic neurotransmitter-release mechanism. This understanding will guide further experiments into presynaptic functionality, and may contribute insights toward the development of pharmaceuticals targeting illnesses that manifest aberrant synaptic dynamics, such as fragile X syndrome and schizophrenia. The new model of STP will also add realism to brain-circuit models that simulate cognitive processes such as attention and memory. The hippocampal processing loop is an example of a brain circuit involved in memory formation. The hippocampus filters and organizes large amounts of spatio-temporal data in real time according to contextual significance. Synaptic dynamics in the hippocampal system are speculated to keep the system close to a region of instability that increases encoding capacity and discriminative capability. In particular, synaptic dynamics at the Schaffer collateral are proposed to coordinate the output of the highly dynamic CA3 region of the hippocampus with the phase code in CA1 that modulates communication between the hippocampus and the neocortex.
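    The abstract does not give the new model's equations, but the facilitation-depression interaction it builds on is conventionally captured by Tsodyks-Markram-style dynamics, sketched below as a baseline for comparison. The constants are illustrative, and the paper's vesicle-cycle model is explicitly different.

    ```python
    import numpy as np

    # Facilitation variable u and depression/resource variable R.
    U, tau_f, tau_d = 0.2, 200.0, 300.0        # ms; illustrative constants

    def efficacy_train(spike_times_ms, T=1000.0, dt=1.0):
        """Relative synaptic efficacy u*R released at each presynaptic spike."""
        u, R, out = U, 1.0, []
        spikes = set(spike_times_ms)
        t = 0.0
        while t <= T:
            u += dt * (U - u) / tau_f          # facilitation decays to baseline
            R += dt * (1.0 - R) / tau_d        # resources recover toward 1
            if t in spikes:
                u += U * (1.0 - u)             # each spike facilitates release...
                out.append(u * R)              # ...which draws on the resources...
                R -= u * R                     # ...and depletes them
            t += dt
        return out

    # A 20 Hz train: efficacy first facilitates, then depresses as resources run down.
    print(np.round(efficacy_train([float(t) for t in range(50, 500, 50)]), 3))
    ```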

    Ship steering control using feedforward neural networks

    One significant problem in the design of ship steering control systems is that the dynamics of the vessel change with operating conditions such as the forward speed of the vessel, the depth of the water, and loading conditions. Approaches considered in the past to overcome these difficulties include the use of self-adaptive control systems, which adjust the control characteristics on a continuous basis to suit the current operating conditions. Artificial neural networks have received considerable attention in recent years and have been considered for a variety of applications in which the characteristics of the controlled system change significantly with operating conditions or with time. Such networks have a configuration that remains fixed once the training phase is complete. The resulting controlled systems thus have more predictable characteristics than those found in many forms of traditional self-adaptive control systems. In particular, stability bounds can be investigated through simulation studies, as with any other form of controller having fixed characteristics. Feedforward neural networks have enjoyed many successful applications in the field of systems and control. These networks fall into two major categories: multilayer perceptrons and radial basis function networks. In this thesis, we explore the applicability of both of these artificial neural network architectures to the automatic steering of ships in a course-changing mode of operation. The approach adopted involves training a single artificial neural network to represent a series of conventional controllers for different operating conditions. The resulting network thus captures, in a nonlinear fashion, the essential characteristics of all of the conventional controllers. Most of the artificial neural network controllers developed in this thesis are trained with data generated through simulation studies. However, experience was also gained in developing a neurocontroller on the basis of real data gathered from an actual scale model of a supply ship. Another important aspect of this work is the applicability of local model networks to modelling the dynamics of a ship. Local model networks can be regarded as a generalized form of radial basis function networks and have already proved their worth in a number of applications involving the modelling of systems whose dynamic characteristics vary significantly with operating conditions. The work presented in this thesis indicates that these networks are highly suitable for modelling the dynamics of a ship.
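    The core training idea, one network standing in for a family of gain-scheduled conventional controllers, can be sketched as follows. The PD gain schedule, input ranges, and network size are assumptions for illustration (with scikit-learn's MLPRegressor as a stand-in multilayer perceptron), not the controllers or ship models used in the thesis.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)

    def pd_rudder(err, err_rate, speed):
        """Conventional PD autopilot with speed-scheduled gains (assumed schedule)."""
        kp, kd = 4.0 / speed, 8.0 / speed
        return kp * err + kd * err_rate

    # Training data: (heading error, error rate, forward speed) -> rudder command,
    # sampled across the operating envelope the conventional controllers cover.
    n = 5000
    X = np.column_stack([
        rng.uniform(-0.5, 0.5, n),    # heading error (rad)
        rng.uniform(-0.1, 0.1, n),    # error rate (rad/s)
        rng.uniform(4.0, 12.0, n),    # forward speed (m/s)
    ])
    y = pd_rudder(X[:, 0], X[:, 1], X[:, 2])

    # One fixed feedforward network replaces the whole speed-indexed family.
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
    net.fit(X, y)
    print("fit R^2 over the envelope:", round(net.score(X, y), 3))
    ```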

    Abstracts on Radio Direction Finding (1899 - 1995)

    The files on this record represent the various databases that originally composed the CD-ROM issue of the "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography.) Because technological obsolescence prevented current and future audiences from accessing the bibliography, DKL exported the various databases contained on the CD-ROM and converted them into the three files on this record. The contents of these files are: 1) RDFA_CompleteBibliography_xls.zip [RDFA_CompleteBibliography.xls: metadata for the complete bibliography, in Excel 97-2003 Workbook format; RDFA_Glossary.xls: glossary of terms, in Excel 97-2003 Workbook format; RDFA_Biographies.xls: biographies of leading figures, in Excel 97-2003 Workbook format]; 2) RDFA_CompleteBibliography_csv.zip [RDFA_CompleteBibliography.TXT: metadata for the complete bibliography, in CSV format; RDFA_Glossary.TXT: glossary of terms, in CSV format; RDFA_Biographies.TXT: biographies of leading figures, in CSV format]; 3) RDFA_CompleteBibliography.pdf: a human-readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.