Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989
Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum, covering two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.
The Performance of Associative Memory Models with Biologically Inspired Connectivity
This thesis is concerned with one important question in artificial neural networks: how the biologically inspired connectivity of a network affects its associative memory performance.
In recent years, research on the mammalian cerebral cortex, which carries the main responsibility for associative memory in the brain, has suggested that this cortical network is far from the full connectivity commonly assumed in traditional associative memory models. It is instead a sparse network with interesting "small world" characteristics: short Mean Path Length, high Clustering Coefficient, and high Global and Local Efficiency. Most of the networks in this thesis are therefore sparsely connected.
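The small-world measures named above can be made concrete with a short sketch. The graph construction below (a ring lattice with a few random shortcut edges, in the spirit of a Watts-Strogatz graph) and all its sizes and parameters are illustrative assumptions, not networks from the thesis:

```python
# Illustrative sketch: clustering coefficient and mean path length,
# the two small-world measures the thesis leans on, computed in plain
# Python for a ring lattice with a handful of random shortcuts.
import random
from collections import deque

def make_small_world(n=60, k=4, shortcuts=10, seed=0):
    """Ring lattice (each node linked to its k nearest neighbours)
    plus `shortcuts` random long-range edges."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    return adj

def clustering_coefficient(adj):
    """Mean over nodes of (edges among neighbours) / (possible edges)."""
    total = 0.0
    for v, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def mean_path_length(adj):
    """Average shortest-path length over all reachable node pairs (BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for u, d in dist.items() if u != src)
        pairs += len(dist) - 1
    return total / pairs

g = make_small_world()
print(f"clustering coefficient: {clustering_coefficient(g):.3f}")
print(f"mean path length:       {mean_path_length(g):.3f}")
```

A pure ring lattice already has a high clustering coefficient; the few shortcut edges shorten the mean path length dramatically while barely lowering clustering, which is exactly the "small world" combination.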
There is, however, no conclusive evidence of how these different connectivity characteristics affect the associative memory performance of a network. This thesis addresses that question using networks with different types of connectivity, inspired by biological evidence.
The findings of this programme are unexpected and important. Results show that the performance of a non-spiking associative memory model is linearly correlated with, and hence predictable from, the Clustering Coefficient of the network, regardless of the detailed connectivity pattern. This is particularly important because the Clustering Coefficient is a static measure of one aspect of connectivity, whereas associative memory performance reflects the outcome of a complex dynamic process.
On the other hand, this research reveals that improvements in a network's performance do not necessarily require an increase in its wiring cost: it is possible to construct networks with high associative memory performance but relatively low wiring cost. In particular, of all the connectivity models examined, Gaussian-distributed connectivity achieves the best performance at the lowest wiring cost.
Our results from this programme also suggest that a modular network with an appropriate configuration of Gaussian-distributed connectivity, both within each module and across modules, can perform nearly as well as the non-modular Gaussian-distributed network.
Finally, a comparison between non-spiking and spiking associative memory models suggests that, in terms of associative memory performance, the effect of connectivity transcends the details of the neural model, that is, whether the neurons are spiking or non-spiking.
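The kind of model the thesis studies can be sketched compactly. The code below is an illustrative assumption of mine, not the thesis's actual simulator: a non-spiking Hopfield-style associative memory whose sparse connectivity mask is drawn with Gaussian-distributed connection probability over distance on a ring of units.

```python
# Sketch (assumptions, not the thesis's simulator): a non-spiking
# Hopfield-style associative memory with sparse connectivity whose
# connection probability falls off as a Gaussian of ring distance.
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns, sigma = 200, 3, 20.0

# Gaussian-distributed connectivity mask (symmetric, no self-connections).
idx = np.arange(n)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  n - np.abs(idx[:, None] - idx[None, :]))
prob = np.exp(-dist**2 / (2 * sigma**2))
mask = rng.random((n, n)) < prob
mask = np.triu(mask, 1)
mask = mask | mask.T  # symmetrise; diagonal stays False

# Hebbian storage of random +/-1 patterns, restricted to existing wires.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
w = (patterns.T @ patterns) * mask / n

def recall(cue, steps=20):
    """Synchronous sign updates until the state settles (or steps run out)."""
    s = cue.copy()
    for _ in range(steps):
        new = np.where(w @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# Flip 10% of one stored pattern and try to recover it.
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 10, replace=False)
cue[flip] *= -1
overlap = (recall(cue) @ patterns[0]) / n
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```

An overlap near 1.0 means the noisy cue was cleaned up to the stored pattern; sweeping `sigma` (and hence wiring cost) against recall quality is the sort of trade-off the thesis measures with far more care.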
Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing
Human society now faces a grand challenge: satisfying the growing demand for computing power while keeping energy consumption sustainable. With the end of CMOS technology scaling, innovations are required to tackle this challenge in radically different ways. Inspired by the emerging understanding of computation in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the path towards energy-efficient real-time computing systems. However, existing silicon neuron approaches are either designed to faithfully reproduce biological neuron dynamics, and are hence incompatible with RRAM synapses, or require extensive peripheral circuitry to modulate a synapse, and are thus deficient in learning capability. As a result, they forfeit most of the density advantage gained by the adoption of nanoscale devices and fail to realize a functional computing system.
This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental elements of brain-inspired computing. Versatile CMOS spiking neurons are presented that combine, in compact integrated circuit modules, integrate-and-fire dynamics, the capability to drive dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations. A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurement results successfully validated the approach.
The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone towards next-generation energy-efficient brain-inspired cognitive computing systems.
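The in situ STDP mentioned above can be sketched in software. The pair-based rule below captures the causal/anti-causal asymmetry such circuits implement; the amplitudes and time constants are illustrative assumptions, not values from the chip:

```python
# Hedged sketch of pair-based STDP: a post-synaptic spike shortly after
# a pre-synaptic spike strengthens the synapse; the reverse order
# weakens it. All constants here are illustrative, not from the chip.
import math

A_PLUS, A_MINUS = 0.05, 0.055     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # post after pre: causal pair, potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # pre after post: anti-causal pair, depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Causal pair (post 5 ms after pre) vs. anti-causal pair (5 ms the
# other way): same magnitude of timing, opposite sign of update.
print(stdp_dw(t_pre=10.0, t_post=15.0))
print(stdp_dw(t_pre=15.0, t_post=10.0))
```

In the RRAM setting of the dissertation this update is not computed digitally but emerges from overlapping pre- and post-synaptic voltage waveforms across the device; the function above only states the target input-output behaviour.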
Modeling Quantum Mechanical Observers via Neural-Glial Networks
We investigate the theory of observers in the quantum mechanical world using a novel model of the human brain that incorporates the glial network into the Hopfield model of the neural network. Our model is based on a microscopic construction of a quantum Hamiltonian of the synaptic junctions. Using the Eguchi-Kawai large N reduction, we show that, when the number of neurons and astrocytes is exponentially large, the degrees of freedom of the dynamics of the neural and glial networks can be completely removed and, consequently, that the retention time of the superposition of wave functions in the brain is as long as that of the microscopic quantum system of pre-synaptic sites. Based on this model, we introduce the classical information entropy of the neural-glial network and use it to propose a criterion for the brain to be a quantum mechanical observer. (24 pages; published version.)
Synaptic configuration and reconfiguration in the neocortex are spatiotemporally selective
Brain computation relies on neural networks. Neurons extend neurites such as dendrites and axons, and the contacts between these neurites, which form chemical synapses, are the biological basis of signal transmission in the central nervous system. Individual neuronal outputs can influence other neurons within the range of the axonal spread, while the activity of a single neuron can be affected by the afferents in its somatodendritic field. The morphological profile therefore constrains the functional role each neuron can play. In addition, synaptic connectivity among neurons displays preferences based on the characteristics of the presynaptic and postsynaptic neurons. Here, the author reviews the "spatial" and "temporal" connection selectivity in the neocortex. The histological description of the neocortical circuitry depends primarily on the classification of cell types, and the development of gene engineering techniques allows cell type-specific visualization of dendrites and axons as well as somata. Using genetic labeling of particular cell populations combined with immunohistochemistry and imaging at subcellular spatial resolution, we revealed the "spatial selectivity" of cortical wiring, in which synapses are non-uniformly distributed over the subcellular somatodendritic domains in a presynaptic cell type-specific manner. In addition, cortical synaptic dynamics in learning exhibit presynaptic cell type-dependent "temporal selectivity": corticocortical synapses appear only transiently during the learning phase, while learning-induced new thalamocortical synapses persist, indicating that distinct circuits may supervise the formation of learning-specific ephemeral synapses and memory-specific immortal synapses.
The selectivity of spatial configuration and temporal reconfiguration in the neural circuitry may govern diverse functions in the neocortex. The version of record of this article, first published in Anatomical Science International, is available online at the publisher's website: https://doi.org/10.1007/s12565-023-00743-
Memristors for the Curious Outsiders
We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a two-terminal passive component whose dynamic resistance depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide nonpractitioners into the field of memristive circuits and their connection to machine learning and neural computation. (Perspective paper for MDPI Technologies; 43 pages.)
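The "dynamic resistance depending on an internal parameter" can be made concrete with the linear-drift model often used to introduce memristive behavior: resistance interpolates between an on-state and an off-state according to a state variable that drifts with the current. All parameter values below are assumptions for demonstration, not figures from the review:

```python
# Illustrative linear-drift memristor sketch: resistance is
# R(x) = R_ON * x + R_OFF * (1 - x) with internal state x in [0, 1],
# and the state drifts as dx/dt = mu * R_ON / D^2 * i(t).
# Parameter values are assumed for demonstration only.
import math

R_ON, R_OFF = 100.0, 16_000.0   # bounding resistances (ohms)
MU = 1e-14                      # assumed ion mobility (m^2 s^-1 V^-1)
D = 1e-8                        # assumed device thickness (m)

def simulate(v_of_t, x0=0.1, dt=1e-5, t_end=0.02):
    """Forward-Euler integration of the state equation, clipping x to
    [0, 1]; returns (time, current, state) sample lists."""
    x, t = x0, 0.0
    ts, i_s, xs = [], [], []
    while t < t_end:
        r = R_ON * x + R_OFF * (1 - x)          # instantaneous resistance
        i = v_of_t(t) / r                        # Ohm's law at this instant
        x = min(1.0, max(0.0, x + MU * R_ON / D**2 * i * dt))
        ts.append(t); i_s.append(i); xs.append(x)
        t += dt
    return ts, i_s, xs

# A sinusoidal drive sweeps the state up and down; plotting i against v
# would trace the pinched hysteresis loop characteristic of memristors.
ts, i_s, xs = simulate(lambda t: math.sin(2 * math.pi * 50 * t))
print(f"state range: {min(xs):.4f} .. {max(xs):.4f}")
```

Because the resistance at any instant depends on the history of the current through the device, the i-v curve under periodic drive is a loop rather than a line, which is the defining signature the review discusses.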