Neuroengineering of Clustering Algorithms
Cluster analysis can be broadly divided into multivariate data visualization, clustering algorithms, and cluster validation. This dissertation contributes neural network-based techniques for all three unsupervised learning tasks. In particular, the first paper provides a comprehensive review of adaptive resonance theory (ART) models for engineering applications and sets the context for the four subsequent papers. These papers are devoted to enhancements of ART-based clustering algorithms from (a) a practical perspective, by exploiting the visual assessment of cluster tendency (VAT) sorting algorithm as a preprocessor for ART offline training, thus mitigating ordering effects; and (b) an engineering perspective, by designing a family of multi-criteria ART models: dual vigilance fuzzy ART and distributed dual vigilance fuzzy ART (both capable of detecting complex cluster structures), merge ART (which aggregates partitions and lessens ordering effects in online learning), and cluster validity index vigilance in fuzzy ART (which features robust vigilance parameter selection and alleviates ordering effects in offline learning). The sixth paper enhances data visualization with self-organizing maps (SOMs) by depicting information-theoretic similarity measures between neighboring neurons on the reduced-dimension, topology-preserving SOM grid. This visualization's parameters are estimated using samples selected via a single-linkage procedure, thereby generating heatmaps that portray more homogeneous within-cluster similarities and crisper between-cluster boundaries. The seventh paper presents incremental cluster validity indices (iCVIs), realized by (a) incorporating existing formulations of online computations for clusters' descriptors, or (b) modifying an existing ART-based model and incrementally updating local density counts between prototypes.
Moreover, this last paper provides the first comprehensive comparison of iCVIs in the computational intelligence literature. --Abstract, page iv
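The dual-vigilance idea behind these models can be illustrated with a toy sketch. The following Python code assumes standard fuzzy ART conventions (complement coding, choice function, fast learning) and a simplified search order; it is an illustration of the two-threshold mechanism, not the exact formulation in the papers. Samples whose match falls between the lower and upper vigilance thresholds spawn a new category that is linked to an existing cluster label, which is how these models detect complex cluster structures.

```python
def complement_code(x):
    """Fuzzy ART complement coding: [x, 1 - x]."""
    return list(x) + [1.0 - v for v in x]

def fuzzy_and(a, b):
    return [min(u, v) for u, v in zip(a, b)]

def norm1(a):
    return sum(a)

class DualVigilanceFuzzyART:
    """Toy dual-vigilance fuzzy ART: categories that pass only the lower
    vigilance threshold become new categories mapped to an existing cluster."""

    def __init__(self, rho_ub=0.8, rho_lb=0.5, alpha=0.001, beta=1.0):
        assert rho_ub >= rho_lb
        self.rho_ub, self.rho_lb = rho_ub, rho_lb
        self.alpha, self.beta = alpha, beta
        self.w = []        # category prototypes
        self.cluster = []  # cluster label of each category

    def learn(self, x):
        i = complement_code(x)
        # rank existing categories by the fuzzy ART choice function
        order = sorted(range(len(self.w)),
                       key=lambda j: norm1(fuzzy_and(i, self.w[j]))
                                     / (self.alpha + norm1(self.w[j])),
                       reverse=True)
        for j in order:
            match = norm1(fuzzy_and(i, self.w[j])) / norm1(i)
            if match >= self.rho_ub:
                # resonance: update the winning prototype (fast learning)
                f = fuzzy_and(i, self.w[j])
                self.w[j] = [self.beta * a + (1.0 - self.beta) * b
                             for a, b in zip(f, self.w[j])]
                return self.cluster[j]
            if match >= self.rho_lb:
                # new category linked to the same cluster as category j
                self.w.append(list(i))
                self.cluster.append(self.cluster[j])
                return self.cluster[j]
        # nothing passed either threshold: new category, new cluster
        self.w.append(list(i))
        label = max(self.cluster, default=-1) + 1
        self.cluster.append(label)
        return label

# usage: two well-separated points land in different clusters
art = DualVigilanceFuzzyART(rho_ub=0.9, rho_lb=0.6)
c0 = art.learn([0.1, 0.1])
c1 = art.learn([0.9, 0.9])
```

Because resonance depends only on the match ratio, presentation order still matters in this sketch, which is the ordering effect the VAT preprocessing and merge ART contributions address.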
Synaptic plasticity and memory addressing in biological and artificial neural networks
Biological brains are composed of neurons, interconnected by synapses to create large complex networks. Learning and memory occur, in large part, due to synaptic plasticity -- modifications in the efficacy of information transmission through these synaptic connections. Artificial neural networks model these networks with neural "units" which communicate through synaptic weights. Models of learning and memory propose synaptic plasticity rules that describe and predict the weight modifications. An equally important but under-evaluated question is the selection of which synapses should be updated in response to a memory event. In this work, we attempt to separate the question of synaptic plasticity from that of memory addressing.
Chapter 1 provides an overview of the problem of memory addressing and a summary of the solutions that have been considered in computational neuroscience and artificial intelligence, as well as those that may exist in biology. Chapter 2 presents in detail a solution to memory addressing and synaptic plasticity in the context of familiarity detection, suggesting strong feedforward weights and anti-Hebbian plasticity as the respective mechanisms. Chapter 3 proposes a model of recall, with storage performed by addressing through local third factors and neo-Hebbian plasticity, and retrieval by content-based addressing. In Chapter 4, we consider the problem of concurrent memory consolidation and memorization. Both storage and retrieval are performed by content-based addressing, but the plasticity rule itself is implemented by gradient descent, modulated according to whether an item should be stored in a distributed manner or memorized verbatim. However, the classical method for computing gradients in recurrent neural networks, backpropagation through time, is generally considered unbiological. In Chapter 5 we suggest a more realistic implementation through an approximation of recurrent backpropagation.
Taken together, these results propose a number of potential mechanisms for memory storage and retrieval, each of which separates the mechanism of synaptic updating -- plasticity -- from that of synapse selection -- addressing. Explicit studies of memory addressing may find applications not only in artificial intelligence but also in biology. In artificial networks, for example, selectively updating memories in large language models can help improve user privacy and security. In biological ones, understanding memory addressing can help improve health outcomes and the treatment of memory-based illnesses such as Alzheimer's disease or PTSD.
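The familiarity-detection mechanism summarized for Chapter 2 can be caricatured in a few lines. The sketch below assumes a single linear readout unit with strong uniform feedforward weights and a simple anti-Hebbian rule that depresses the weights of co-active inputs; it illustrates the principle only, not the thesis's actual model. After one presentation, the stored pattern drives the readout more weakly than a novel one, so a low response signals familiarity.

```python
def response(w, x):
    """Linear readout: weighted sum of the input pattern."""
    return sum(wi * xi for wi, xi in zip(w, x))

def anti_hebbian_update(w, x, eta=0.5):
    """Depress the weights of inputs that are co-active with the readout."""
    y = response(w, x)
    return [wi - eta * y * xi for wi, xi in zip(w, x)]

n = 8
w = [1.0] * n                          # strong initial feedforward weights
familiar = [1, 0, 1, 0, 0, 1, 0, 0]    # pattern presented once
novel    = [0, 1, 0, 0, 1, 0, 0, 1]    # non-overlapping pattern, never seen

w = anti_hebbian_update(w, familiar)   # a single storage event
```

Here addressing is implicit: only the synapses belonging to active inputs are modified, while the rest of the weight vector is untouched.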
NASA patent abstracts bibliography: A continuing bibliography. Section 1: Abstracts (supplement 42)
Abstracts are provided for 174 patents and patent applications entered into the NASA scientific and technical information system during the period July 1992 through December 1992. Each entry consists of a citation, an abstract, and, in most cases, a key illustration selected from the patent or patent application.
NASA SBIR abstracts of 1990 phase 1 projects
The research objectives of the 280 projects placed under contract in the National Aeronautics and Space Administration (NASA) 1990 Small Business Innovation Research (SBIR) Phase 1 program are described. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses in response to NASA's 1990 SBIR Phase 1 Program Solicitation. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number, from 001 to 280, in order of its appearance in the body of the report. The document also includes appendixes that provide additional information about the SBIR program and permit cross-reference of the 1990 Phase 1 projects by company name, location by state, principal investigator, NASA field center responsible for management of each project, and NASA contract number.
Bio-mimetic Spiking Neural Networks for unsupervised clustering of spatio-temporal data
Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research, perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis and structural plasticity. We used the review as a guide for developing the consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model's performance on clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of fit solutions than either approach alone. The choice of fitness definition affects the network's performance on fitness-related and unrelated tasks. We found that neuron type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness.
This interdisciplinary work contributes to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
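For reference, the spike-timing-dependent plasticity referred to above is often written as an exponential pair-based update: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, and the reverse order depresses it. The sketch below uses illustrative parameter values, not those of the networks studied here.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:       # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    if dt < 0:       # post before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

This rule is local -- it needs only the spike times of the two connected neurons -- which is one of the biologically motivated features (local weight updates) the work builds on.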
NASA SBIR abstracts of 1991 phase 1 projects
The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number, from 001 to 301, in order of its appearance in the body of the report. Included are appendixes that provide additional information about the SBIR program and permit cross-reference of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.
The Role of Synaptic Tagging and Capture for Memory Dynamics in Spiking Neural Networks
Memory serves to process and store information about experiences such that this information can be used in future situations. The transfer from transient storage into long-term memory, which retains information for hours, days, and even years, is called consolidation. In brains, information is primarily stored via the alteration of synapses, so-called synaptic plasticity. While these changes are at first in a transient early phase, they can be transferred to a late phase, meaning that they become stabilized over the course of several hours. This stabilization has been explained by so-called synaptic tagging and capture (STC) mechanisms. To store and recall memory representations, emergent dynamics arise from the synaptic structure of recurrent networks of neurons. This happens through so-called cell assemblies, which feature particularly strong synapses. It has been proposed that the stabilization of such cell assemblies by STC corresponds to so-called synaptic consolidation, which is observed in humans and other animals in the first hours after acquiring a new memory. The exact connection between the physiological mechanisms of STC and memory consolidation remains, however, unclear. It is equally unknown what influence STC mechanisms exert on further cognitive functions that guide behavior. On timescales of minutes to hours (that is, the timescales of STC), such functions include memory improvement, modification of memories, interference and enhancement of similar memories, and transient priming of certain memories. Thus, diverse memory dynamics may be linked to STC, which can be investigated by employing theoretical methods based on experimental data from the neuronal and the behavioral level.
In this thesis, we present a theoretical model of STC-based memory consolidation in recurrent networks of spiking neurons, which are particularly suited to reproduce biologically realistic dynamics. Furthermore, we combine the STC mechanisms with calcium dynamics, which have been found to guide the major processes of early-phase synaptic plasticity in vivo. In three included research articles as well as additional sections, we develop this model and investigate how it can account for a variety of behavioral effects. We find that the model enables the robust implementation of the cognitive memory functions mentioned above. The main steps to this are: 1. demonstrating the formation, consolidation, and improvement of memories represented by cell assemblies; 2. showing that neuromodulator-dependent STC can retroactively control whether information is stored in a temporal or rate-based neural code; and 3. examining the interaction of multiple cell assemblies with transient and attractor dynamics in different organizational paradigms.
In summary, we demonstrate several ways by which STC controls the late-phase synaptic structure of cell assemblies. Linking these structures to functional dynamics, we show that our STC-based model implements functionality that can be related to long-term memory. Thereby, we provide a basis for the mechanistic explanation of various neuropsychological effects.
Network analysis of the cellular circuits of memory
Intuitively, memory is conceived as a collection of static images that we accumulate as we experience the world. But in fact, memories change constantly throughout our lives, shaped by our ongoing experiences. Assimilating new knowledge without corrupting pre-existing memories is therefore a critical brain function. However, learning and memory interact: prior knowledge can proactively influence learning, and new information can retroactively modify memories of past events. The hippocampus is a brain region essential for learning and memory, but the network-level operations that underlie the continuous integration of new experiences into memory, segregating them as discrete traces while enabling their interaction, are unknown. Here I show a network mechanism by which two distinct memories interact. Hippocampal CA1 neuron ensembles were monitored in mice as they explored a familiar environment before and after forming a new place-reward memory in a different environment. By employing a network science representation of the co-firing relationships among principal cells, I first found that new associative learning modifies the topology of the cells' co-firing patterns representing the unrelated familiar environment. I further observed that these neuronal co-firing graphs evolved along three functional axes: the first segregated novelty; the second distinguished individual novel behavioural experiences; while the third revealed cross-memory interaction. Finally, I found that during this process, high-activity principal cells rapidly formed the core representation of each memory, whereas low-activity principal cells gradually joined co-activation motifs throughout individual experiences, enabling cross-memory interactions. These findings reveal an organizational principle of brain networks whereby high- and low-activity cells are differentially recruited into coactivity motifs as building blocks for the flexible integration and interaction of memories.
Finally, I employ a set of manifold learning and related approaches to explore and characterise the complex neural population dynamics within CA1 that underlie simple exploration.
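The co-firing graph construction described above can be sketched as follows: bin each cell's spike train, correlate the binned counts between cell pairs, and draw an edge where the correlation exceeds a threshold. The Pearson measure and the threshold value here are illustrative choices, not the exact pipeline used in the thesis.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length count sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def cofiring_graph(counts, threshold=0.5):
    """counts: one list of binned spike counts per cell -> set of edges (i, j)."""
    edges = set()
    for i in range(len(counts)):
        for j in range(i + 1, len(counts)):
            if pearson(counts[i], counts[j]) >= threshold:
                edges.add((i, j))
    return edges

# three cells: 0 and 1 co-fire across bins, 2 fires independently of them
counts = [[5, 0, 4, 0, 6],
          [4, 1, 5, 0, 5],
          [0, 5, 0, 4, 1]]
edges = cofiring_graph(counts)
```

Tracking how such edge sets change before and after learning is the kind of topological comparison on which the three functional axes are based.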
Evidence of dynamics and disorder using NMR-spectroscopic techniques applied to human Flap-Endonuclease-1
Flap endonuclease 1 (FEN1) is a member of a 5' nuclease superfamily involved in DNA replication and repair. FEN1 hydrolyses the phosphodiester bond one nucleotide into the duplex region of bifurcated double-flapped structures, as found in lagging-strand DNA synthesis. These flap structures need to be cut in a very specific manner on the order of 10^6 times per cell cycle. Therefore, FEN1 is seen as an essential enzyme that maintains genomic integrity across all life forms. How FEN1 achieves its molecular recognition of a chemically very similar but structurally distinct DNA substrate, and how it achieves catalysis on a biochemically relevant timescale, are key questions in understanding the protein system. This thesis describes some of the mechanistic studies used to understand how the structure and dynamics of hFEN1 relate to its function.
It had been proposed that T5 bacteriophage FEN was a catalytically perfect, or diffusion-limited, enzyme, yet its main rate-limiting step after substrate binding was not related to chemistry. To ascertain whether this also holds for hFEN1, the effect of leaving-group pKa on rates of catalysis was measured using 2'-modified double-flapped substrates. Both apparent second-order rates and first-order single-turnover rates of catalysis were found to be insensitive to leaving-group pKa. Furthermore, by supplementing the reaction with glycerol, an unexpectedly high viscosity dependence was observed. The likely explanation is the presence of another physical step in the catalytic cycle that is affected by viscosity.
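The leaving-group analysis amounts to a Brønsted-type fit of log rate against pKa, where a slope near zero indicates that the rate is insensitive to the leaving group and hence that bond breaking is not rate-limiting. A minimal sketch of that fit, using made-up illustrative numbers rather than the thesis's measurements:

```python
import math

def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# hypothetical leaving-group pKa values and near-constant observed rates
pka   = [9.0, 12.0, 14.0]
log_k = [math.log10(2.1), math.log10(2.0), math.log10(1.9)]

beta_lg = lsq_slope(pka, log_k)   # Brønsted slope; ~0 means pKa-insensitive
```

A substantial negative slope would instead point to rate-limiting phosphodiester bond cleavage.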
Previous structural and biophysical studies of hFEN1 identified a helical arch, which was thought to be disordered since bulky 5' flaps can be accommodated through it. Furthermore, the arch is key for positioning the 5' flap into the active site. Using NMR spectroscopic techniques, the solution-state conformation of apo-hFEN1 was analysed. The arch was found to be disordered, but its C-terminal portion transiently sampled α-helical φ,ψ space, while the other half was in an extended conformation. Another DNA recognition region, the α2-α3 loop, was also found to be disordered.
Various ligands and substrates were found to alter the structure and dynamics of hFEN1. Addition of substrate DNA slowed the motion of the arch and the α2-α3 loop to a millisecond timescale. Equally, addition of a single monophosphate nucleotide affected the dynamics of the top of the arch, despite binding in the active site. Furthermore, titration of calcium ions into the active site when DNA was present on the enzyme resulted in large perturbations to substrate recognition sites distant from the active site, potentially linking the specificity of these regions to activity within the active site.