10 research outputs found
Redistribution of Synaptic Efficacy Supports Stable Pattern Learning in Neural Networks
Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse LTP measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. Redistribution of synaptic efficacy (RSE) is here seen as the local realization of a global design principle in a neural network for pattern coding. As is typical of many coding systems, the network learns by dynamically balancing a pattern-independent increase in strength against a pattern-specific increase in selectivity. This computation is implemented by a monotonic long-term memory process which has a bidirectional effect on the postsynaptic potential via functionally complementary signal components. These frequency-dependent and frequency-independent components realize the balance between specific and nonspecific functions at each synapse. This synaptic balance suggests a functional purpose for RSE which, by dynamically bounding total memory change, implements a distributed coding scheme that is stable with fast as well as slow learning. Although RSE would seem to make it impossible to code high-frequency input features, a network preprocessing step called complement coding symmetrizes the input representation, which allows the system to encode high-frequency as well as low-frequency features in an input pattern. A possible physical model interprets the two synaptic signal components in terms of ligand-gated and voltage-gated receptors, where learning converts channels from one type to another. Office of Naval Research and the Defense Advanced Research Projects Agency (N00014-95-1-0409, N00014-95-1-0657)
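Complement coding, the preprocessing step cited above, has a simple standard form in the ART literature: an input vector a with components in [0, 1] is concatenated with its pointwise complement 1 - a, so that low-activity features are represented as explicitly as high-activity ones and the total input norm is the same for every pattern. A minimal NumPy sketch of that step (illustrative only, not the network described in the paper):

import numpy as np

def complement_code(a):
    """Complement-code an input vector a with components in [0, 1].

    Returns A = (a, 1 - a), so both high and low feature values are
    represented explicitly and the L1 norm of A equals len(a).
    """
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

A = complement_code([0.9, 0.1, 0.5])
print(A)          # [0.9 0.1 0.5 0.1 0.9 0.5]
print(A.sum())    # 3.0 -- the norm is independent of the particular pattern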
Distributed ARTMAP
Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. This system incorporates elements of the unsupervised dART model as well as new features, including a content-addressable memory (CAM) rule. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression. The model's computational learning rules correspond to paradoxical cortical data. Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
dARTMAP: A Neural Network for Fast Distributed Supervised Learning
Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. An implementation algorithm here describes one class of dARTMAP networks. This system incorporates elements of the unsupervised dART model as well as new features, including a content-addressable memory (CAM) rule for improved contrast control at the coding field. A dARTMAP system reduces to fuzzy ARTMAP when coding is winner-take-all. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression. National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
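For reference on the winner-take-all special case mentioned in the abstract, the sketch below illustrates one coding step of a standard fuzzy ART module (choice function, vigilance test, fast learning), the building block to which dARTMAP reduces when coding is winner-take-all. It is a simplified illustration, not the dARTMAP algorithm itself, and the parameter names alpha, beta, and rho follow common usage.

import numpy as np

def fuzzy_art_step(A, weights, alpha=0.001, beta=1.0, rho=0.75):
    """One winner-take-all coding step of a fuzzy ART module.

    A       : complement-coded input (1-D array)
    weights : list of committed category weight vectors, same length as A
    Returns the index of the chosen (or newly committed) category after
    the vigilance test and fast learning (beta = 1).
    """
    A = np.asarray(A, dtype=float)
    if not weights:                       # no committed categories yet
        weights.append(A.copy())
        return 0
    # Choice function T_j = |A ^ w_j| / (alpha + |w_j|), ^ = componentwise min
    T = [np.minimum(A, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(T)[::-1]:         # search categories in order of choice
        match = np.minimum(A, weights[j]).sum() / A.sum()
        if match >= rho:                  # vigilance test passed: resonance
            weights[j] = beta * np.minimum(A, weights[j]) + (1 - beta) * weights[j]
            return int(j)
    weights.append(A.copy())              # all categories reset: commit new node
    return len(weights) - 1

# Usage with complement-coded 2-D inputs (fast learning, moderate vigilance).
w = []
for x in (np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.1, 0.9])):
    A = np.concatenate([x, 1.0 - x])
    print(fuzzy_art_step(A, w))           # prints 0, 0, 1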
Data
Data mining on high-dimensional heterogeneous data is a crucial component in information fusion application domains such as remote sensing, surveillance, and homeland security. The information processing requirements of these domains place a premium on security, robustness, performance, and sophisticated analytic methods. This paper introduces a database-centric approach that enables data mining and analysis of data that typically interest the information fusion community. The approach benefits from the inherent security, reliability, and scalability found in contemporary RDBMSs. The capabilities of this approach are demonstrated on satellite imagery. Hyperspectral data are mined using clustering (O-Cluster) and classification (Support Vector Machines) techniques. The data mining is performed inside the database, which ensures maintenance of data integrity and security throughout the analytic effort. Within the database, the clustering and classification results can be further combined with spatial processing components to enable additional analysis.
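The two-stage workflow described above (unsupervised clustering of hyperspectral pixel vectors followed by supervised SVM classification) can be illustrated with open-source stand-ins. The sketch below uses scikit-learn's KMeans and SVC in place of the in-database O-Cluster and SVM implementations, and runs on synthetic data rather than satellite imagery; it shows the shape of the analysis, not the database-resident method of the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for hyperspectral pixel vectors (n_pixels x n_bands).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # hypothetical ground-truth labels

# Unsupervised step: cluster the pixel vectors (KMeans standing in for O-Cluster).
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Supervised step: train an SVM classifier on labeled pixels.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

print("cluster sizes:", np.bincount(clusters))
print("SVM accuracy :", svm.score(X_te, y_te))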
O-Cluster: Scalable Clustering of Large High Dimensional Data Sets
Clustering large data sets of high dimensionality has always been a serious challenge for clustering algorithms. Many recently developed clustering algorithms have attempted to address either data sets with a very large number of records or data sets with a very high number of dimensions. This paper provides a discussion of the advantages and limitations of existing algorithms when they operate on very large multidimensional data sets. To simultaneously overcome both the “curse of dimensionality” and the scalability problems associated with large amounts of data, we propose a new clustering algorithm called O-Cluster. This new clustering method combines a novel active sampling technique with an axis-parallel partitioning strategy to identify continuous areas of high density in the input space. The method operates on a limited memory buffer and requires at most a single scan through the data. We demonstrate the high quality of the obtained clustering solutions, their robustness to noise, and O-Cluster’s excellent scalability.
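To make the partitioning idea concrete, the sketch below recursively places axis-parallel cuts at low-density valleys of per-dimension histograms. It is a simplified illustration of the general strategy only; O-Cluster's active sampling, statistical significance testing, and limited-memory buffer management are omitted, and the function names and thresholds are invented for this example.

import numpy as np

def find_valley(values, bins=20, min_depth=0.3):
    """Return a cutting point at a histogram valley, or None if no clear valley.

    A valley is an interior bin whose count is well below the peaks on
    both sides (relative depth controlled by min_depth).
    """
    hist, edges = np.histogram(values, bins=bins)
    best = None
    for i in range(1, bins - 1):
        left_peak, right_peak = hist[:i].max(), hist[i + 1:].max()
        if hist[i] <= min_depth * min(left_peak, right_peak):
            if best is None or hist[i] < hist[best]:
                best = i
    return None if best is None else 0.5 * (edges[best] + edges[best + 1])

def axis_parallel_partition(X, min_size=50):
    """Recursively split X with axis-parallel cuts placed at low-density valleys."""
    if len(X) < 2 * min_size:
        return [X]
    for d in range(X.shape[1]):                  # try each dimension in turn
        cut = find_valley(X[:, d])
        if cut is not None:
            left, right = X[X[:, d] <= cut], X[X[:, d] > cut]
            if len(left) >= min_size and len(right) >= min_size:
                return (axis_parallel_partition(left, min_size) +
                        axis_parallel_partition(right, min_size))
    return [X]                                   # no valid cut: treat X as one cluster

# Example: two well-separated Gaussian blobs should be separated by a single cut.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 3)), rng.normal(6, 1, (500, 3))])
print([len(part) for part in axis_parallel_partition(X)])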