Neural oscillations during conditional associative learning.
Associative learning requires mapping between complex stimuli and behavioural responses. When multiple stimuli are involved, conditional associative learning is a gradual process based on trial and error. It is established that a distributed network of regions tracks associative learning; however, the role of neural oscillations in human learning remains less clear. Here we used scalp EEG to test how neural oscillations change during learning of arbitrary visuo-motor associations. Participants learned to associate 48 different abstract shapes with one of four button responses through trial and error over repeated presentations of the shapes. To quantify how well the associations were learned on each trial, we used a state-space computational model of learning that provided the probability of each trial being correct given past performance for that stimulus, which we take as a measure of the strength of the association. We used linear modelling to relate single-trial neural oscillations to single-trial measures of association strength. We found that frontal midline theta oscillations during the delay period tracked learning: theta activity was strongest during the early stages of learning and declined as the associations were formed. Further, posterior alpha and low-beta oscillations in the cue period showed strong desynchronisation early in learning, while stronger alpha activity during the delay period was seen as associations became well learned. Moreover, the magnitude of these effects during early learning, before the associations were learned, related to improvements in memory seen on the next presentation of the stimulus. The current study provides clear evidence that frontal theta and posterior alpha/beta oscillations play a key role during associative memory formation.
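The state-space model the abstract mentions yields a per-trial probability of a correct response given past performance. As a rough illustration only (a toy exponential-update stand-in, not the authors' Bayesian state-space smoother; the learning rate `alpha` and chance level `p0` are assumptions), one can track how an association strengthens across repetitions of a single stimulus:

```python
def association_strength(outcomes, alpha=0.3, p0=0.25):
    """Toy per-trial estimate of p(correct) for one stimulus.
    outcomes: 0/1 (incorrect/correct) across repetitions of the shape.
    Returns the predicted p(correct) BEFORE each trial, starting at
    chance (p0 = 1/4 for four response buttons) and moving toward each
    observed outcome by a fixed step alpha."""
    p, curve = p0, []
    for o in outcomes:
        curve.append(p)           # prediction before seeing this trial
        p = p + alpha * (o - p)   # exponential update toward the outcome
    return curve

# errors early, correct responses late -> strength rises toward 1
curve = association_strength([0, 0, 1, 0, 1, 1, 1, 1])
```

Single-trial values like `curve` can then serve as the regressor that the linear modelling step relates to single-trial oscillatory power.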
A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications
This survey samples from the ever-growing family of adaptive resonance theory
(ART) neural network models used to perform the three primary machine learning
modalities, namely, unsupervised, supervised and reinforcement learning. It
comprises a representative list from classic to modern ART models, thereby
painting a general picture of the architectures developed by researchers over
the past 30 years. The learning dynamics of these ART models are briefly
described, and their distinctive characteristics such as code representation,
long-term memory and corresponding geometric interpretation are discussed.
Useful engineering properties of ART (speed, configurability, explainability,
parallelization and hardware implementation) are examined along with current
challenges. Finally, a compilation of online software libraries is provided. It
is expected that this overview will be helpful to new and seasoned ART
researchers.
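The core Fuzzy ART dynamics the survey covers (complement coding, a choice function, a vigilance test, and resonance-driven weight updates) fit in a few lines. A minimal unsupervised sketch, assuming fast learning (`beta = 1`) and default `rho`/`alpha` values chosen only for illustration:

```python
def fuzzy_min(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def norm1(v):
    return sum(v)

def fuzzy_art(inputs, rho=0.7, alpha=0.001, beta=1.0):
    """Minimal Fuzzy ART sketch. Inputs in [0,1]^d are complement-coded;
    a new category is created whenever no existing one passes the
    vigilance test rho."""
    weights, labels = [], []
    for x in inputs:
        I = x + [1.0 - v for v in x]            # complement coding
        # rank existing categories by the choice function
        order = sorted(range(len(weights)),
                       key=lambda j: -norm1(fuzzy_min(I, weights[j]))
                                      / (alpha + norm1(weights[j])))
        for j in order:
            match = norm1(fuzzy_min(I, weights[j])) / norm1(I)
            if match >= rho:                    # vigilance passed: resonance
                w = weights[j]
                weights[j] = [beta * m + (1 - beta) * wv
                              for m, wv in zip(fuzzy_min(I, w), w)]
                labels.append(j)
                break
        else:                                   # no resonance: new category
            weights.append(I[:])
            labels.append(len(weights) - 1)
    return labels, weights

labels, _ = fuzzy_art([[0.1, 0.1], [0.12, 0.1], [0.9, 0.9]])
```

Raising `rho` makes the vigilance test stricter, yielding more, narrower categories; this is the configurability knob the survey highlights.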
Automated construction of a hierarchy of self-organized neural network classifiers
This paper documents an effort to design and implement a neural network-based automatic classification system which dynamically constructs and trains a decision tree. The system is a combination of neural network and decision tree technology. The decision tree is constructed to partition a large classification problem into smaller problems, which the neural network modules then solve. We used a variant of the Fuzzy ARTMAP neural network which can be trained much more quickly than traditional neural networks. The research extends the concept of self-organization from within the neural network to the overall structure of the dynamically constructed decision hierarchy. The primary advantage is avoidance of manual tedium and subjective bias in constructing decision hierarchies. Additionally, removing the need for manual construction of the hierarchy opens up a large class of potential classification applications. When tested on data from real-world images, the automatically generated hierarchies performed slightly better than an intuitive (hand-built) hierarchy. Because the neural networks at the nodes of the decision hierarchy are solving smaller problems, generalization performance can be improved if the number of features used to solve these problems is reduced. Algorithms for automatically selecting which features to use for each individual classification module were also implemented. We were able to achieve the same level of performance as in previous manual efforts, but in an efficient, automatic manner. The technology developed has great potential in a number of commercial areas, including data mining, pattern recognition, and intelligent interfaces for personal computer applications. Sample applications include: fraud detection, bankruptcy prediction, a data-mining agent, a scalable object recognition system, an email agent, a resource librarian agent, and a decision aid agent.
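To illustrate the idea of automatically partitioning a large classification problem into a hierarchy of smaller ones, here is a hypothetical sketch (not the paper's Fuzzy-ARTMAP system; the centroid-based bisection rule and the example class names are assumptions): recursively split the class set around its two most distant class centroids, so each tree node poses a smaller problem for the module trained there.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_hierarchy(centroids):
    """Recursively bisect a set of classes (name -> centroid) around
    its two most mutually distant class centroids, returning a nested
    list: each internal node is a smaller classification problem."""
    names = sorted(centroids)
    if len(names) <= 2:
        return names
    # seeds: the two most mutually distant classes
    a, b = max(((x, y) for x in names for y in names if x < y),
               key=lambda p: dist(centroids[p[0]], centroids[p[1]]))
    left = {n: c for n, c in centroids.items()
            if dist(c, centroids[a]) <= dist(c, centroids[b])}
    right = {n: c for n, c in centroids.items() if n not in left}
    return [build_hierarchy(left), build_hierarchy(right)]

tree = build_hierarchy({"car": (0.0, 0.0), "truck": (0.2, 0.1),
                        "bird": (5.0, 5.0), "plane": (5.1, 4.8)})
```

The nested structure mirrors the paper's point: each node's classifier only needs to separate the classes within its own branch.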
Methods of Hierarchical Clustering
We survey agglomerative hierarchical clustering algorithms and discuss
efficient implementations that are available in R and other software
environments. We look at hierarchical self-organizing maps, and mixture models.
We review grid-based clustering, focusing on hierarchical density-based
approaches. Finally we describe a recently developed very efficient (linear
time) hierarchical clustering algorithm, which can also be viewed as a
hierarchical grid-based algorithm.
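The agglomerative scheme common to the algorithms surveyed is simple to state: start with one cluster per point and repeatedly merge the closest pair. A minimal single-linkage sketch (deliberately naive and cubic-time; efficient implementations are the survey's subject):

```python
def single_linkage(points):
    """Naive agglomerative clustering under single linkage: repeatedly
    merge the two clusters whose closest members are nearest, recording
    each merge -- the sequence that defines the dendrogram."""
    clusters = [[i] for i in range(len(points))]
    merges = []
    d = lambda i, j: sum((a - b) ** 2
                         for a, b in zip(points[i], points[j])) ** 0.5
    while len(clusters) > 1:
        # closest pair of clusters = minimum pairwise point distance
        x, y = min(((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
                   key=lambda p: min(d(i, j) for i in clusters[p[0]]
                                     for j in clusters[p[1]]))
        merges.append((clusters[x], clusters[y]))
        clusters[x] = clusters[x] + clusters[y]
        del clusters[y]
    return merges

merges = single_linkage([(0.0,), (0.1,), (5.0,), (5.2,)])
```

Swapping the inner `min` for `max` or a centroid distance gives complete or centroid linkage; the efficient R implementations the survey discusses compute the same merge sequence without the quadratic distance scans.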
Group invariance principles for causal generative models
The postulate of independence of cause and mechanism (ICM) has recently led
to several new causal discovery algorithms. The interpretation of independence
and the way it is utilized, however, varies across these methods. Our aim in
this paper is to propose a group theoretic framework for ICM to unify and
generalize these approaches. In our setting, the cause-mechanism relationship
is assessed by comparing it against a null hypothesis through the application
of random generic group transformations. We show that the group theoretic view
provides a very general tool to study the structure of data generating
mechanisms with direct applications to machine learning.
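The null-hypothesis idea can be caricatured as a randomization test: compare an observed dependence statistic between cause and mechanism against its distribution under random group transformations. A toy sketch, not the paper's group-theoretic framework (using random permutations as the generic transformations and absolute correlation as the statistic, both assumptions for illustration):

```python
import random

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def icm_pvalue(p_cause, mechanism, n_perm=999, seed=0):
    """Randomization test: how often does a randomly transformed
    (here: permuted) mechanism look as dependent on the cause
    distribution as the observed one? A small p-value suggests the
    cause-mechanism pair is atypical under the null, i.e. not
    independent."""
    rng = random.Random(seed)
    observed = abs(corr(p_cause, mechanism))
    hits = sum(abs(corr(p_cause, rng.sample(mechanism, len(mechanism))))
               >= observed
               for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)
```

In the paper's setting the transformation group and the comparison statistic are chosen with more care; this sketch only conveys the "compare against transformed nulls" shape of the test.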
Disordered semantic representation in schizophrenic temporal cortex revealed by neuromagnetic response patterns
BACKGROUND: Loosening of associations and thought disruption are key features of schizophrenic psychopathology. Alterations in neural networks underlying this basic abnormality have not yet been sufficiently identified. Previously, we demonstrated that spatio-temporal clustering of magnetic brain responses to pictorial stimuli maps categorical representations in temporal cortex. This result opened the possibility of quantifying associative strength within and across semantic categories in schizophrenic patients. We hypothesized that, in contrast to controls, schizophrenic patients exhibit disordered representations of semantic categories. METHODS: The spatio-temporal clusters of brain magnetic activities elicited by object pictures related to super-ordinate (flowers, animals, furniture, clothes) and base-level (e.g. tulip, rose, orchid, sunflower) categories were analysed in the source space for the time epochs 170–210 and 210–450 ms following stimulus onset and were compared between 10 schizophrenic patients and 10 control subjects. RESULTS: Spatio-temporal correlations of responses elicited by base-level concepts, and the difference of within- vs. across-super-ordinate-category correlations, were distinctly lower in patients than in controls. Additionally, in contrast to the well-defined categorical representation in control subjects, unsupervised clustering indicated poorly defined representation of semantic categories in patients. Within the patient group, distinctiveness of categorical representation in the temporal cortex was positively related to negative symptoms and tended to be inversely related to positive symptoms. CONCLUSION: Schizophrenic patients show a less organized representation of semantic categories in clusters of magnetic brain responses than healthy adults. This atypical neural network architecture may be a correlate of loosening of associations, promoting positive symptoms.
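The within- vs. across-category comparison in the RESULTS can be sketched as a simple pattern-correlation contrast. A toy version, not the authors' source-space pipeline (the stimulus names and response vectors below are invented for illustration):

```python
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def within_vs_across(patterns, category):
    """Average pairwise correlation of response patterns within a
    super-ordinate category minus the average across categories.
    A larger positive difference indicates better-separated
    categorical representations."""
    names = sorted(patterns)
    within, across = [], []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = corr(patterns[a], patterns[b])
            (within if category[a] == category[b] else across).append(r)
    return sum(within) / len(within) - sum(across) / len(across)

# invented response patterns: two flowers, two animals
patterns = {"tulip": [1, 0, 0, 1], "rose": [1, 0.1, 0, 0.9],
            "dog": [0, 1, 1, 0], "cat": [0.1, 1, 0.9, 0]}
category = {"tulip": "flower", "rose": "flower",
            "dog": "animal", "cat": "animal"}
diff = within_vs_across(patterns, category)
```

Under the paper's hypothesis, this difference would be distinctly smaller in patients than in controls.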
Information visualization for DNA microarray data analysis: A critical review
Graphical representation may provide effective means of making sense of the complexity and sheer volume of data produced by DNA microarray experiments that monitor the expression patterns of thousands of genes simultaneously. The ability to use "abstract" graphical representation to draw attention to areas of interest, and more in-depth visualizations to answer focused questions, would enable biologists to move from a large amount of data to particular records they are interested in, and thereby gain deeper insight into the microarray experiment results. This paper starts by providing some background knowledge of microarray experiments, and then explains how graphical representation can be applied in general to this problem domain, followed by exploring the role of visualization in gene expression data analysis. Having set the problem scene, the paper then examines various multivariate data visualization techniques that have been applied to microarray data analysis. These techniques are critically reviewed so that the strengths and weaknesses of each technique can be tabulated. Finally, several key problem areas as well as possible solutions to them are discussed as a source of future work.
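The workhorse visualization in this domain is the expression heatmap: genes as rows, conditions as columns, expression level as colour intensity. A toy text-mode rendering to convey the mapping (a minimal sketch, not any reviewed tool; the character ramp is an arbitrary choice):

```python
def ascii_heatmap(matrix, chars=" .:-=+*#%@"):
    """Toy text rendering of an expression heatmap: each row is a
    gene, each column a condition, and the expression value is
    linearly mapped onto a light-to-dark character ramp."""
    lo = min(v for row in matrix for v in row)
    hi = max(v for row in matrix for v in row)
    span = (hi - lo) or 1.0
    def scale(v):
        return chars[min(int((v - lo) / span * len(chars)), len(chars) - 1)]
    return [''.join(scale(v) for v in row) for row in matrix]

rows = ascii_heatmap([[0.0, 1.0], [0.5, 0.25]])
```

Real tools add the two ingredients the review discusses: colour scales suited to log-ratios, and row/column reordering from hierarchical clustering so co-expressed genes appear as blocks.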
Onion Curve: A Space Filling Curve with Near-Optimal Clustering
Space filling curves (SFCs) are widely used in the design of indexes for
spatial and temporal data. Clustering is a key metric for an SFC, that measures
how well the curve preserves locality in moving from higher dimensions to a
single dimension. We present the {\em onion curve}, an SFC whose clustering
performance is provably close to optimal for the cube and near-cube shaped
query sets, irrespective of the side length of the query. We show that in
contrast, the clustering performance of the widely used Hilbert curve can be
far from optimal, even for cube-shaped queries. Since the clustering
performance of an SFC is critical to the efficiency of multi-dimensional
indexes based on the SFC, the onion curve can deliver improved performance for
data structures involving multi-dimensional data. A short version of this paper is published in ICDE.
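The clustering metric in question counts how many contiguous runs of curve positions a query region touches: fewer runs mean fewer disk seeks for an SFC-based index. A small sketch comparing a row-major scan with the Z-order (Morton) curve on an axis-aligned query box (Z-order stands in here because it is compact to code; the onion and Hilbert curves would be evaluated the same way):

```python
def morton(x, y, bits=8):
    """Interleave the bits of (x, y) into a Z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def clustering(curve_index, query):
    """Number of contiguous runs of curve positions covered by the
    query cells -- the locality metric from the abstract."""
    idx = sorted(curve_index(x, y) for (x, y) in query)
    return 1 + sum(b - a > 1 for a, b in zip(idx, idx[1:]))

# a 4x4 query box in a 16x16 grid, aligned to a Z-order quadrant
box = [(x, y) for x in range(4, 8) for y in range(4, 8)]
row_major = lambda x, y: y * 16 + x
runs_row = clustering(row_major, box)   # one run per row of the box
runs_z = clustering(morton, box)        # aligned block -> one run
```

For this aligned box Z-order achieves a single run while the row-major scan needs one per row; the paper's point is that for general (unaligned, non-cube) queries even the Hilbert curve can be far from the optimum the onion curve approaches.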
Structure in the 3D Galaxy Distribution: I. Methods and Example Results
Three methods for detecting and characterizing structure in point data, such
as that generated by redshift surveys, are described: classification using
self-organizing maps, segmentation using Bayesian blocks, and density
estimation using adaptive kernels. The first two methods are new, and allow
detection and characterization of structures of arbitrary shape and at a wide
range of spatial scales. These methods should elucidate not only clusters, but
also the more distributed, wide-ranging filaments and sheets, and further allow
the possibility of detecting and characterizing an even broader class of
shapes. The methods are demonstrated and compared in application to three data
sets: a carefully selected volume-limited sample from the Sloan Digital Sky
Survey redshift data, a similarly selected sample from the Millennium
Simulation, and a set of points independently drawn from a uniform probability
distribution -- a so-called Poisson distribution. We demonstrate a few of the
many ways in which these methods elucidate large scale structure in the
distribution of galaxies in the nearby Universe. (ApJ, in press. Full-sized figures are available at http://astrophysics.arc.nasa.gov/~mway/lss1.pd)
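Of the three methods, adaptive kernel density estimation is the most compact to sketch: each galaxy gets its own bandwidth set by the distance to its k-th nearest neighbour, so the kernel widens in voids and narrows in clusters. A one-dimensional toy sketch, not the paper's implementation (the Gaussian kernel and the k-NN bandwidth rule are standard choices assumed here):

```python
import math

def adaptive_kde(data, query, k=2):
    """Adaptive KDE sketch: per-sample Gaussian bandwidths from the
    k-th nearest-neighbour distance, evaluated at the query points."""
    def kth_nn(x):
        return sorted(abs(x - y) for y in data if y != x)[k - 1]
    bands = [max(kth_nn(x), 1e-6) for x in data]
    def density(q):
        return sum(math.exp(-0.5 * ((q - x) / h) ** 2)
                   / (h * math.sqrt(2 * math.pi))
                   for x, h in zip(data, bands)) / len(data)
    return [density(q) for q in query]

# two tight 1-D "clusters" separated by a void
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
dens = adaptive_kde(data, [0.1, 2.5])
```

The density is high inside a cluster and near zero in the void between them, which is exactly the contrast the method exploits to delineate clusters, filaments, and sheets in 3-D.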