Artificial Stupidity: A Reply
Murphy, Koehler, and Fogler [1997] gave, in the last issue of the Journal of Portfolio Management, an account of how to raise a neural net’s IQ. The purpose of this reply is to point out some of the general difficulties with neural nets. I would also like to mention an alternative method, namely Padé approximants, which does not suffer from these difficulties.

Keywords: Artificial; Stupidity; Neural Networks
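To make the alternative concrete: a Padé approximant replaces a truncated power series with a ratio of two polynomials, and its coefficients are determined exactly by a small linear system rather than by iterative training. The following minimal sketch (not from the reply itself) builds the [2/2] Padé approximant of exp(x) from its Taylor coefficients with SciPy; the choice of exp(x) and of the [2/2] order is purely illustrative.

    # Minimal sketch: a [2/2] Pade approximant of exp(x) built from its
    # Taylor coefficients. exp(x) and the order are illustrative choices.
    from math import factorial
    import numpy as np
    from scipy.interpolate import pade

    taylor = [1.0 / factorial(k) for k in range(5)]  # 1, 1, 1/2!, 1/3!, 1/4!
    p, q = pade(taylor, 2)        # numerator p and denominator q (poly1d)

    x = 1.0
    print(f"Pade [2/2] at x=1: {p(x) / q(x):.6f}  vs exp(1) = {np.e:.6f}")

Unlike a fitted network, nothing here depends on initialization or a training procedure, which is the kind of difficulty the reply has in mind.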
Brain network dynamics in high-functioning individuals with autism
Theoretically, autism should be underpinned by aberrant brain dynamics. However, how brain activity changes over time in individuals with autism spectrum disorder (ASD) remains unknown. Here we characterize brain dynamics in autism using an energy-landscape analysis applied to resting-state fMRI data. Whereas neurotypical brain activity frequently transits between two major brain states via an intermediate state, high-functioning adults with ASD show fewer neural transitions due to an unstable intermediate state, and these infrequent transitions predict the severity of autism. Moreover, in contrast to the controls, whose IQ is correlated with the neural transition frequency, IQ scores of individuals with ASD are instead predicted by the stability of their brain dynamics. Finally, such brain-behaviour associations are related to functional segregation between brain networks. These findings suggest that atypical functional coordination in the brains of adults with ASD underpins overly stable neural dynamics, which supports both their ASD symptoms and cognitive abilities.
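For readers unfamiliar with the method, an energy-landscape analysis of this kind typically fits a pairwise maximum entropy (Ising) model to binarized regional activity and then identifies local minima of the fitted energy as the "brain states" between which activity transits. The sketch below illustrates only that energy/local-minimum step; the fields h and couplings J are random placeholders standing in for parameters that would be fitted to fMRI data, and this is not the authors' code.

    # Minimal sketch of the local-minimum search in an energy-landscape
    # analysis. h and J are random placeholders for parameters that would
    # be fitted to binarized fMRI data via a pairwise maximum entropy model.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    N = 5                                  # number of ROIs (small enough to enumerate)
    h = rng.normal(size=N)                 # placeholder local fields
    J = rng.normal(size=(N, N))
    J = (J + J.T) / 2                      # symmetric couplings
    np.fill_diagonal(J, 0)

    def energy(s):
        # E(s) = -h.s - (1/2) s'Js for a state s in {-1, +1}^N
        return -h @ s - 0.5 * s @ J @ s

    minima = []
    for bits in itertools.product([-1, 1], repeat=N):
        s = np.array(bits)
        # a local minimum has lower energy than every single-flip neighbour
        neighbour_energies = []
        for i in range(N):
            t = s.copy()
            t[i] *= -1
            neighbour_energies.append(energy(t))
        if energy(s) < min(neighbour_energies):
            minima.append((s, energy(s)))

    print(f"{len(minima)} local minima (candidate brain states) found")

Transition frequencies between such minima, estimated from the observed state sequence, are the quantities the abstract relates to ASD severity and IQ.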
Cognitive Reserve and Alzheimer Disease
Epidemiologic evidence suggests that individuals with higher IQ, education, occupational attainment, or participation in leisure activities have a reduced risk of developing Alzheimer disease (AD). The concept of cognitive reserve (CR) posits that individual differences in how tasks are processed provide differential reserve against brain pathology or age-related changes. This reserve may take two forms. In neural reserve, preexisting brain networks that are more efficient or have greater capacity may be less susceptible to disruption. In neural compensation, alternate networks may compensate for the disruption of preexisting networks by pathology. Imaging studies have begun to identify the neural substrate of CR. Because CR may modulate the clinical expression of AD pathology, it is an important consideration in studies of "preclinical" AD and in treatment studies. There is also the possibility that directly enhancing CR may help forestall the diagnosis of AD.
Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work has explained in theory that, with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing generalization bounds is that they are based on uniform convergence and are independent of the training algorithm. In this work, we derive an algorithm-dependent generalization error bound for deep ReLU networks, and show that under certain assumptions on the data distribution, gradient descent (GD) with proper random initialization is able to train a sufficiently over-parameterized DNN to achieve arbitrarily small generalization error. Our work sheds light on explaining the good generalization performance of over-parameterized deep neural networks.

Comment: 27 pages. This version simplifies the proof and improves the presentation of Version 3. In AAAI 2020.
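As a toy illustration of the regime the abstract describes (and not the paper's construction or settings), the sketch below trains a heavily over-parameterized two-layer ReLU network with plain gradient descent to fit randomly labeled data. The width m, the step size, the fixed ±1 output layer, and the 1/sqrt(m) output scaling are illustrative choices common in this literature.

    # Toy sketch: plain gradient descent on an over-parameterized two-layer
    # ReLU network fitting random labels. Width, step size, and the
    # 1/sqrt(m) output scaling are illustrative, not the paper's settings.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 50, 10, 2000                 # n samples, input dim d, width m >> n

    X = rng.normal(size=(n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
    y = rng.choice([-1.0, 1.0], size=n)             # random labels

    W = rng.normal(size=(m, d))            # trained first layer (random init)
    v = rng.choice([-1.0, 1.0], size=m)    # fixed output layer

    lr = 1.0
    for step in range(2001):
        Z = X @ W.T                        # pre-activations, shape (n, m)
        f = np.maximum(Z, 0.0) @ v / np.sqrt(m)
        resid = f - y
        loss = 0.5 * np.mean(resid ** 2)
        # gradient of the square loss with respect to W only
        grad_W = ((resid[:, None] * (Z > 0.0)) * v[None, :]).T @ X / (n * np.sqrt(m))
        W -= lr * grad_W
        if step % 500 == 0:
            print(f"step {step:4d}  training loss {loss:.6f}")

With m much larger than n, the training loss is driven toward zero even though the labels are pure noise, which is exactly the over-parameterized interpolation behaviour that uniform-convergence bounds fail to explain and that motivates the algorithm-dependent analysis above.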