Connectionism, Analogicity and Mental Content
In Connectionism and the Philosophy of Psychology, Horgan and Tienson (1996) argue that cognitive
processes, pace classicism, are not governed by exceptionless, representation-level rules; they
are instead the work of defeasible cognitive tendencies subserved by the non-linear dynamics of
the brain's neural networks. Many theorists are sympathetic to the dynamical characterisation
of connectionism and the general (re)conception of cognition that it affords. But in all the
excitement surrounding the connectionist revolution in cognitive science, it has largely gone
unnoticed that connectionism adds, to the traditional focus on computational processes, a new
focus: one on the vehicles of mental representation, on the entities that carry content through the
mind. Indeed, if Horgan and Tienson's dynamical characterisation of connectionism is on the
right track, then so intimate is the relationship between computational processes and
representational vehicles that connectionist cognitive science is committed to a resemblance
theory of mental content.
Neural scaling laws for an uncertain world
Autonomous neural systems must efficiently process information in a wide
range of novel environments, which may have very different statistical
properties. We consider the problem of how to optimally distribute receptors
along a one-dimensional continuum consistent with the following design
principles. First, neural representations of the world should obey a neural
uncertainty principle: making as few assumptions as possible about the
statistical structure of the world. Second, neural representations should
convey, as much as possible, equivalent information about environments with
different statistics. The results of these arguments resemble the structure of
the visual system and provide a natural explanation of the behavioral
Weber-Fechner law, a foundational result in psychology. The derivation is extremely general,
suggesting that similar scaling relationships should be observed not only in sensory continua,
but also in neural representations of "cognitive" one-dimensional quantities such as time or
numerosity.
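As a toy illustration of the kind of scaling the abstract describes (an assumption-laden sketch, not the paper's derivation: the geometric layout and the values of `x_min` and `r` are invented here), geometrically spaced receptors along a one-dimensional continuum yield a constant Weber fraction and a scale-invariant, logarithmic representation:

```python
import numpy as np

# Hypothetical layout: receptor centers placed geometrically along a 1-D
# continuum, x_n = x_min * r**n, so neighbours differ by a constant *ratio*.
x_min, r, n_receptors = 0.1, 1.2, 50
centers = x_min * r ** np.arange(n_receptors)

# Weber-Fechner signature: spacing between adjacent receptors grows linearly
# with magnitude, so the discriminable step Delta-x / x is constant (= r - 1).
weber_fractions = np.diff(centers) / centers[:-1]

def receptor_index(x):
    """Position of stimulus x in receptor coordinates: logarithmic in x."""
    return np.log(x / x_min) / np.log(r)
```

Under this layout, rescaling every stimulus by a common factor c merely shifts the representation by log(c)/log(r) receptor indices, so environments with different overall scales are encoded with equivalent resolution, in the spirit of the paper's second design principle.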
Does the number sense represent number?
On a now orthodox view, humans and many other animals are endowed with a “number sense”, or approximate number system (ANS), that represents number. Recently, this orthodox view has been subject to numerous critiques, with critics maintaining either that numerical content is absent altogether, or else that some primitive analog of number (‘numerosity’) is represented as opposed to number itself. We distinguish three arguments for these claims – the arguments from congruency, confounds, and imprecision – and show that none succeed. We then highlight positive reasons for thinking that the ANS genuinely represents numbers. The upshot is that proponents of the orthodox view should not feel troubled by recent critiques of their position.
Sequential presentation protects working memory from catastrophic interference
Neural network models of memory are notorious for catastrophic interference: old items are forgotten as new items are memorized (e.g., French, 1999; McCloskey & Cohen, 1989). While Working Memory (WM) in human adults shows severe capacity limitations, these capacity limitations do not reflect neural-network-style catastrophic interference. However, our ability to quickly apprehend the numerosity of small sets of objects (i.e., subitizing) does show catastrophic capacity limitations, and this subitizing capacity and WM might reflect a common capacity. Accordingly, computational investigations (Knops, Piazza, Sengupta, Eger, & Melcher, 2014; Sengupta, Surampudi, & Melcher, 2014) suggest that mutual inhibition among neurons can explain both kinds of capacity limitations as well as why our ability to estimate the numerosity of larger sets is limited according to a Weber ratio signature. Based on simulations with a saliency-map-like network and mathematical proofs, we provide three results. First, mutual inhibition among neurons leads to catastrophic interference when items are presented simultaneously. The network can remember a limited number of items, but when more items are presented, the network forgets all of them. Second, if memory items are presented sequentially rather than simultaneously, the network remembers the most recent items rather than forgetting all of them. Hence, the tendency in WM tasks to sequentially attend even to simultaneously presented items might reflect not only attentional limitations but also an adaptive strategy to avoid catastrophic interference. Third, the mean activation level in the network can be used to estimate the number of items in small sets, but does not accurately reflect the number of items in larger sets. Rather, we suggest that the Weber ratio signature of large number discrimination emerges naturally from the interaction between the limited precision of a numeric estimation system and a multiplicative gain control mechanism.
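The first two results can be illustrated with a minimal mutual-inhibition network (a sketch under assumed parameters, not the authors' saliency-map model; `W_SELF`, `W_INH`, and the presentation schedule are all illustrative choices):

```python
import numpy as np

W_SELF, W_INH = 1.6, 0.3   # illustrative self-excitation and mutual-inhibition weights

def step(a, inputs):
    """One synchronous update of a mutual-inhibition network; activations in [0, 1]."""
    inhibition = W_INH * (a.sum() - a)          # inhibition from all *other* units
    return np.clip(W_SELF * a - inhibition + inputs, 0.0, 1.0)

def present_simultaneous(n_items, n_units=8, t_on=5, t_delay=30):
    """Drive the first n_items units at once, then let the network run freely."""
    a = np.zeros(n_units)
    drive = np.zeros(n_units)
    drive[:n_items] = 1.0
    for _ in range(t_on):
        a = step(a, drive)
    for _ in range(t_delay):
        a = step(a, np.zeros(n_units))
    return a

def present_sequential(n_items, n_units=8, t_on=3, t_delay=30):
    """Drive one unit at a time, then let the network run freely."""
    a = np.zeros(n_units)
    for k in range(n_items):
        drive = np.zeros(n_units)
        drive[k] = 1.0
        for _ in range(t_on):
            a = step(a, drive)
    for _ in range(t_delay):
        a = step(a, np.zeros(n_units))
    return a
```

With these parameters, up to three simultaneously presented items persist at ceiling, but five simultaneous items all collapse to zero during the delay (catastrophic interference), whereas sequential presentation leaves the most recent item active rather than forgetting everything, in line with the abstract's first two results.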
Adversarial attacks hidden in plain sight
Convolutional neural networks have been used to achieve a string of successes
during recent years, but their lack of interpretability remains a serious
issue. Adversarial examples are designed to deliberately fool neural networks
into making any desired incorrect classification, potentially with very high
certainty. Several defensive approaches increase robustness against adversarial
attacks, demanding attacks of greater magnitude, which lead to visible
artifacts. By considering human visual perception, we propose a technique that
hides such adversarial attacks in regions of high complexity, such
that they are imperceptible even to an astute observer. We carry out a user
study on classifying adversarially modified images to validate the perceptual
quality of our approach and find significant evidence for its concealment with
regard to human visual perception.
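The core idea can be sketched as modulating a perturbation by a local-complexity map, so that changes concentrate in textured regions where the eye tolerates them. This is an illustrative sketch, not the authors' method: the gradient-magnitude complexity measure, the sign-based perturbation, and `eps` are all assumptions made here.

```python
import numpy as np

def complexity_map(img):
    """Crude local-complexity proxy: image gradient magnitude, scaled to [0, 1]."""
    gy, gx = np.gradient(img)
    c = np.hypot(gx, gy)
    return c / c.max() if c.max() > 0 else c

def hide_perturbation(img, perturbation, eps=0.05):
    """Scale a perturbation by local complexity, so flat regions stay (nearly) untouched."""
    delta = eps * np.sign(perturbation) * complexity_map(img)
    return np.clip(img + delta, 0.0, 1.0), delta

# Toy image: flat left half, textured right half.
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[:, 8:] = rng.random((16, 8))
attacked, delta = hide_perturbation(img, rng.standard_normal(img.shape))
```

On this toy image the flat left half receives essentially no perturbation while the textured right half absorbs all of it; a real attack would additionally optimize `perturbation` against a target classifier, which is omitted here.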