
    Similar exemplar pooling processes underlie the learning of facial identity and handwriting style: Evidence from typical observers and individuals with Autism

    Considerable research has addressed whether the cognitive and neural representations recruited by faces are similar to those engaged by other types of visual stimuli. For example, research has examined the extent to which objects of expertise recruit holistic representation and engage the fusiform face area. Little is known, however, about the domain-specificity of the exemplar pooling processes thought to underlie the acquisition of familiarity with particular facial identities. In the present study we sought to compare observers’ ability to learn facial identities and handwriting styles from exposure to multiple exemplars. Crucially, while handwritten words and faces differ considerably in their topographic form, both learning tasks share a common exemplar pooling component. In our first experiment, we find that typical observers’ ability to learn facial identities and handwriting styles from exposure to multiple exemplars correlates closely. In our second experiment, we show that observers with autism spectrum disorder (ASD) are impaired at both learning tasks. Our findings suggest that similar exemplar pooling processes are recruited when learning facial identities and handwriting styles. Models of exemplar pooling, originally developed to explain face learning, may therefore offer valuable insights into exemplar pooling across a range of domains extending beyond faces. Aberrant exemplar pooling, possibly resulting from structural differences in the inferior longitudinal fasciculus, may underlie the difficulties recognising familiar faces often experienced by individuals with ASD, and leave observers overly reliant on local details present in particular exemplars.
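    Exemplar pooling here refers to averaging variable exemplars of an identity into one stable representation. As a minimal illustrative sketch (not the authors’ experimental task or a cognitive model; the 16-dimensional feature vectors and the identities "A"/"B" are hypothetical), pooling can be shown as averaging exemplar feature vectors into a per-identity prototype and matching a new exemplar to the nearest prototype:

```python
import numpy as np

def pool_exemplars(exemplars):
    """Pool several exemplar feature vectors into one identity prototype (their mean)."""
    return np.mean(exemplars, axis=0)

def identify(probe, prototypes):
    """Assign a probe exemplar to the identity whose pooled prototype is closest."""
    names = list(prototypes)
    distances = [np.linalg.norm(probe - prototypes[n]) for n in names]
    return names[int(np.argmin(distances))]

rng = np.random.default_rng(0)

# Hypothetical identities: each has a true "appearance" vector, and every exemplar
# (face photo or handwritten word) is a noisy sample around it.
true_identity = {"A": rng.normal(size=16), "B": rng.normal(size=16)}
prototypes = {
    name: pool_exemplars([centre + 0.5 * rng.normal(size=16) for _ in range(10)])
    for name, centre in true_identity.items()
}

# A new exemplar of identity "A" should be matched to the pooled prototype of "A".
probe = true_identity["A"] + 0.5 * rng.normal(size=16)
print(identify(probe, prototypes))  # expected: "A"
```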

    Gamma oscillations as integrators of local competition for activity and global competition for coherence

    Poster presentation. Introduction: Rhythmic synchronization of neural activity in the gamma-frequency range (30–100 Hz) has been observed in many brain regions; see the review in [1]. The functional relevance of these oscillations remains to be clarified, a task that requires modeling of the relevant aspects of information processing. The temporal correlation hypothesis, reviewed in [2], proposes that the temporal correlation of neural units provides a means to group them into so-called neural assemblies that are supposed to represent mental objects. Here, we approach the modeling of the temporal grouping of neural units from the perspective of oscillatory neural network systems based on phase-model oscillators. Patterns are assumed to be stored in the network via Hebbian memory, and assemblies are identified with phase-locked subsets of these patterns. Going beyond previous discussions, we demonstrate the combination of two recently discussed mechanisms, referred to as "acceleration" [3] and "pooling" [4]. The combination realizes, in a complementary manner, a competition for activity on a local scale while providing a competition for coherence among different assemblies on a non-local scale. …
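    As a minimal sketch of the kind of phase-model network the abstract describes (assumed details: Kuramoto-style sine coupling, Hebbian weights built from stored binary patterns, Euler integration; this is not the specific "acceleration"/"pooling" model of [3, 4]), one can store patterns in the couplings and check that a cued assembly phase-locks:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 40, 2                              # oscillators and stored binary patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian coupling: units that agree across stored patterns tend to phase-lock,
# units that disagree tend to settle half a cycle apart.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

omega = 2 * np.pi * 40.0                  # common gamma-band (40 Hz) frequency
K, dt = 5.0, 1e-3

# Cue the network with a noisy version of the first pattern (phase 0 or pi).
theta = np.pi * (patterns[0] < 0) + 0.8 * rng.normal(size=N)

for _ in range(2000):                     # Euler integration of the phase dynamics
    coupling = (J * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)

# Coherence of the cued assembly: the Kuramoto order parameter of its phases,
# after flipping the units that should sit in anti-phase (-1 entries).
aligned = theta + np.pi * (patterns[0] < 0)
print(f"assembly coherence: {abs(np.mean(np.exp(1j * aligned))):.2f}")  # ~1.0 when locked
```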

    Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms

    In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text t^x. In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from its local context, but also from nonlocal context extracted by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text t^x that are distant or (ii) from extra (i.e., external) contexts t^y. Experiments on sentence modeling with zero context (sentiment analysis), a single context (textual entailment) and multiple contexts (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs. (Camera-ready for TACL; 16 pages.)
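    A rough way to see the distinction is the following toy numpy sketch (assumed shapes, random weights and a mean-pooled query are hypothetical; this is not the actual ATTCONV architecture or its gating): attentive pooling applies attention after convolution to weight hidden states, whereas attentive convolution feeds an attention-derived context vector into the convolution at each position.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_x, n_y = 8, 6, 5               # hidden size, lengths of t^x and extra context t^y
tx = rng.normal(size=(n_x, d))      # word representations of the input text t^x
ty = rng.normal(size=(n_y, d))      # word representations of an external context t^y

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Attentive pooling: convolve first, then let attention weight the resulting
# hidden states when collapsing them into one sentence vector.
W_conv = rng.normal(size=(3 * d, d)) * 0.1        # filter over a width-3 word window
padded = np.vstack([np.zeros((1, d)), tx, np.zeros((1, d))])
hidden = np.tanh(np.stack([padded[i:i + 3].reshape(-1) @ W_conv for i in range(n_x)]))
query = ty.mean(axis=0)                           # hypothetical query from the context
pooled = softmax(hidden @ query) @ hidden         # attention applied to pooling

# Attentive convolution: attention runs before/inside convolution, so each
# position's convolution also sees a context vector attended for that word.
W_attconv = rng.normal(size=(4 * d, d)) * 0.1     # local window plus attended context
attended = np.stack([softmax(x @ ty.T) @ ty for x in tx])   # per-word nonlocal context
hidden2 = np.tanh(np.stack([
    np.concatenate([padded[i:i + 3].reshape(-1), attended[i]]) @ W_attconv
    for i in range(n_x)
]))
sentence = hidden2.max(axis=0)                    # ordinary max pooling afterwards

print(pooled.shape, sentence.shape)               # both (d,)
```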