
    The Sync-Fire/deSync Model: modelling the reactivation of dynamic memories from cortical alpha oscillations

    We propose a neural network model to explore how humans can learn and accurately retrieve temporal sequences, such as melodies, movies, or other dynamic content. We identify target memories by their neural oscillatory signatures, as shown in recent human episodic memory paradigms. Our model comprises three plausible components for the binding of temporal content, where each component imposes unique limitations on the encoding and representation of that content. A cortical component actively represents sequences through the disruption of an intrinsically generated alpha rhythm, where a desynchronisation marks information-rich operations, as the literature predicts. A binding component converts each event into a discrete index, enabling repetitions through a sparse encoding of events. A timing component – consisting of an oscillatory “ticking clock” made up of hierarchical synfire chains – discretely indexes a moment in time. By encoding the absolute timing between discretised events, we show how one can use cortical desynchronisations to dynamically detect unique temporal signatures as they are reactivated in the brain. We validate this model by simulating a series of events where sequences are uniquely identifiable by analysing phasic information, as several recent EEG/MEG studies have shown. As such, we show how one can encode and retrieve complete episodic memories, where the quality of such memories is modulated by the following: alpha gatekeepers to content representation; binding limitations that induce a blink in temporal perception; and nested oscillations that provide preferential learning phases in order to temporally sequence events.
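    The “ticking clock” idea above can be illustrated with a toy single synfire chain: a packet of activity hops from pool to pool at a fixed rate, so the identity of the currently active pool discretely indexes elapsed time. This is a minimal sketch of the synfire principle only, not the paper's hierarchical model; all names here are ours.

    ```python
    import numpy as np

    def synfire_clock(n_pools, steps):
        """Toy synfire chain acting as a discrete clock: one packet of
        activity advances one pool per time step, so the index of the
        active pool marks the current moment (modulo chain length)."""
        active = np.zeros((steps, n_pools), dtype=int)
        current = 0
        for t in range(steps):
            active[t, current] = 1              # this pool "ticks" at step t
            current = (current + 1) % n_pools   # packet moves to the next pool
        return active

    ticks = synfire_clock(n_pools=5, steps=12)
    ```

    In the full model, chains at several hierarchical levels would nest (like second/minute/hour hands) so that longer intervals can be indexed without an impractically long single chain.
    
    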

    An Adaptive Hierarchical Model of the Ventral Visual Pathway Implemented on a Mobile Robot

    The implementation of an adaptive visual system founded on the detection of spatio-temporal invariances is described. It is a layered system inspired by the hierarchical processing in the mammalian ventral visual pathway, and models retinal, early cortical and infero-temporal components. A representation of scenes in terms of slowly varying spatio-temporal signatures is discovered through maximising a measure of temporal predictability. This supports categorisation of the environment by a set of view cells (view-trained units or VTUs [1]) that demonstrate substantial invariance to transformations of viewpoint and scale.
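    A standard linear instantiation of the temporal-predictability (slowness) objective mentioned above is slow feature analysis: find the unit-variance projection of the signal whose temporal derivative has least variance. The sketch below is our illustration of that generic objective, not necessarily the exact measure used in the paper.

    ```python
    import numpy as np

    def slowest_component(X):
        """Linear slow-feature extraction. Solves the generalised
        eigenproblem  A w = lambda B w  with A = cov(dX/dt) and
        B = cov(X); the eigenvector for the smallest eigenvalue is the
        slowest (most temporally predictable) direction."""
        X = X - X.mean(axis=0)
        dX = np.diff(X, axis=0)                 # discrete temporal derivative
        A = dX.T @ dX / len(dX)
        B = X.T @ X / len(X)
        # whiten with B, then solve an ordinary symmetric eigenproblem
        evals_B, evecs_B = np.linalg.eigh(B)
        W = evecs_B / np.sqrt(evals_B)          # whitening matrix: W.T B W = I
        evals, evecs = np.linalg.eigh(W.T @ A @ W)
        w = W @ evecs[:, 0]                     # smallest eigenvalue = slowest
        return w / np.linalg.norm(w)

    # a slowly varying sine mixed with a fast one: the slow direction is recovered
    t = np.linspace(0, 20, 2000)
    slow, fast = np.sin(0.5 * t), np.sin(40 * t)
    X = np.column_stack([slow + 0.1 * fast, fast])
    w = slowest_component(X)
    ```

    The recovered direction approximately cancels the fast component (w is close to (1, -0.1) up to sign and normalisation), which is the sense in which slowness maximisation discovers invariant signatures.
    
    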

    A quantitative link between face discrimination deficits and neuronal selectivity for faces in autism

    Individuals with Autism Spectrum Disorder (ASD) appear to show a general face discrimination deficit across a range of tasks, including social–emotional judgments as well as identification and discrimination. However, functional magnetic resonance imaging (fMRI) studies probing the neural bases of these behavioral differences have produced conflicting results: while some studies have reported reduced or no activity to faces in ASD in the Fusiform Face Area (FFA), a key region in human face processing, others have suggested more typical activation levels, possibly reflecting limitations of conventional fMRI techniques in characterizing neuron-level processing. Here, we test the hypotheses that face discrimination abilities are highly heterogeneous in ASD and are mediated by FFA neurons, with differences in face discrimination abilities being quantitatively linked to variations in the estimated selectivity of face neurons in the FFA. Behavioral results revealed a wide distribution of face discrimination performance in ASD, ranging from typical performance to chance-level performance. Despite this heterogeneity in perceptual abilities, individual face discrimination performance was well predicted by neural selectivity to faces in the FFA, estimated via both a novel analysis of local voxel-wise correlations and the more commonly used fMRI rapid adaptation technique. Thus, face processing in ASD appears to rely on the FFA as in typical individuals, differing quantitatively but not qualitatively. These results mechanistically link, for the first time, variations in the ASD phenotype to specific differences in the typical face processing circuit, identifying promising targets for interventions.
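    The fMRI rapid-adaptation technique mentioned above rests on a simple logic: a face-selective population responds less to a repeated (same) face than to a changed (different) face, and the size of this release from adaptation indexes neural selectivity. The sketch below uses a common normalised-difference convention with hypothetical response values; it is an illustration of the general method, not the paper's exact analysis.

    ```python
    import numpy as np

    def adaptation_index(resp_same, resp_diff):
        """Release-from-adaptation index: (different - same) / different,
        computed on mean BOLD responses. Higher values indicate sharper
        selectivity, since a selective population adapts more strongly
        to repeats of the same face."""
        mean_same = np.mean(resp_same)
        mean_diff = np.mean(resp_diff)
        return (mean_diff - mean_same) / mean_diff

    # hypothetical trial-wise FFA responses (arbitrary units)
    ai = adaptation_index(resp_same=[1.0, 1.1, 0.9], resp_diff=[2.0, 2.1, 1.9])
    ```

    Under this convention, an index near zero would suggest little repetition suppression (weak selectivity), which is the per-participant quantity one could then correlate with behavioral face discrimination performance.
    
    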