
    Modelling peri-perceptual brain processes in a deep learning spiking neural network architecture

    Familiarity of marketing stimuli may affect consumer behaviour at a peri-perceptual processing level. The current study introduces a method for deep learning of electroencephalogram (EEG) data using a spiking neural network (SNN) approach that reveals the complexity of peri-perceptual processes of familiarity. The method is applied to data from 20 participants viewing familiar and unfamiliar logos. The results support the potential of SNN models as novel tools in the exploration of peri-perceptual mechanisms that respond differentially to familiar and unfamiliar stimuli. Specifically, the activation pattern of the time-locked response identified by the proposed SNN model at approximately 200 milliseconds post-stimulus suggests greater connectivity and more widespread dynamic spatio-temporal patterns for familiar than for unfamiliar logos. The proposed SNN approach can be applied to study other peri-perceptual or perceptual brain processes in cognitive and computational neuroscience.
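A typical first step in SNN pipelines like the one described is converting the continuous EEG signal into spike trains. The sketch below is a minimal, hypothetical illustration of threshold-based temporal (address-event) encoding; the function name, threshold value, and signal are invented for demonstration and are not taken from the study.

```python
# Hypothetical sketch: threshold-based temporal encoding of one EEG
# channel into a spike train. A spike is emitted whenever the signal
# changes by more than `threshold` between consecutive samples.

def encode_spikes(signal, threshold=0.5):
    """Emit +1 (onset) / -1 (offset) / 0 events per sample transition."""
    spikes = []
    for prev, curr in zip(signal, signal[1:]):
        delta = curr - prev
        if delta > threshold:
            spikes.append(1)    # positive spike: sharp increase
        elif delta < -threshold:
            spikes.append(-1)   # negative spike: sharp decrease
        else:
            spikes.append(0)    # sub-threshold change: no spike
    return spikes

eeg = [0.0, 0.2, 1.0, 1.1, 0.3, 0.2]  # invented sample values
print(encode_spikes(eeg))  # → [0, 1, 0, -1, 0]
```

The resulting event stream, rather than the raw amplitudes, is what drives the spiking neurons downstream.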

    The Timing of Vision – How Neural Processing Links to Different Temporal Dynamics

    In this review, we describe our recent attempts to model the neural correlates of visual perception with biologically inspired networks of spiking neurons, emphasizing the dynamical aspects. Experimental evidence suggests distinct processing modes depending on the type of task the visual system is engaged in. A first mode, crucial for object recognition, deals with rapidly extracting the glimpse of a visual scene in the first 100 ms after its presentation. The promptness of this process points to mainly feedforward processing, which relies on latency coding, and may be shaped by spike timing-dependent plasticity (STDP). Our simulations confirm the plausibility and efficiency of such a scheme. A second mode can be engaged whenever one needs to perform finer perceptual discrimination through evidence accumulation on the order of 400 ms and above. Here, our simulations, together with theoretical considerations, show how predominantly local recurrent connections and long neural time-constants enable the integration and build-up of firing rates on this timescale. In particular, we review how a non-linear model with attractor states induced by strong recurrent connectivity provides straightforward explanations for several recent experimental observations. A third mode, involving additional top-down attentional signals, is relevant for more complex visual scene processing. In the model, as in the brain, these top-down attentional signals shape visual processing by biasing the competition between different pools of neurons. The winning pools may not only have a higher firing rate, but also more synchronous oscillatory activity. This fourth mode, oscillatory activity, leads to faster reaction times and enhanced information transfers in the model. This has indeed been observed experimentally. Moreover, oscillatory activity can format spike times and encode information in the spike phases with respect to the oscillatory cycle. 
This phenomenon is referred to as “phase-of-firing coding,” and experimental evidence for it is accumulating in the visual system. Simulations show that this code can again be efficiently decoded by STDP. Future work should focus on continuous natural vision, bio-inspired hardware vision systems, and novel experimental paradigms to further distinguish current modeling approaches.
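The STDP rule invoked throughout the review can be summarized in a few lines. This is a generic pair-based formulation, not the specific parameterization used in the authors' simulations; the learning rates and time constant below are illustrative placeholder values.

```python
import math

# Illustrative pair-based STDP: a synapse is potentiated when the
# presynaptic spike precedes the postsynaptic one (LTP), and
# depressed otherwise (LTD), with an exponential dependence on the
# spike-time difference.

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for dt = t_post - t_pre in milliseconds."""
    if dt > 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post before (or with) pre: depression
        return -a_minus * math.exp(dt / tau)

print(stdp_dw(5.0) > 0)    # pre leads post → weight increases
print(stdp_dw(-5.0) < 0)   # post leads pre → weight decreases
```

Because the update depends only on relative spike timing, the same rule can read out both latency codes and phase-of-firing codes, which is why it recurs across the processing modes discussed above.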

    Online multiclass EEG feature extraction and recognition using modified convolutional neural network method

    Many techniques have been introduced to improve both brain-computer interface (BCI) steps: feature extraction and classification. One of the emerging trends in this field is the implementation of deep learning algorithms. Only a limited number of studies have investigated the application of deep learning techniques to electroencephalography (EEG) feature extraction and classification. This work applies deep learning to both stages: feature extraction and classification. This paper proposes a modified convolutional neural network (CNN) feature-extractor-classifier algorithm to recognize four different EEG motor imagery (MI) classes. In addition, a four-class linear discriminant analysis (LDA) classifier model was built and compared to the proposed CNN model. The paper reports very good results, with 92.8% accuracy for one four-class EEG MI set and 85.7% for another set. The results showed that the proposed CNN model outperforms multi-class linear discriminant analysis, with accuracy increases of 28.6% and 17.9% for the two MI sets, respectively. Moreover, it has been shown that majority voting over five repetitions introduced an accuracy advantage of 15% and 17.2% for the two EEG sets, compared with single trials. This confirms that increasing the number of trials for the same MI gesture improves the recognition accuracy.
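The majority-voting step described above is straightforward to sketch: the predicted class from each repetition of the same gesture is pooled into one decision. The class labels below are illustrative, not the labels used in the paper.

```python
from collections import Counter

# Sketch of majority voting over repeated trials of one MI gesture:
# the final decision is the most frequent single-trial prediction.

def majority_vote(predictions):
    """Return the most common predicted class label.

    For tied counts, Counter.most_common returns the first-seen label.
    """
    return Counter(predictions).most_common(1)[0][0]

# Five single-trial predictions for one motor-imagery gesture (invented):
print(majority_vote(["left", "right", "left", "left", "feet"]))  # → left
```

Pooling repetitions this way suppresses single-trial misclassifications, which is consistent with the reported 15–17.2% accuracy advantage over single trials.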

    The role of the hippocampus and dorsolateral prefrontal cortex in implicit learning of contextual information

    The intrinsic brain property to automatically detect and encode repeated regularities or contexts present in the environment is essential for organizing information about the environment and guides many aspects of our behavior, including attention. Decades of research into the neurocognitive mechanisms of attention have revealed that visual attention can be controlled by perceptually salient information (bottom-up) or by internal goals and expectations (top-down). However, recent findings have shown that implicit contextual memory (ICM) also plays an important role in guiding attention. Despite the importance of implicit contextual memory in cognition, it is unclear how the brain encodes and retrieves implicit contextual memories, translates them into an attentional control signal, and interacts with the ventral and dorsal frontoparietal attention networks to control the deployment of visual attention. In this thesis, I answer a number of questions about the role of the hippocampus and the DLPFC in implicit contextual memory-guided attention. First, I combine automated segmentation of structural MRI with neurobehavioral assessment of implicit contextual memory-guided attention to test the hypothesis that hippocampal volume would predict the magnitude of implicit contextual learning. Forty healthy subjects underwent 3T magnetic resonance imaging brain scanning with subsequent automatic measurement of the total brain and hippocampal (right and left) volumes. Implicit learning of contextual information was measured using the contextual cueing task. It was shown that both left and right hippocampal volumes positively predict implicit contextual memory performance. This result provides new evidence for hippocampal involvement in implicit contextual memory-guided attention.
Next, I used continuous theta burst stimulation (cTBS) combined with electroencephalography (EEG) to test whether transient disruption of the DLPFC would interfere with implicit learning performance and related electrical brain activity. I applied neuronavigation-guided cTBS to the DLPFC or to the vertex as a control region, prior to the performance of an implicit contextual learning task. It was shown that a transient disruption of the function of the left DLPFC leads to significant enhancement of implicit contextual memory performance. This finding provides novel causal evidence for the interfering role of DLPFC-mediated top-down control on implicit memory-guided attention. Additionally, it was shown that cTBS applied over the left DLPFC significantly decreased task-related beta-band oscillatory activity, suggesting that beta-band oscillatory activity is an index of DLPFC-mediated top-down cognitive control. Together, these results shed light on how implicit memory-guided attention is implemented in the brain.
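The volumetric analysis in the first study amounts to regressing the contextual-cueing benefit on hippocampal volume and testing the sign of the slope. The sketch below illustrates that logic with an ordinary least-squares fit; all numbers are invented for demonstration and are not data from the thesis.

```python
# Hedged illustration of the volume-predicts-learning analysis:
# ordinary least-squares slope of contextual-cueing magnitude
# regressed on hippocampal volume.

def fit_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

volumes = [3.1, 3.4, 3.8, 4.0, 4.3]  # hippocampal volume, cm^3 (invented)
cueing = [12, 15, 21, 22, 27]        # contextual-cueing RT benefit, ms (invented)
print(fit_slope(volumes, cueing) > 0)  # positive slope: larger volume, larger cueing effect
```

A positive slope corresponds to the reported finding that larger left and right hippocampal volumes predict better implicit contextual memory performance.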