
    Rapid modulation of sensory processing induced by stimulus conflict

    Humans are constantly confronted with environmental stimuli that conflict with task goals and can interfere with successful behavior. Prevailing theories propose cognitive control mechanisms that suppress the processing of conflicting input and enhance that of the relevant input. However, the temporal cascade of brain processes invoked in response to conflicting stimuli remains poorly understood. By examining evoked electrical brain responses in a novel, hemifield-specific visual-flanker task, we demonstrate that task-irrelevant conflicting stimulus input is quickly detected in higher-level executive regions while simultaneously inducing rapid, recurrent modulation of sensory processing in the visual cortex. Importantly, both of these effects are larger for individuals with greater incongruency-related RT slowing. The combination of neural activation patterns and behavioral interference effects suggests that this initial sensory modulation reflects performance-degrading attentional distraction caused by the incompatibility of the inputs, rather than any rapid, task-enhancing cognitive control mechanism. The present findings thus provide neural evidence for a model in which attentional distraction is the key initial trigger in the temporal cascade of processes by which the human brain responds to conflicting stimulus input in the environment.

    Differential modulation of performance in insight and divergent thinking tasks with tDCS

    While both insight and divergent thinking tasks are used to study creativity, there are reasons to believe that the two call upon very different mechanisms. To explore this hypothesis, we administered a verbal insight task (riddles) and a divergent thinking task (verbal fluency) to 16 native English speakers and 16 non-native English speakers after they underwent transcranial direct current stimulation (tDCS) of the left middle temporal gyrus and right temporo-parietal junction. We found that, in the case of the insight task, depolarization of the right temporo-parietal junction and hyperpolarization of the left middle temporal gyrus increased performance relative to both the control condition and the reverse stimulation condition in both groups (non-native > native speakers). However, in the case of the divergent thinking task, the same pattern of stimulation decreased performance, compared to the reverse stimulation condition, in the non-native speakers. We explain this dissociation in terms of the differing task demands of divergent thinking and insight tasks, and speculate that the greater sensitivity of non-native speakers to tDCS may reflect less entrenched neural networks for non-native languages.

    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a forward (or free) phase, in which the data are fed to the network, and a backward (or clamped) phase, in which the target signals are clamped to the output layer and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetry at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. This article also shows how the parameters, especially the random matrices, affect learning. We use pseudospectral analysis to further investigate how the random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
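    The two-phase update with a fixed random feedback matrix can be sketched as follows. This is a minimal toy illustration of the idea only, assuming a small feedforward network and a simplified discrete-time update; the network sizes, learning rate, and the way the feedback signal enters the clamped phase are illustrative choices, not the paper's exact continuous-time formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network on XOR, trained with a contrastive-Hebbian-style
# rule. The key point: feedback in the clamped phase flows through a
# FIXED random matrix B instead of the transpose W2.T, so no synaptic
# weight symmetry is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 0.5, (2, 8))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (8, 1))   # hidden -> output weights
B = rng.normal(0, 0.5, (1, 8))    # fixed random feedback matrix

sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.2

for _ in range(2000):
    # Free phase: run the network forward on the data.
    h_free = sigmoid(X @ W1)
    out_free = sigmoid(h_free @ W2)
    # Clamped phase: the targets are clamped at the output, and the
    # output-layer signal is sent back through the random matrix B.
    h_clamp = sigmoid(X @ W1 + (y - out_free) @ B)
    # Contrastive update: Hebbian term from the clamped phase minus
    # anti-Hebbian term from the free phase.
    W2 += lr * (h_clamp.T @ y - h_free.T @ out_free) / len(X)
    W1 += lr * (X.T @ h_clamp - X.T @ h_free) / len(X)

pred = sigmoid(sigmoid(X @ W1) @ W2)
print(np.round(pred.ravel(), 2))
```

    Whether such a rule converges depends on the task and hyperparameters; the sketch only shows the structure of the two phases and the role of B.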

    End-to-End Differentiable Proving

    We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively, taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. Using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules. Comment: NIPS 2017 camera-ready.
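    The soft-unification idea can be illustrated with an RBF kernel on symbol embeddings: instead of requiring two symbols to be identical, as in Prolog, their embeddings are compared and a similarity score in (0, 1] is returned. The embeddings and relation names below are invented for illustration; only the kernel form exp(-||u - v||^2 / (2*mu^2)) reflects the general RBF definition.

```python
import numpy as np

def rbf_unify(u, v, mu=1.0):
    """Soft unification score: exp(-||u - v||^2 / (2*mu^2))."""
    return np.exp(-np.sum((u - v) ** 2) / (2 * mu ** 2))

# Toy symbol embeddings (assumed, not from a trained model).
emb = {
    "grandfatherOf": np.array([0.9, 0.1, 0.0]),
    "grandpaOf":     np.array([0.85, 0.15, 0.05]),
    "locatedIn":     np.array([0.0, 0.2, 0.95]),
}

# Semantically similar relations unify with a high score;
# unrelated ones get a much lower score.
print(rbf_unify(emb["grandfatherOf"], emb["grandpaOf"]))
print(rbf_unify(emb["grandfatherOf"], emb["locatedIn"]))
```

    Because the score is differentiable in the embeddings, gradients from a proof's success can pull the representations of symbols that should unify closer together.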

    True zero-training brain-computer interfacing: an online study

    Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, labeled data are collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. The calibration period should therefore be reduced to a minimum, which is especially important for patients with limited concentration ability. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by using an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not have seen enough data to build a reliable model. By constantly re-analyzing the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach, and of its post-hoc variant, to the standard supervised calibration-based approach for n = 10 healthy users. To assess the learning behavior of our approach, it is trained unsupervised from scratch three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
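    The post-hoc rectification idea can be illustrated on synthetic data. Everything below is a stand-in (random features rather than EEG, and a hand-made "late" classifier rather than the paper's unsupervised learner); the point is only that stored trials decoded poorly by an early, randomly initialized classifier can be re-decoded once a better classifier is available.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linearly separable "trials" (stand-in for stored ERP features).
n, dim = 300, 6
w_true = rng.normal(size=dim)
X = rng.normal(size=(n, dim))          # stored trial features
labels = (X @ w_true > 0).astype(int)  # ground truth, unknown to the BCI

# Early classifier: random initialization, no calibration recording.
w_early = rng.normal(size=dim)
# Late classifier: stand-in for the model after unsupervised learning
# (here simulated as the true direction plus small noise).
w_late = w_true + 0.1 * rng.normal(size=dim)

# Post-hoc re-analysis: the SAME stored trials are decoded again with
# the improved classifier, rectifying early mistakes.
acc_early = np.mean((X @ w_early > 0).astype(int) == labels)
acc_posthoc = np.mean((X @ w_late > 0).astype(int) == labels)
print(acc_early, acc_posthoc)
```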

    Texture Mixer: A Network for Controllable Synthesis and Interpolation of Texture

    This paper addresses the problem of interpolating visual textures. We formulate the problem as requiring (1) by-example controllability and (2) realistic and smooth interpolation among an arbitrary number of texture samples. To solve it, we propose a neural network trained simultaneously on a reconstruction task and a generation task, which can project texture examples onto a latent space where they can be linearly interpolated and projected back onto the image domain, ensuring both intuitive control and realistic results. We show that our method outperforms a number of baselines according to a comprehensive suite of metrics as well as a user study. We further show several applications based on our technique, including texture brush, texture dissolve, and animal hybridization. Comment: Accepted to CVPR'1
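    The core latent-interpolation step can be sketched with placeholder linear maps standing in for the trained encoder and decoder (toy dimensions and random weights, not the paper's network): encode two samples, linearly interpolate their latent codes, and decode the blend.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Z = 64, 8                       # "image" dim and latent dim (toy sizes)
enc = rng.normal(0, 0.1, (D, Z))   # stand-in encoder weights
dec = np.linalg.pinv(enc)          # stand-in decoder (pseudo-inverse)

tex_a = rng.random(D)              # two "texture" samples
tex_b = rng.random(D)

def interpolate(a, b, t):
    """Blend two textures by linearly interpolating their latent codes."""
    z = (1 - t) * (a @ enc) + t * (b @ enc)
    return z @ dec                 # project back to the image domain

mid = interpolate(tex_a, tex_b, 0.5)
print(mid.shape)
```

    In the paper this only yields realistic in-betweens because the network is trained jointly for reconstruction and generation; with arbitrary maps, as here, the blend is merely a linear mixture.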