
    Characterizing object- and position-dependent response profiles to uni- and bilateral stimulus configurations in human higher visual cortex: a 7T fMRI study

    Visual scenes are initially processed via segregated neural pathways dedicated to either of the two visual hemifields. Although higher-order visual areas are generally believed to utilize invariant object representations (abstracted away from features such as stimulus position), recent findings suggest they retain more spatial information than previously thought. Here, we assessed the nature of such higher-order object representations in human cortex using high-resolution fMRI at 7T, supported by corroborative 3T data. We show that multi-voxel activation patterns in both the contra- and ipsilateral hemispheres can be exploited to successfully classify the object category of unilaterally presented stimuli. Moreover, robustly identified rank-order-based response profiles demonstrated a strong contralateral bias which frequently outweighed object category preferences. Finally, we contrasted different combinatorial operations to predict the responses during bilateral stimulation conditions based on responses to their constituent unilateral elements. Results favored a max operation predominantly reflecting the contralateral stimuli. The current findings extend previous work by showing that configuration-dependent modulations in higher-order visual cortex responses, as observed in single-unit activity, have a counterpart in human neural population coding. They furthermore corroborate the emerging view that position coding is a fundamental functional characteristic of ventral visual stream processing.
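The comparison of combinatorial operations described above can be sketched as follows. This is an illustrative toy example, not the study's analysis pipeline: the voxel responses and the observed bilateral pattern are made-up numbers chosen so that, as in the abstract's finding, a contralaterally dominated max rule fits best.

```python
# Hypothetical sketch: comparing combination rules for predicting the
# response to bilateral stimulation from the two unilateral responses.
# All response values below are invented for illustration, not study data.

def predict_bilateral(contra, ipsi, rule):
    """Predict a voxel pattern under bilateral stimulation from unilateral patterns."""
    if rule == "max":
        return [max(c, i) for c, i in zip(contra, ipsi)]
    if rule == "mean":
        return [(c + i) / 2 for c, i in zip(contra, ipsi)]
    if rule == "sum":
        return [c + i for c, i in zip(contra, ipsi)]
    raise ValueError(f"unknown rule: {rule}")

def prediction_error(predicted, observed):
    """Mean squared error between predicted and observed multi-voxel patterns."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

# Toy multi-voxel patterns: a strong contralateral bias means the observed
# bilateral pattern closely tracks the (larger) contralateral responses.
contra   = [1.2, 0.8, 1.5, 0.9]
ipsi     = [0.4, 0.3, 0.6, 0.2]
observed = [1.1, 0.8, 1.4, 0.9]

errors = {rule: prediction_error(predict_bilateral(contra, ipsi, rule), observed)
          for rule in ("max", "mean", "sum")}
best_rule = min(errors, key=errors.get)  # the max rule wins on these toy values
```

With these values the max rule reproduces the contralateral pattern almost exactly, so its error is lowest; in the actual study, the candidate operations would be fit and compared per region across many stimulus conditions.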

    Contextual Encoder-Decoder Network for Visual Saliency Prediction

    Predicting salient regions in natural images requires the detection of objects that are present in a scene. To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information. However, existing models aimed at explaining human fixation maps do not incorporate such a mechanism explicitly. Here we propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task. The architecture forms an encoder-decoder structure and includes a module with multiple convolutional layers at different dilation rates to capture multi-scale features in parallel. Moreover, we combine the resulting representations with global scene information for accurately predicting visual saliency. Our model achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks, and we demonstrate the effectiveness of the suggested approach on five datasets and selected examples. Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone and hence presents a suitable choice for applications with limited computational resources, such as (virtual) robotic systems, to estimate human fixations across complex natural scenes.
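The multi-scale module described above rests on dilated convolutions run in parallel. The following is a minimal sketch of that core idea in plain Python (not the paper's implementation, which operates on 2-D feature maps in a deep network): each branch applies the same kernel at a different dilation rate, enlarging the receptive field without adding parameters, and the branch outputs would then be concatenated.

```python
# Illustrative sketch, not the paper's code: parallel 1-D dilated
# convolutions, the core mechanism of a multi-scale context module.

def dilated_conv1d(signal, kernel, dilation):
    """Valid 1-D convolution with the given dilation rate.

    A kernel of length k with dilation d spans (k - 1) * d + 1 input
    samples, so larger dilations see wider context at the same cost.
    """
    span = (len(kernel) - 1) * dilation
    return [sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]

signal = [0.0, 1.0, 0.0, 2.0, 0.0, 1.0, 0.0, 3.0, 0.0]
kernel = [1.0, 1.0, 1.0]  # simple 3-tap summing kernel

# Parallel branches at increasing dilation rates; in the paper's module the
# per-branch feature maps are concatenated and fused by a further layer.
branches = {d: dilated_conv1d(signal, kernel, d) for d in (1, 2, 4)}
```

Note how the valid output shrinks as the dilation grows (here to lengths 7, 5, and 1); real implementations pad the input so all branches keep the spatial resolution and can be stacked channel-wise.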

    Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex.

    A multiplex connectivity map of valence-arousal emotional model

    A high number of studies have already demonstrated electroencephalography (EEG)-based emotion recognition systems with moderate results. Emotions are classified into discrete and dimensional models; we focused on the latter, which incorporates valence and arousal dimensions. The mainstream methodology is the extraction of univariate measures derived from EEG activity at various frequencies, classifying trials into low/high valence and arousal levels. Here, we evaluated brain connectivity within and between brain frequencies under the multiplexity framework. We analyzed an EEG database called DEAP that contains EEG responses to video stimuli and users’ emotional self-assessments. We adopted a dynamic functional connectivity analysis under the notion of our dominant coupling model (DoCM). DoCM detects the dominant coupling mode per pair of EEG sensors, which can be either within-frequency coupling (intra-frequency) or between-frequency coupling (cross-frequency). DoCM yields an integrated dynamic functional connectivity graph (IDFCG) that keeps both the strength and the preferred dominant coupling mode. We aimed to create a connectomic mapping of the valence-arousal map by employing features derived from the IDFCG. Our results outperformed previous findings, predicting participants’ ratings on the valence and arousal dimensions with high accuracy based on a flexibility index of dominant coupling modes.
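The DoCM step described above can be sketched as a simple argmax over candidate coupling modes, followed by a flexibility index over time windows. This is a hedged toy illustration of the idea, not the authors' pipeline: the coupling strengths below are invented values, and in practice they would be estimated from band-filtered EEG (e.g. by phase- or amplitude-coupling measures) per sliding window.

```python
# Hedged sketch of the dominant-coupling-mode (DoCM) idea. For each sensor
# pair, a coupling strength is estimated for every candidate mode -- within
# one band (e.g. alpha-alpha) or across bands (e.g. theta-gamma) -- and the
# strongest mode is kept along with its strength. Values here are made up.

def dominant_coupling(strengths):
    """Return (mode, strength) with maximal coupling for one sensor pair."""
    mode = max(strengths, key=strengths.get)
    return mode, strengths[mode]

# Candidate modes for one EEG sensor pair (illustrative strengths in [0, 1]).
pair_strengths = {
    ("alpha", "alpha"): 0.41,  # intra-frequency coupling
    ("theta", "theta"): 0.37,  # intra-frequency coupling
    ("theta", "gamma"): 0.58,  # cross-frequency coupling
    ("alpha", "beta"):  0.22,  # cross-frequency coupling
}
mode, strength = dominant_coupling(pair_strengths)

def flexibility(modes_over_time):
    """Fraction of consecutive windows in which the dominant mode switches.

    A flexibility index of this kind is the feature family used for
    valence/arousal prediction in the abstract above.
    """
    changes = sum(a != b for a, b in zip(modes_over_time, modes_over_time[1:]))
    return changes / (len(modes_over_time) - 1)
```

Keeping both the winning mode and its strength per pair and per window is what makes the resulting graph "integrated": one network simultaneously encodes connection weights and the multiplex (intra- vs cross-frequency) character of each edge.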

    Playing Charades in the fMRI: Are Mirror and/or Mentalizing Areas Involved in Gestural Communication?

    Communication is an important aspect of human life, allowing us to powerfully coordinate our behaviour with that of others. Boiled down to its mere essentials, communication entails transferring a mental content from one brain to another. Spoken language obviously plays an important role in communication between human individuals. Manual gestures, however, often aid the semantic interpretation of the spoken message, and gestures may have played a central role in the earlier evolution of communication. Here we used the social game of charades to investigate the neural basis of gestural communication by having participants produce and interpret meaningful gestures while their brain activity was measured using functional magnetic resonance imaging. While participants decoded observed gestures, the putative mirror neuron system (pMNS: premotor, parietal and posterior mid-temporal cortex), associated with motor simulation, and the temporo-parietal junction (TPJ), associated with mentalizing and agency attribution, were significantly recruited. Of these areas, only the pMNS was recruited during the production of gestures. This suggests that gestural communication relies on a combination of simulation and, during decoding, mentalizing/agency attribution brain areas. Comparing the decoding of gestures with a condition in which participants viewed the same gestures with an instruction not to interpret them showed that although parts of the pMNS responded more strongly during active decoding, most of the pMNS and the TPJ did not show such significant task effects. This suggests that the mere observation of gestures recruits most of the system involved in voluntary interpretation.

    The Constructive Nature of Affective Vision: Seeing Fearful Scenes Activates Extrastriate Body Area

    Basic emotions like fear or anger prepare the brain to act adaptively, so scenes representing emotional events are normally associated with characteristic adaptive behavior. Face and body representation areas in the brain are normally modulated by these emotions when they are expressed in the face or body. Here, we provide neuroimaging evidence (using functional magnetic resonance imaging) that the extrastriate body area (EBA) is highly responsive when subjects observe isolated faces presented in emotional scenes. This response of the EBA to threatening scenes in which no body is present gives rise to speculation about its function. We discuss the possibility that the brain reacts proactively to the emotional meaning of the scene.