
    Linking pain and the body: neural correlates of visually induced analgesia

    The visual context of seeing the body can reduce the experience of acute pain, producing a multisensory analgesia. Here we investigated the neural correlates of this “visually induced analgesia” using fMRI. We induced acute pain with an infrared laser while human participants looked either at their stimulated right hand or at another object. Behavioral results confirmed the expected analgesic effect of seeing the body, while fMRI results revealed an associated reduction of laser-induced activity in ipsilateral primary somatosensory cortex (SI) and contralateral operculoinsular cortex during the visual context of seeing the body. We further identified two known cortical networks activated by sensory stimulation: (1) a set of brain areas consistently activated by painful stimuli (the so-called “pain matrix”), and (2) an extensive set of posterior brain areas activated by the visual perception of the body (the “visual body network”). Connectivity analyses via psychophysiological interactions revealed that the visual context of seeing the body increased effective connectivity (i.e., functional coupling) between posterior parietal nodes of the visual body network and the purported pain matrix. Increased connectivity with these posterior parietal nodes was seen for several pain-related regions, including somatosensory area SII, anterior and posterior insula, and anterior cingulate cortex. These findings suggest that visually induced analgesia does not involve an overall reduction of the cortical response elicited by laser stimulation, but rather follows from the interplay between the brain's pain network and a posterior network for body perception, resulting in modulation of the experience of pain.
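    The psychophysiological-interaction (PPI) analysis this abstract describes amounts to adding a seed-timecourse × condition interaction regressor to a per-voxel GLM. The sketch below illustrates that regressor construction with placeholder data; the seed/target signals, scan count, and condition coding are all hypothetical, and a real PPI additionally deconvolves the BOLD signal before forming the product and reconvolves with the hemodynamic response.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

seed = rng.standard_normal(n_scans)             # seed (posterior parietal) timecourse
context = np.repeat([1.0, -1.0], n_scans // 2)  # psychological factor: view hand vs. object
ppi = seed * context                            # the interaction (PPI) regressor

# GLM for one target voxel: both main effects plus the interaction term
X = np.column_stack([np.ones(n_scans), seed, context, ppi])
y = rng.standard_normal(n_scans)                # target-region timecourse (placeholder)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the context-dependent change in seed-target coupling
```

    A positive beta[3] would correspond to the paper's finding: stronger coupling between the visual body network and pain-related regions when viewing the hand.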

    How we see

    The visual world is imaged on the retinas of our eyes. However, "seeing" is not a result of neural functions within the eyes but rather a result of what the brain does with those images. Our visual perceptions are produced by parts of the cerebral cortex dedicated to vision. Although our visual awareness appears unitary, different parts of the cortex analyze color, shape, motion, and depth information. There are also special mechanisms for visual attention, spatial awareness, and the control of actions under visual guidance. Often lesions from stroke or other neurological diseases will impair one of these subsystems, leading to unusual deficits such as the inability to recognize faces, the loss of awareness of half of visual space, or the inability to see motion or color.

    Linguistic Illusions

    The reader is probably familiar with optical illusions, where one doesn't think one sees what in fact one is seeing. For example, the reader may be asked to judge which of two lines is longer. From the way that the two lines are presented, one line does indeed look longer than the other. However, on measuring the lines with a ruler, they are found to be identical in length. This is an optical illusion -- your eyes and brain have tricked you into seeing something that isn't really so.

    Asymmetry pays: visual lateralization improves discrimination success in pigeons

    Functional cerebral asymmetries, once thought to be exclusively human, are now accepted to be a widespread principle of brain organization in vertebrates [1]. The prevalence of lateralization makes it likely that it has some major advantage. Until now, however, conclusive evidence has been lacking. To analyze the relation between the extent of cerebral asymmetry and the degree of performance in visual foraging, we studied grain–grit discrimination success in pigeons, a species with a left hemisphere dominance for visual object processing [2,3]. The birds performed the task under left-eye, right-eye, or binocular seeing conditions. In most animals, right-eye seeing was superior to left-eye seeing performance, and binocular performance was higher than each monocular level. The absolute difference between left- and right-eye levels was defined as a measure for the degree of visual asymmetry. Animals with higher asymmetries were more successful in discriminating grain from grit under binocular conditions. This shows that an increase in visual asymmetry enhances success in visually guided foraging. Possibly, asymmetries of the pigeon’s visual system increase the computational speed of object recognition processes by concentrating them into one hemisphere while preventing the other side of the brain from initiating conflicting search sequences of its own.
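    The asymmetry measure the abstract defines (absolute left/right-eye performance difference) and its relation to binocular success can be sketched in a few lines; the per-bird scores below are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical per-bird discrimination scores (proportion correct)
left_eye  = np.array([0.62, 0.58, 0.70, 0.55, 0.66])
right_eye = np.array([0.78, 0.74, 0.72, 0.69, 0.80])
binocular = np.array([0.85, 0.80, 0.79, 0.73, 0.88])

# Degree of visual lateralization, as defined in the paper
asymmetry = np.abs(right_eye - left_eye)

# The paper's finding corresponds to a positive correlation:
# more lateralized birds forage better binocularly
r = np.corrcoef(asymmetry, binocular)[0, 1]
```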

    Analysis of the Convolutional Neural Network Model in Detecting Brain Tumor

    Detecting brain tumors is an active area of research in brain image processing. This paper proposes a methodology to segment and classify brain tumors using magnetic resonance images (MRI). Convolutional Neural Networks (CNN) are one of the effective detection methods and have been employed for tumor segmentation. We optimized the total number of layers and epochs in the model. First, we ran the CNN for 1000 epochs to identify the best-performing epoch count. Then we considered six models, increasing the number of layers from one to six. This allows us to observe how overfitting varies with the number of layers.
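    The depth-comparison protocol described above can be sketched as a loop over model configurations; the filter counts, kernel sizes, and output classes below are illustrative assumptions, since the abstract specifies only the number of convolutional layers and the epoch budget.

```python
# Build six hypothetical CNN configurations, one to six conv blocks,
# to compare training vs. validation loss and spot overfitting with depth.
def make_config(n_conv_layers, epochs=1000):
    layers = []
    for i in range(n_conv_layers):
        layers.append({"type": "conv2d", "filters": 32 * 2 ** min(i, 2),
                       "kernel": (3, 3), "activation": "relu"})
        layers.append({"type": "maxpool", "size": (2, 2)})
    layers += [{"type": "flatten"},
               {"type": "dense", "units": 2, "activation": "softmax"}]  # tumor / no tumor
    return {"layers": layers, "epochs": epochs}

configs = [make_config(n) for n in range(1, 7)]
```

    Each config would then be instantiated in a deep-learning framework and trained on the same MRI split, so that only depth varies between runs.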

    KLASIFIKASI POLA IMAGE PADA PASIEN TUMOR OTAK BERBASIS JARINGAN SYARAF TIRUAN ( STUDI KASUS PENANGANAN KURATIF PASIEN TUMOR OTAK )

    Medical science has developed rapidly, and diagnostic and treatment techniques have improved life expectancy for patients. One necessary examination for brain tumor patients is radiological imaging, including contrast-enhanced MRI. MRI brain images are useful for seeing tumors in the initial steps of diagnosis and for classifying erosive/destructive lesions of the skull. Image smoothing, segmentation with the Otsu method, and feature extraction are carried out to facilitate the training and testing process. This study applies texture analysis with the parameters contrast, correlation, energy, and homogeneity to distinguish the texture of brain tumor images from normal ones, so as to produce a gold-standard value based on the existing texture characteristics. Training and testing of texture features use the backpropagation method of artificial neural networks with varied learning rate values, with the aim of obtaining a classification of the image condition of patients with brain tumors. The data used are 29 brain images, yielding a classification accuracy of 96.55%.
    Keywords: MRI images, brain tumors, texture, backpropagation
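    The four texture parameters named in this abstract are standard gray-level co-occurrence matrix (GLCM) statistics. A minimal sketch of their computation follows; the quantization level and pixel offset are choices of this illustration, not values reported by the study, and the resulting feature vector would feed the backpropagation classifier.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Contrast, correlation, energy, homogeneity from a normalized GLCM."""
    # Quantize intensities to `levels` gray levels
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):                      # accumulate co-occurrences
        for x in range(w - dx):                  # at the chosen offset
            P[q[y, x], q[y + dy, x + dx]] += 1
    P /= P.sum()                                 # normalize to probabilities
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-9),
        "energy": (P ** 2).sum(),
        "homogeneity": (P / (1 + np.abs(i - j))).sum(),
    }

feats = glcm_features(np.random.default_rng(0).random((32, 32)))
```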

    Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding

    Decoding visual stimuli from brain recordings aims to deepen our understanding of the human visual system and build a solid foundation for bridging human and computer vision through the Brain-Computer Interface. However, reconstructing high-quality images with correct semantics from brain recordings is a challenging problem due to the complex underlying representations of brain signals and the scarcity of data annotations. In this work, we present MinD-Vis: Sparse Masked Brain Modeling with Double-Conditioned Latent Diffusion Model for Human Vision Decoding. Firstly, we learn an effective self-supervised representation of fMRI data using mask modeling in a large latent space inspired by the sparse coding of information in the primary visual cortex. Then by augmenting a latent diffusion model with double-conditioning, we show that MinD-Vis can reconstruct highly plausible images with semantically matching details from brain recordings using very few paired annotations. We benchmarked our model qualitatively and quantitatively; the experimental results indicate that our method outperformed state-of-the-art in both semantic mapping (100-way semantic classification) and generation quality (FID) by 66% and 41% respectively. An exhaustive ablation study was also conducted to analyze our framework.
    Comment: 8 pages, 9 figures, 2 tables, accepted by CVPR2023, see https://mind-vis.github.io/ for more information
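    The masked-modeling pretraining step described here follows the usual recipe: hide a large fraction of input patches and train an encoder-decoder to reconstruct them. A toy sketch of the masking stage, with a hypothetical patch grid and the high mask ratio typical of sparse masked modeling (the exact ratio used by MinD-Vis is not stated in this abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, patch_dim = 64, 16        # hypothetical fMRI patch tokenization
mask_ratio = 0.75                    # high ratio, typical of masked modeling

patches = rng.standard_normal((n_patches, patch_dim))
n_masked = int(mask_ratio * n_patches)
masked_idx = rng.choice(n_patches, size=n_masked, replace=False)

visible = np.delete(patches, masked_idx, axis=0)
# An encoder would see only `visible`; a decoder is trained to
# reconstruct `patches[masked_idx]` from the encoded visible tokens,
# yielding the self-supervised fMRI representation used for decoding.
```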

    Affordances of distractors and compatibility effects: a study with the computational model TRoPICALS

    Seeing an object activates in the brain both visual and action codes. Crucial evidence supporting this view is offered by compatibility effect experiments (Ellis et al. (2007). J Exp Psychol: Hum Percept Perform): perception of an object can facilitate or interfere with the execution of an action (e.g. grasping) even when the viewer has no intention of interacting with the object. TRoPICALS (Caligiore et al. (2010). Psychol Rev) is a computational model developed to study compatibility effects. It provides a general hypothesis about the brain mechanisms underlying compatibility effects, suggesting that the top-down bias from prefrontal cortex (PFC), and its agreement or disagreement with the affordances of objects, plays a key role in such phenomena. Compatibility effects have been investigated in the presence of a distractor object in (Ellis et al. (2007). J Exp Psychol: Hum Percept Perform). The reaction times (RTs) results confirmed compatibility effects found in previous experiments without the distractor. Interestingly, results also showed an unexpected effect of the distractor: responding to a target with a grip compatible with the size of the distractor produced slower RTs in comparison to the incompatible case. Here we present an enhanced version of TRoPICALS that reproduces and explains these new results. This explanation is based on the idea that PFC might play a double role in its top-down guidance of action selection, producing: (a) a positive bias in favor of the action requested by the experimental task; and (b) a negative bias directed at inhibiting the action evoked by the distractor. The model also provides two testable predictions on the possible consequences for compatibility effects of the target and distractor objects in Parkinson's disease patients with damage to inhibitory circuits.
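    The dual PFC bias hypothesized above can be illustrated with a toy readout in which reaction time falls as the net drive on the requested action rises. All parameter values and the functional form are illustrative assumptions, not fitted quantities from TRoPICALS; the sketch only shows how an inhibitory cost tied to the distractor's affordance reproduces the slower RTs for a compatible distractor.

```python
def simulated_rt(target_affordance, distractor_affordance,
                 pfc_excite=1.0, pfc_inhibit=0.6, base=300.0, gain=120.0):
    """Toy readout of the dual PFC bias: a positive bias on the
    task-requested action and a negative bias on the distractor-evoked
    action. Stronger net drive -> faster response. All values illustrative."""
    drive = pfc_excite * target_affordance - pfc_inhibit * distractor_affordance
    return base + gain / max(drive, 1e-6)

# A distractor compatible with the response grip competes more strongly,
# so its affordance input (and hence the inhibition cost) is higher:
rt_compatible   = simulated_rt(1.0, distractor_affordance=0.8)
rt_incompatible = simulated_rt(1.0, distractor_affordance=0.3)
# rt_compatible exceeds rt_incompatible, matching the reported slowdown
```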