
    Event-related alpha suppression in response to facial motion

    This article has been made available through the Brunel Open Access Publishing Fund. While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in alpha power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of the facial animations did not differ, so participants' decisions were based solely on differences in facial movement. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting that early visual processing remains unaffected by these manipulations. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so at the shortest latency. Increased activity within this region may reflect greater attentional reorienting to natural facial motion, but also involvement of areas associated with the visual control of body effectors. © 2014 Girges et al.
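Alpha suppression of the kind measured here is commonly quantified as event-related desynchronization (ERD): the percentage change in alpha-band power relative to a pre-stimulus baseline, with negative values indicating suppression. A minimal illustrative sketch, not the authors' pipeline; the sampling rate, window lengths, and band edges are assumptions:

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Mean power in the alpha band from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(baseline, event, fs):
    """Event-related desynchronization: % change in alpha power
    from a pre-stimulus baseline window to a post-stimulus window.
    Negative values indicate alpha suppression."""
    p_base = alpha_power(baseline, fs)
    p_event = alpha_power(event, fs)
    return 100.0 * (p_event - p_base) / p_base

# Synthetic demo: a 10 Hz oscillation whose amplitude halves after stimulus onset.
fs = 250
t = np.arange(0, 1.0, 1.0 / fs)
baseline = np.sin(2 * np.pi * 10 * t)
event = 0.5 * np.sin(2 * np.pi * 10 * t)
print(round(erd_percent(baseline, event, fs), 1))  # -75.0: halving amplitude quarters power
```

Because power scales with the square of amplitude, halving the oscillation's amplitude yields a 75% power reduction, i.e. an ERD of -75%.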

    Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information

    Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be the integration of concordant information from multiple sensory channels at common sites of convergence, known as multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, or for activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial wavelet bandpass filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information, while controlling for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information, a midfrequency (MF) filtered condition containing place of articulation information, and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were determined by the presence of activity for both the MF and UF conditions relative to the LF condition. Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG) and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity resulting from visual motion information, this study identifies potential MSI sites that we believe are involved in improved speech perception intelligibility.
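The spatial-frequency filtering manipulation can be approximated by a band-pass mask in the 2-D Fourier domain. A minimal sketch, assuming a hard radial-frequency mask rather than the wavelet filters actually used; the cutoffs (in cycles per image) are illustrative:

```python
import numpy as np

def spatial_bandpass(img, low_cpi, high_cpi):
    """Keep spatial frequencies between low_cpi and high_cpi (cycles per image).
    A crude stand-in for a wavelet band-pass filter."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)                 # radial spatial frequency of each bin
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Demo: a frame that superimposes a coarse and a fine vertical grating.
n = 64
x = np.arange(n)
low = np.sin(2 * np.pi * 2 * x / n)       # 2 cycles/image (gross structure)
high = np.sin(2 * np.pi * 16 * x / n)     # 16 cycles/image (fine detail)
frame = np.tile(low + high, (n, 1))
mf = spatial_bandpass(frame, 8, 24)       # mid-frequency band keeps only the fine grating
```

Passing the mid-frequency band removes the coarse 2-cycle component while leaving the 16-cycle component intact, which is the logic behind contrasting LF, MF, and UF conditions.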

    Audiovisual integration of emotional signals from others' social interactions

    Audiovisual perception of emotions has been typically examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask if the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals, irrespective of social stimulus complexity.
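The reliability-dependent reweighting reported here is the signature of statistically optimal cue combination, in which each cue contributes in proportion to its inverse variance. A toy sketch with made-up numbers, not the authors' model:

```python
def combine_cues(visual, auditory, var_v, var_a):
    """Inverse-variance (reliability) weighting of two cue estimates.
    Returns the combined estimate and the weight given to vision."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    return w_v * visual + w_a * auditory, w_v

# Clear audio: both cues equally reliable, so each gets weight 0.5.
est, w_v = combine_cues(visual=0.8, auditory=0.2, var_v=1.0, var_a=1.0)

# Degraded audio: auditory variance quadruples, so the visual weight rises to 0.8
# and the combined estimate shifts toward the visual cue.
est_noisy, w_v_noisy = combine_cues(visual=0.8, auditory=0.2, var_v=1.0, var_a=4.0)
print(w_v, w_v_noisy)  # 0.5 0.8
```

Degrading one channel does not switch integration off; it just redistributes the weights, which matches the pattern found in both experiments.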

    DRIMET: Deep Registration for 3D Incompressible Motion Estimation in Tagged-MRI with Application to the Tongue

    Tagged magnetic resonance imaging (MRI) has been used for decades to observe and quantify the detailed motion of deforming tissue. However, this technique faces several challenges such as tag fading, large motion, long computation times, and difficulties in obtaining diffeomorphic incompressible flow fields. To address these issues, this paper presents a novel unsupervised phase-based 3D motion estimation technique for tagged MRI. We introduce two key innovations. First, we apply a sinusoidal transformation to the harmonic phase input, which enables end-to-end training and avoids the need for phase interpolation. Second, we propose a Jacobian determinant-based learning objective to encourage incompressible flow fields for deforming biological tissues. Our method efficiently estimates 3D motion fields that are accurate, dense, and approximately diffeomorphic and incompressible. The efficacy of the method is assessed using human tongue motion during speech, and includes both healthy controls and patients who have undergone glossectomy. We show that the method outperforms existing approaches, and also exhibits improvements in speed and in robustness to tag fading and large tongue motion. Comment: Accepted to MIDL 2023 (full paper).
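The incompressibility objective rests on the fact that a deformation preserves volume exactly where the determinant of its Jacobian equals 1. A finite-difference sketch of such a penalty in NumPy; the paper applies its loss to the output of a learned network, so this only illustrates the determinant computation, not the actual method:

```python
import numpy as np

def jacobian_det(flow):
    """det(I + grad(u)) for a dense 3D displacement field u of shape (3, D, H, W),
    using central finite differences."""
    grads = np.stack([np.gradient(flow[i], axis=(0, 1, 2)) for i in range(3)])
    # grads[i, j] = d u_i / d x_j ; build J = I + du/dx at every voxel
    J = grads + np.eye(3)[:, :, None, None, None]
    # explicit 3x3 determinant, evaluated voxelwise
    det = (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
         - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
         + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))
    return det

def incompressibility_loss(flow):
    """Mean |det J - 1|: zero for a volume-preserving deformation."""
    return np.abs(jacobian_det(flow) - 1.0).mean()

zero_flow = np.zeros((3, 8, 8, 8))
print(incompressibility_loss(zero_flow))  # 0.0 — the identity map preserves volume
```

A pure shear (e.g. u_x proportional to y) also yields det J = 1 everywhere, so the penalty discourages local expansion or compression without forbidding shape change.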

    Sensitivity to fine-grained and coarse visual information: The effect of blurring on anticipation skill

    Copyright @ 2009 Edizione l Pozzi. We examined skilled tennis players' ability to perceive fine and coarse information by assessing their ability to predict serve direction under three levels of visual blur. A temporal occlusion design was used in which skilled players viewed serves struck by two players, occluded at one of four points relative to ball-racquet impact (-320 ms, -160 ms, 0 ms, +160 ms) and shown with one of three levels of blur (no blur, 20% blur, 40% blur). Using a within-task criterion to establish good and poor anticipators, the results revealed a significant interaction between anticipation skill and level of blur. Anticipation skill was significantly disrupted in the 20% blur condition; however, judgment accuracy of both groups then improved in the 40% blur condition while confidence in judgments declined. We conclude that there is evidence for processing of coarse configural information, but that anticipation skill in this task was primarily driven by perception of fine-grained information. This research was supported by a University of Hong Kong Seed Funding for Basic Research grant awarded to the second author.
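Graded blur manipulations of this kind are typically implemented as Gaussian low-pass filtering, with the blur level set by the kernel's standard deviation; how "20%/40% blur" maps onto a sigma is the original authors' choice and is not reproduced here. A minimal separable-convolution sketch:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(image, sigma):
    """2-D Gaussian blur via two separable 1-D convolutions (rows, then columns)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

# Demo: blurring an impulse spreads its energy but (away from the borders)
# conserves it, since the kernel is normalized.
img = np.zeros((21, 21))
img[10, 10] = 1.0
smoothed = blur(img, 2.0)
```

Increasing sigma removes progressively higher spatial frequencies, which is what makes fine-grained cues unavailable while coarse configural structure survives.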

    The Complementary Brain: From Brain Dynamics To Conscious Experiences

    How do our brains so effectively achieve adaptive behavior in a changing world? Evidence is reviewed that brains are organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact, and offers an alternative to the computer metaphor, which suggests that brains are organized into independent modules. Examples from perception, learning, cognition, and action are described, and theoretical concepts and mechanisms by which complementarity is accomplished are summarized. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-1-0657).

    Event segmentation and biological motion perception in watching dance

    We used a combination of behavioral, computational vision and fMRI methods to examine human brain activity while viewing a 386 s video of a solo Bharatanatyam dance. A computational analysis provided us with a Motion Index (MI) quantifying the silhouette motion of the dancer throughout the dance. A behavioral analysis using 30 naïve observers provided us with the time points where observers were most likely to report event boundaries where one movement segment ended and another began. These behavioral and computational data were used to interpret the brain activity of a different set of 11 naïve observers who viewed the dance video while brain activity was measured using fMRI. Results showed that the Motion Index related to brain activity in a single cluster in the right Inferior Temporal Gyrus (ITG) in the vicinity of the Extrastriate Body Area (EBA). Perception of event boundaries in the video was related to the BA44 region of right Inferior Frontal Gyrus as well as extensive clusters of bilateral activity in the Inferior Occipital Gyrus which extended in the right hemisphere towards the posterior Superior Temporal Sulcus (pSTS).
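A silhouette-based motion index can be illustrated as the number of silhouette pixels that change between consecutive frames; the authors' actual MI computation may differ. A toy sketch:

```python
import numpy as np

def motion_index(frames, threshold=0.5):
    """Per-transition motion score: count of silhouette pixels that change
    between consecutive frames. frames: array of shape (T, H, W), values in [0, 1]."""
    sil = frames > threshold                      # binarize into silhouettes
    diff = np.logical_xor(sil[1:], sil[:-1])      # pixels that flipped between frames
    return diff.reshape(len(frames) - 1, -1).sum(axis=1)

# Demo: a 1-pixel-wide vertical bar sweeping right by one column per frame.
T, H, W = 4, 5, 8
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, :, t] = 1.0
mi = motion_index(frames)
print(mi)  # each step changes 2 columns of 5 pixels -> [10 10 10]
```

Peaks in such a time series can then be compared against behaviorally reported event boundaries, as in the analysis described above.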

    Theories of developmental dyslexia: Insights from a multiple case study of dyslexic adults

    A multiple case study was conducted to assess three leading theories of developmental dyslexia: the phonological, the magnocellular (auditory and visual) and the cerebellar theories. Sixteen dyslexic and 16 control university students were administered a full battery of psychometric, phonological, auditory, visual and cerebellar tests. Individual data reveal that all 16 dyslexics suffer from a phonological deficit, 10 from an auditory deficit, 4 from a motor deficit, and 2 from a visual magnocellular deficit. Results suggest that a phonological deficit can appear in the absence of any other sensory or motor disorder, and is sufficient to cause a literacy impairment, as demonstrated by 5 of the dyslexics. Auditory disorders, when present, aggravate the phonological deficit, hence the literacy impairment. However, auditory deficits cannot be characterised simply as rapid auditory processing problems, as would be predicted by the magnocellular theory. Nor are they restricted to speech. Contrary to the cerebellar theory, we find little support for the notion that motor impairments, when found, have a cerebellar origin, or reflect an automaticity deficit. Overall, the present data support the phonological theory of dyslexia, while acknowledging the presence of additional sensory and motor disorders in certain individuals.

    Developmental dyslexia: specific phonological deficit or general sensorimotor dysfunction?

    Dyslexia research is now facing an intriguing paradox: it is becoming increasingly clear that a significant proportion of dyslexics present sensory and motor deficits; however, as this "sensorimotor syndrome" is being studied in greater detail, it is also becoming increasingly clear that sensory and motor deficits will play only a limited role in a general causal explanation of specific reading disability.