
    The Effect of Looming and Receding Sounds on the Perceived In-Depth Orientation of Depth-Ambiguous Biological Motion Figures

    BACKGROUND: Research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are represented not merely visually but audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws, presented without sound or paired with either looming or receding sounds, to plws without sound but with perspective cues making them objectively face either towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually more looming becomes harder (easier) when the orthographic plw is paired with looming sounds. CONCLUSIONS/SIGNIFICANCE: The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws.
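
    As an illustration of the kind of stimulus manipulation described above, the sketch below generates footstep-synchronized noise bursts whose overall intensity rises (looming), falls (receding), or stays flat (stationary). It is a minimal sketch, not the authors' stimulus code; the step rate, burst duration, and 12 dB ramp are illustrative assumptions.

```python
# Hedged sketch: footstep-synchronized bursts with a rising or falling
# intensity envelope. All parameter values are illustrative assumptions.
import numpy as np

def footstep_train(direction="looming", n_steps=8, step_interval=0.5,
                   burst_dur=0.05, ramp_db=12.0, sr=44100):
    signal = np.zeros(int(n_steps * step_interval * sr))
    # Per-step gains: rising for looming, falling for receding, flat otherwise.
    db = np.linspace(-ramp_db / 2, ramp_db / 2, n_steps)
    if direction == "receding":
        db = db[::-1]
    elif direction == "stationary":
        db = np.zeros(n_steps)
    for i, gain_db in enumerate(db):
        start = int(i * step_interval * sr)
        n = int(burst_dur * sr)
        burst = np.random.randn(n) * np.hanning(n)   # noise burst as a footstep
        signal[start:start + n] = burst * 10 ** (gain_db / 20)
    return signal
```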

    Processing of rhythmical acoustic patterns in the domestic chicks. A behavioral exploration

    The spontaneous tendency to synchronize with a musical beat is a human universal. Recently, it has been convincingly observed also in some non-human species [1-4]. However, why synchronization ability would be present in animals is still not clear. One possibility is that synchronized behaviour has been shaped by evolution because of the predictability of rhythmic locomotion sounds [5]. In humans, an organism's locomotion is encoded either by listening to the sound of rhythmic footsteps [6] or by visual analysis of rhythmically walking animals depicted by simple point-light displays [7]. Such visual point-light displays are recognized as biologically relevant stimuli by non-human animals as well [8]. Hence, raw mechanisms for the visual recognition of living organisms, available at birth and shared across species [9], could be accompanied by universal acoustic building blocks of the sounds of moving animals. To address this possibility, we presented 50 chicks (Gallus gallus) with rhythmic and arrhythmic acoustic patterns at either 120 BPM or 80 BPM. In a circular semi-dark environment, four symmetrically placed speakers delivered the stimuli sequentially, in circular transition. Chicks responded to rhythmic and arrhythmic acoustic patterns in a comparable fashion, following the circular presentation of the 120 BPM acoustic patterns but not that of the 80 BPM patterns. This result is in line with chicks' spontaneous preference for the normal rate of maternal clucking, at about 120-130 BPM [10], and means that faster rhythmic and arrhythmic patterns are both associated with recognition of a living organism. In a separate condition, chicks placed within the same experimental environment listened to a continuous modulated sound; here we observed a complete reduction in motor activity. In the absence of pauses or accents defining an acoustic structure, chicks do not identify the presence of an organism that is worth following.
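
    A minimal sketch of how the two pattern classes could be constructed: isochronous onsets for the rhythmic condition, and randomly jittered inter-onset intervals with the same mean tempo for the arrhythmic condition. The jitter scheme is an illustrative assumption, not the study's exact stimulus definition.

```python
# Hedged sketch: rhythmic vs. arrhythmic onset patterns at a given tempo.
import numpy as np

def onset_times(bpm=120, n_events=16, rhythmic=True, seed=0):
    ioi = 60.0 / bpm                       # mean inter-onset interval (s)
    if rhythmic:
        iois = np.full(n_events, ioi)
    else:
        rng = np.random.default_rng(seed)
        # Arrhythmic: random IOIs rescaled to preserve the same mean tempo.
        iois = rng.uniform(0.5 * ioi, 1.5 * ioi, n_events)
        iois *= ioi / iois.mean()
    return np.cumsum(iois)

print(onset_times(120, 8, rhythmic=True))    # onsets 0.5 s apart
print(onset_times(80, 8, rhythmic=False))    # same mean rate, irregular
```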

    Neuronal bases of structural coherence in contemporary dance observation

    The neuronal processes underlying dance observation have been the focus of an increasing number of brain imaging studies over the past decade. However, the existing literature has mainly dealt with effects of motor and visual expertise, whereas the neural and cognitive mechanisms underlying the interpretation of dance choreographies have remained unexplored. Hence, much attention has been given to the Action Observation Network (AON), whereas the role of other potentially relevant neuro-cognitive mechanisms, such as mentalizing (theory of mind) or language (narrative comprehension), in dance understanding is yet to be elucidated. We report the results of an fMRI study in which the structural coherence of short contemporary dance choreographies was manipulated parametrically using the same taped movement material. Our participants were all trained dancers. The whole-brain analysis indicates that the interpretation of structurally coherent dance phrases involves a subpart (superior parietal) of the AON as well as mentalizing regions in the dorsomedial prefrontal cortex. An ROI analysis based on a similar study using linguistic materials (Pallier et al. 2011) suggests that structural processing in language and dance might share certain neural mechanisms.
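
    For readers unfamiliar with parametric fMRI designs, the sketch below shows one common way such a coherence manipulation can enter a GLM: each phrase's coherence level scales a boxcar regressor that is then convolved with a hemodynamic response function. The gamma-shaped HRF and all timings are simplifying assumptions, not the study's actual design matrix.

```python
# Hedged sketch of a parametric-modulation regressor (assumed, illustrative).
import numpy as np

def parametric_regressor(onsets, durations, coherence, tr=2.0, n_scans=200):
    t = np.arange(n_scans) * tr
    boxcar = np.zeros(n_scans)
    levels = np.asarray(coherence, dtype=float)
    levels -= levels.mean()                     # mean-center the modulator
    for on, dur, level in zip(onsets, durations, levels):
        boxcar[(t >= on) & (t < on + dur)] = level
    hrf_t = np.arange(0.0, 30.0, tr)
    hrf = hrf_t ** 5 * np.exp(-hrf_t)           # crude gamma-shaped HRF
    return np.convolve(boxcar, hrf / hrf.sum())[:n_scans]

# e.g. three phrases at 10 s, 70 s, 130 s with coherence levels 1..3:
reg = parametric_regressor([10, 70, 130], [20, 20, 20], [1, 2, 3])
```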

    Interactions between Auditory and Visual Semantic Stimulus Classes: Evidence for Common Processing Networks for Speech and Body Actions

    Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects the computational demands of integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show a greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions), but not combinations of meaningful with meaningless stimuli, lead to an increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.
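
    The superadditivity criterion mentioned above (a bimodal response exceeding the sum of the two unimodal responses, AV > A + V) can be stated concretely. The sketch below is an illustrative version of such a test across subjects; the beta values and the one-sided t-test setup are assumptions, not the study's analysis code.

```python
# Hedged sketch of a superadditivity screen: is the bimodal ROI response
# reliably larger than the sum of the unimodal responses across subjects?
import numpy as np
from scipy import stats

def superadditive(beta_av, beta_a, beta_v, alpha=0.05):
    # One-sided t-test of AV - (A + V) > 0.
    diff = np.asarray(beta_av) - (np.asarray(beta_a) + np.asarray(beta_v))
    t, p = stats.ttest_1samp(diff, 0.0)
    return t > 0 and (p / 2) < alpha

beta_av = [1.9, 2.1, 1.8, 2.3]   # hypothetical per-subject ROI betas
beta_a  = [1.0, 0.9, 1.1, 1.2]
beta_v  = [0.8, 1.0, 0.7, 0.9]
print(superadditive(beta_av, beta_a, beta_v))
```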

    Affective iconic words benefit from additional sound–meaning integration in the left amygdala

    Recent studies have shown that a similarity between the sound and meaning of a word (i.e., iconicity) can provide more ready access to the meaning of that word, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non-iconic counterparts, elicited additional BOLD responses in the left amygdala, known for its role in the multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of iconic condition and activations in two hubs representative of processing the sound (left superior temporal gyrus) and meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting the existence of iconicity as a general property of human language.
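
    The connectivity result described above is an interaction effect: amygdala activity depending on the product of condition and hub activity. As a rough illustration, the sketch below fits a PPI-style regression with a single hub; the variable names, single-hub simplification, and OLS setup are assumptions for illustration only.

```python
# Hedged sketch of a condition-by-hub-activity interaction (PPI-style) fit.
import numpy as np

def ppi_interaction_beta(amygdala, hub, iconic):
    # iconic: 1 for affective-iconic trials, 0 for non-iconic.
    hub = hub - hub.mean()
    cond = iconic - iconic.mean()
    X = np.column_stack([np.ones_like(hub), hub, cond, hub * cond])
    beta, *_ = np.linalg.lstsq(X, amygdala, rcond=None)
    return beta[3]   # weight on the interaction regressor

rng = np.random.default_rng(1)
hub = rng.standard_normal(200)
iconic = rng.integers(0, 2, 200).astype(float)
amy = 0.4 * hub * iconic + rng.standard_normal(200)  # synthetic data
print(ppi_interaction_beta(amy, hub, iconic))         # recovers ~0.4
```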

    Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders

    The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques provide direct evidence that employing the visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach as innovative methodologies advance.

    The benefit of multisensory integration with biological motion signals

    Assessing the intentions, direction, and velocity of others is necessary for most daily tasks, and such information is often made available by both visual and auditory motion cues. Our great ability to perceive human motion is therefore not surprising. Here, we explore the multisensory integration of cues to biological motion walking speed. After testing for audiovisual asynchronies (visual signals led auditory ones by 30 ms, within a simultaneity temporal window of 76.4 ms), in the main experiment visual, auditory, and bimodal stimuli were compared to a standard audiovisual walker in a velocity discrimination task. The reduction in variance conformed to optimal integration of congruent bimodal stimuli across all subjects. Interestingly, the perceptual judgments were still close to optimal for stimuli at the smallest level of incongruence. Comparison of slopes allows us to estimate an integration window of about 60 ms, which is smaller than that reported for audiovisual speech.

    This work was partly funded by the Portuguese Foundation for Science and Technology (SFRH/BD/36345/2007, PTDC/SAU-BEB/68455/2006, SFRH/BSAB/974/2009) and by the Portugal-Spain Actions PT2009-0186 from the Spanish Government and E-134/10 from the Portuguese Conselho de Reitores das Universidades Portuguesas.
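
    "Optimal integration" here refers to the standard maximum-likelihood cue-combination prediction: each cue is weighted by its inverse variance, and the bimodal variance falls below both unimodal variances. The sketch below computes that prediction; the threshold values plugged in are hypothetical.

```python
# Minimal sketch of the maximum-likelihood (optimal) integration prediction:
# sigma_AV^2 = sigma_A^2 * sigma_V^2 / (sigma_A^2 + sigma_V^2),
# with inverse-variance cue weights.
import numpy as np

def mle_prediction(sigma_a, sigma_v):
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # visual weight
    w_a = 1.0 - w_v                                # auditory weight
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return w_a, w_v, sigma_av

# Hypothetical unimodal discrimination thresholds (arbitrary speed units):
w_a, w_v, sigma_av = mle_prediction(sigma_a=1.2, sigma_v=0.8)
print(w_a, w_v, sigma_av)   # bimodal sigma is below both unimodal sigmas
```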

    Consciousness operates beyond the timescale for discerning time intervals: implications for Q-mind theories and analysis of quantum decoherence in brain

    This paper presents in detail how subjective time is constructed by the brain cortex via the reading of packets of information called "time labels", produced by the right basal ganglia, which act as the brain's timekeeper. Psychophysiological experiments have measured the subjective "time quantum" to be 40 ms and show that consciousness operates beyond that scale, an important result with profound implications for Q-mind theory. Although most current mainstream biophysics research on cognitive processes models the brain as a neural network obeying classical physics, Penrose (1989, 1997) and others have argued that quantum mechanics may play an essential role, and that successful brain simulations can only be performed with a quantum computer. Tegmark (2000) showed that the make-or-break issue for quantum models of mind is whether the relevant degrees of freedom of the brain can be sufficiently isolated to retain their quantum coherence, and he tried to settle the issue with detailed calculations of the relevant decoherence rates. He concluded that the mind is a classical rather than a quantum system; however, his reasoning rests on biologically inconsistent assumptions. Here we present a detailed exposition of the molecular neurobiology and define the dynamical timescale of the cognitive processes linked to consciousness to be 10-15 ps, showing that macroscopic quantum coherent phenomena in the brain are not ruled out and may even provide insight into understanding life, information and consciousness.
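
    The decoherence argument turns on a single comparison, stated explicitly below as a hedged restatement; the notation is assumed rather than taken from the paper.

```latex
% Quantum coherence can be functionally relevant to cognition only if the
% decoherence time is not much shorter than the dynamical timescale of the
% process it is supposed to support:
\tau_{\mathrm{dec}} \gtrsim \tau_{\mathrm{dyn}}
% Tegmark's calculations yielded \tau_{\mathrm{dec}} \ll \tau_{\mathrm{dyn}}
% under the assumption of millisecond-scale neural dynamics; the present
% paper instead places \tau_{\mathrm{dyn}} in the picosecond range,
% which reopens the comparison.
```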

    Social perception and cognition: processing of gestures, postures and facial expressions in the human brain

    Humans are a social species with the internal capability to process social information from other humans. To understand others' behavior and react accordingly, it is necessary to infer their internal states, emotions and aims, which are conveyed by subtle nonverbal bodily cues such as postures, gestures, and facial expressions. This thesis investigates the brain functions underlying the processing of such social information. Studies I and II of this thesis explore the neural basis of perceiving pain from another person's facial expressions by means of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). In Study I, observing another's facial expression of pain activated the affective pain system (previously associated with self-experienced pain) in accordance with the intensity of the observed expression. The strength of the response in the anterior insula was also linked to the observer's empathic abilities. The cortical processing of facial pain expressions advanced from visual to temporal-lobe areas at latencies (around 300-500 ms) similar to those previously shown for emotional expressions such as fear or disgust. Study III shows that perceiving a yawning face is associated with middle and posterior STS activity, and that the contagiousness of a yawn correlates negatively with amygdalar activity. Study IV explored the brain correlates of interpreting social interaction between two members of the same species, in this case human and canine. Observing interaction engaged brain activity in a very similar manner for both species. Moreover, the body- and object-sensitive brain areas of dog experts differentiated interaction from non-interaction in both humans and dogs, whereas in control subjects similar differentiation occurred only for humans. Finally, Study V shows the engagement of the brain area associated with biological motion when participants were exposed to the sounds produced by a single human being walking. The more complex pattern of activation observed with the walking sounds of several persons, however, suggests that as the social situation becomes more complex, so does the brain response. Taken together, these studies demonstrate the roles of distinct cortical and subcortical brain regions in the perception and sharing of others' internal states via facial and bodily gestures, and the connection of brain responses to behavioral attributes.

    Perception of biological motion by form analysis

    Detection of other living beings' movements is a fundamental property of the human visual system. Viewing their movements, categorizing their actions, and interpreting social behaviors like gestures constitute a framework of our everyday lives. These observed actions are complex, and differences among them are rather subtle. However, humans recognize these actions without major effort and without being aware of the complexity of the observed tasks. In point-light walkers, the visual information about the human body is reduced to only a handful of point-lights placed on the major joints of the otherwise invisible body. But even this sparse information does not effectively reduce humans' ability to perceive the performed actions. Neurophysiological and neuroimaging studies have suggested that the movement of the human body is represented in specific brain areas. Nonetheless, the underlying network is still a matter of controversial discussion. To investigate the role of form information, I developed a model and conducted psychophysical experiments using point-light walkers. A widely accepted theory claims that in point-light walkers form information is reduced to a non-usable minimum and that the perception of biological motion is therefore driven by the analysis of motion signals. In my study, I show that point-light walkers indeed contain useful form information. Moreover, I show that temporal integration of this information is sufficient to explain results from psychophysical, neurophysiological, and neuroimaging studies. In opposition to the standard models of biological motion perception, I also show that all results can be explained without the analysis of local motion signals.
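
    A toy sketch of the form-based account argued for here: match each point-light frame against a bank of static posture templates and integrate the match evidence over time, with no local motion signals involved. The template format and squared-distance similarity measure are simplifying assumptions, not the thesis model itself.

```python
# Hedged sketch: form analysis via posture templates plus temporal integration.
import numpy as np

def classify_by_form(frames, templates):
    # frames: (T, J, 2) joint positions over time;
    # templates: dict mapping action label -> (K, J, 2) posture bank.
    evidence = {label: 0.0 for label in templates}
    for frame in frames:
        for label, bank in templates.items():
            # Distance to the best-matching static posture (pure form cue).
            d = np.min(np.sum((bank - frame) ** 2, axis=(1, 2)))
            evidence[label] += -d          # temporal integration of evidence
    return max(evidence, key=evidence.get)
```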
