
    How Haptic Size Sensations Improve Distance Perception

    Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, and 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.
    Funding: National Institutes of Health (U.S.) (NIH grant R01EY015261); University of Minnesota (UMN Graduate School Fellowship); National Science Foundation (U.S.) (Graduate Research Fellowship); University of Minnesota (UMN Doctoral Dissertation Fellowship); National Institutes of Health (U.S.) (NIH NRSA grant F32EY019228-02); Ruth L. Kirschstein National Research Service Award
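
    A minimal sketch of the kind of model family described above can make the reasoning concrete. All priors, noise levels, and stimulus values below are illustrative assumptions, not the models or fitted parameters from the study: it assumes Gaussian priors over log object size and log distance, a visual-angle likelihood that confounds the two (theta ≈ size/distance), a haptic size likelihood, and a grid posterior from which a single distance judgment is drawn by posterior sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not the study's fitted values).
mu_s, sig_s = np.log(0.06), 0.4   # prior over log object size (m)
mu_d, sig_d = np.log(2.0), 0.6    # prior over log distance (m)
sig_theta = 0.10                  # noise on log visual angle
sig_hapt = 0.15                   # noise on log haptic size

# True scene and noisy sensations (small-angle approximation: theta = size / distance).
s_true, d_true = 0.06, 2.5
theta_obs = np.log(s_true / d_true) + rng.normal(0, sig_theta)
hapt_obs = np.log(s_true) + rng.normal(0, sig_hapt)

# Grid of size/distance hypotheses in log space.
log_s = np.linspace(mu_s - 4 * sig_s, mu_s + 4 * sig_s, 300)
log_d = np.linspace(mu_d - 4 * sig_d, mu_d + 4 * sig_d, 300)
S, D = np.meshgrid(log_s, log_d, indexing="ij")

def gauss_logpdf(x, mu, sig):
    return -0.5 * ((x - mu) / sig) ** 2 - np.log(sig)

# log posterior = size prior + distance prior + visual-angle likelihood + haptic likelihood.
log_post = (gauss_logpdf(S, mu_s, sig_s) + gauss_logpdf(D, mu_d, sig_d)
            + gauss_logpdf(theta_obs, S - D, sig_theta)
            + gauss_logpdf(hapt_obs, S, sig_hapt))
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior sampling: draw one (size, distance) hypothesis and report its distance.
idx = rng.choice(post.size, p=post.ravel())
print(f"true distance {d_true:.2f} m, sampled judgment {np.exp(D.ravel()[idx]):.2f} m")
```

    In this toy picture, dropping the haptic term from the posterior gives an alternative hypothesis in which size sensations are ignored, and replacing the sampled draw with the posterior mean gives an optimal-estimator alternative; model comparison then asks which variant best predicts the human judgments.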

    Spatial and temporal (non)binding of audiovisual rhythms in sensorimotor synchronisation

    All data are held in a public repository, available at the OSF database (URL access: https://osf.io/2jr48/?view_only=17e3f6f57651418c980832e00d818072).
    Human movement synchronisation with moving objects strongly relies on visual input. However, auditory information also plays an important role, since real environments are intrinsically multimodal. We used electroencephalography (EEG) frequency tagging to investigate the selective neural processing and integration of visual and auditory information during motor tracking, and tested the effects of spatial and temporal congruency between audiovisual modalities. EEG was recorded while participants tracked with their index finger a red flickering dot (rate fV = 15 Hz) oscillating horizontally on a screen. The simultaneous auditory stimulus was modulated in pitch (rate fA = 32 Hz) and lateralised between left and right audio channels to induce perception of a periodic displacement of the sound source. Audiovisual congruency was manipulated in terms of space in Experiment 1 (no motion, same direction or opposite direction), and timing in Experiment 2 (no delay, medium delay or large delay). For both experiments, significant EEG responses were elicited at the fV and fA tagging frequencies. It was also hypothesised that intermodulation products corresponding to the nonlinear integration of visual and auditory stimuli at frequencies fV ± fA would be elicited, due to audiovisual integration, especially in Congruent conditions. However, these components were not observed. Moreover, synchronisation and EEG results were not influenced by the congruency manipulations, which invites further exploration of the conditions that may modulate audiovisual processing and the motor tracking of moving objects.
    We thank Ashleigh Clibborn and Ayah Hammoud for their assistance with data collection. This work was supported by a grant from the Australian Research Council (DP170104322, DP220103047). OML is supported by the Portuguese Foundation for Science and Technology and the Portuguese Ministry of Science, Technology and Higher Education, through the national funds, within the scope of the Transitory Disposition of the Decree No. 57/2016, of 29 August, amended by Law No. 57/2017 of 19 July (Ref.: SFRH/BPD/72710/2010).
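
    To illustrate the frequency-tagging logic (a sketch with assumed parameters, not the study's analysis pipeline): steady-state responses are expected at the tagging frequencies fV = 15 Hz and fA = 32 Hz, and a nonlinear audiovisual interaction would add intermodulation components at fV ± fA, i.e. 17 Hz and 47 Hz, which can be read out from the amplitude spectrum.

```python
import numpy as np

fs, dur = 512.0, 60.0                      # assumed sampling rate and epoch length
t = np.arange(0, dur, 1 / fs)
fV, fA = 15.0, 32.0                        # visual and auditory tagging frequencies

# Toy "EEG": a response at each tagging frequency, a multiplicative interaction term
# that creates intermodulation components at fA - fV and fA + fV, and broadband noise.
rng = np.random.default_rng(1)
sig = (np.sin(2 * np.pi * fV * t) + 0.8 * np.sin(2 * np.pi * fA * t)
       + 0.3 * np.sin(2 * np.pi * fV * t) * np.sin(2 * np.pi * fA * t)
       + rng.normal(0, 1.0, t.size))

amp = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (fV, fA, fA - fV, fA + fV):       # 15, 32, 17, 47 Hz
    k = np.argmin(np.abs(freqs - f))
    # Simple SNR: amplitude at the bin of interest relative to neighbouring bins.
    neighbours = np.r_[amp[k - 12:k - 2], amp[k + 3:k + 13]]
    print(f"{f:5.1f} Hz  amplitude {amp[k]:.4f}  SNR {amp[k] / neighbours.mean():.1f}")
```

    In the reported data the fV and fA peaks were present but the intermodulation peaks were not, which in this toy picture corresponds to the interaction term being absent.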

    Language Factors Modulate Audiovisual Speech Perception. A Developmental Perspective

    [eng] In most natural situations, adults look at the eyes of faces in search of social information (Yarbus, 1967). However, when the auditory information becomes unclear (e.g. speech-in-noise) they switch their attention towards the mouth of a talking face and rely on the redundant audiovisual cues to help them process the speech signal (Barenholtz, Mavica, & Lewkowicz, 2016; Buchan, Paré, & Munhall, 2007; Lansing & McConkie, 2003; Vatikiotis-Bateson, Eigsti, Yano, & Munhall, 1998). Likewise, young infants are sensitive to the correspondence between acoustic and visual speech (Bahrick & Lickliter, 2012), and they also rely on the talker’s mouth during the second half of the first year of life, putatively to help them acquire language by the time they start babbling (Lewkowicz & Hansen-Tift, 2012), and also to aid language differentiation in the case of bilingual infants (Pons, Bosch & Lewkowicz, 2015). The current set of studies provides a detailed examination of the contribution of audiovisual (AV) speech cues to speech processing at different stages of language development, through the analysis of selective attention patterns when processing speech from talking faces. To do so, I compared different linguistic experience factors (i.e. types of bilingualism – distance between bilinguals’ two languages –, language familiarity and language proficiency) that modulate audiovisual speech perception in first language acquisition during infancy (Studies 1 and 2), early childhood (Studies 3 and 4), and in second language (L2) learning during adulthood (Studies 5, 6 and 7). The findings of the present work demonstrate that (1) perceiving speech audiovisually hampers close-language bilingual infants’ ability to discriminate their languages, that (2) 15-month-old and 5-year-old close-language bilinguals rely more on the mouth cues of a talking face than do their distant-language bilingual peers, that (3) children’s attention to the mouth follows a clear temporal pattern: it is maximal at the beginning of the presentation and diminishes gradually as speech continues, and that (4) adults also rely more on the mouth speech cues when they perceive fluent non-native vs. native speech, regardless of their L2 expertise. All in all, these studies shed new light on the field of audiovisual speech perception and language processing by showing that selective attention to a talker’s eyes and mouth is a dynamic, information-seeking process, which is largely modulated by perceivers’ early linguistic experience and the tasks’ demands. These results suggest that selectively attending to the redundant speech cues of a talker’s mouth at the adequate moment enhances speech perception and is crucial for normal language development and speech processing, not only in infancy – during first language acquisition – but also in more advanced language stages in childhood, as well as in L2 learning during adulthood. Ultimately, they confirm that mouth reliance is greater in close bilingual environments, where the presence of two related languages increases the necessity for disambiguation and for keeping language systems separate.
    [cat] Selectively attending to a talker’s mouth helps us benefit from audiovisual information and process the speech signal better when the auditory signal becomes unclear. Likewise, infants also attend to the mouth during the second half of the first year of life, which helps them acquire their language(s). This thesis examines the contribution of the audiovisual signal to speech processing through analyses of selective attention to a talking face. It compares different linguistic factors (types of bilingualism, language familiarity and language proficiency) that modulate audiovisual speech perception in language acquisition during early infancy (Studies 1 and 2), in school-age children (Studies 3 and 4), and in second-language learning during adulthood (Studies 5, 6 and 7). The results show that (1) audiovisual speech perception hampers close-language bilingual infants’ ability to discriminate their languages, that (2) 15-month-old and 5-year-old close-language bilinguals pay more attention to the audiovisual cues of the mouth than distant-language bilinguals, that (3) children’s attention to the talker’s mouth is maximal at the beginning and decreases gradually as speech continues, and that (4) adults also rely more on the audiovisual cues of the mouth when perceiving a non-native language (L2), regardless of their proficiency in it. These studies show that selective attention to a talker’s face is a dynamic, information-seeking process, modulated by early linguistic experience and by the demands of communicative situations. These results suggest that attending to the audiovisual cues of the mouth at the right moments is crucial for normal language development, both in early infancy and in more advanced language stages, as well as in second-language learning. Finally, these results confirm that the strategy of relying on audiovisual cues is used to a greater extent in close bilingual environments, where the presence of two related languages increases the need for disambiguation.

    On the functions, mechanisms, and malfunctions of intracortical contextual modulation

    A broad neuron-centric conception of contextual modulation is reviewed and re-assessed in the light of recent neurobiological studies of amplification, suppression, and synchronization. Behavioural and computational studies of perceptual and higher cognitive functions that depend on these processes are outlined, and evidence that those functions and their neuronal mechanisms are impaired in schizophrenia is summarized. Finally, we compare and assess the long-term biological functions of contextual modulation at the level of computational theory as formalized by the theories of coherent infomax and free energy reduction. We conclude that those theories, together with the many empirical findings reviewed, show how contextual modulation at the neuronal level enables the cortex to flexibly adapt the use of its knowledge to current circumstances by amplifying and grouping relevant activities and by suppressing irrelevant activities.
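
    One way to picture contextual modulation at the neuronal level is as a two-input transfer function in which contextual input can amplify or suppress the response driven by the receptive field but cannot drive a response on its own. The function below is an illustrative choice in that spirit, not the specific formulation used in coherent infomax or by the authors.

```python
import numpy as np

def modulated_response(r, c, k=2.0):
    """Response to receptive-field drive r, modulated by contextual input c.

    Context scales the gain (amplification for congruent context, suppression
    for incongruent context) but contributes nothing when there is no driving
    input (r = 0). Illustrative form only; k sets the modulation strength.
    """
    return r * (1.0 + np.tanh(k * r * c))

print(modulated_response(0.0, 1.0))    # 0.0: context alone produces no response
print(modulated_response(1.0, 0.5))    # > 1.0: congruent context amplifies the drive
print(modulated_response(1.0, -0.5))   # < 1.0: incongruent context suppresses it
```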

    Gravity as a Strong Prior: Implications for Perception and Action

    In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality), or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high, and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
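
    The "strong prior" idea can be made concrete with a simple Gaussian cue-combination sketch (illustrative numbers only, not values from the paper): when the prior over gravity is far more precise than the sensory evidence, the precision-weighted posterior barely moves even for grossly earth-discrepant input.

```python
import numpy as np

def posterior(mu_prior, sig_prior, mu_obs, sig_obs):
    """Precision-weighted combination of a Gaussian prior and a Gaussian likelihood."""
    w_prior = 1 / sig_prior**2
    w_obs = 1 / sig_obs**2
    mu = (w_prior * mu_prior + w_obs * mu_obs) / (w_prior + w_obs)
    sig = np.sqrt(1 / (w_prior + w_obs))
    return mu, sig

# Prior: earth gravity, 9.81 m/s^2, with very high reliability (the strong prior).
# Observation: visually simulated Mars-like gravity, 3.7 m/s^2, with modest reliability.
mu_post, sig_post = posterior(mu_prior=9.81, sig_prior=0.2, mu_obs=3.7, sig_obs=2.0)
print(f"posterior gravity estimate: {mu_post:.2f} m/s^2 (sd {sig_post:.2f})")
# The estimate stays close to 9.81: the strong prior overrules the discrepant evidence.
```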

    Feeling and Speaking: The Role of Sensory Feedback in Speech

    Sensory feedback allows talkers to accurately control speech production, and auditory information is the predominant form of speech feedback. When this sensory stream is degraded, talkers have been shown to rely more heavily on somatosensory information. Furthermore, perceptual speech abilities are greatest when both auditory and visual feedback are available. In this study, we experimentally degraded auditory feedback using a cochlear implant simulation and somatosensory feedback using Orajel. Additionally, we placed a mirror in front of the talkers to introduce visual feedback. Participants were prompted to speak under a baseline, a feedback-degraded, and a visual condition; audiovisual speech recordings were taken for each treatment. These recordings were then used in a playback study to determine the intelligibility of the speech. Acoustically, baseline speech was selected as “easier to understand” significantly more often than speech from either the feedback-degraded or the visual condition. Visually, speech from the visual condition was selected as “easier to understand” significantly less often than speech from the feedback-degraded condition. Listener preference for baseline speech was significantly greater when both auditory and somatosensory feedback were degraded than when only auditory feedback was degraded (Casserly, in prep., 2015). These results suggest that feedback was successfully degraded and that the addition of visual feedback decreased speech intelligibility.
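
    The "selected as easier to understand significantly more often" comparisons imply a paired forced-choice preference analysis; a minimal version of such a test (an assumed sketch with hypothetical counts, not necessarily the analysis used in this study or in Casserly, in prep.) is a binomial test of the preference rate against the 50% chance level.

```python
from scipy.stats import binomtest

# Hypothetical counts: out of 120 paired presentations, listeners chose the
# baseline recording as "easier to understand" 82 times over the degraded one.
n_trials, n_baseline_chosen = 120, 82

result = binomtest(n_baseline_chosen, n_trials, p=0.5, alternative="greater")
print(f"preference for baseline: {n_baseline_chosen / n_trials:.2f}, "
      f"p = {result.pvalue:.4f}")
```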

    Variance based weighting of multisensory head rotation signals for verticality perception

    We tested the hypothesis that the brain uses a variance-based weighting of multisensory cues to estimate head rotation to perceive which way is up. The hypothesis predicts that the known bias in perceived vertical, which occurs when the visual environment is rotated in a vertical plane, will be reduced by the addition of visual noise. Ten healthy participants sat head-fixed in front of a vertical screen presenting an annulus filled with coloured dots, which could rotate clockwise or counter-clockwise at six angular velocities (1, 2, 4, 6, 8, 16°/s) and with six levels of noise (0, 25, 50, 60, 75, 80%). Participants were required to keep a central bar vertical by rotating a hand-held dial. Continuous adjustments of the bar were required to counteract low-amplitude, low-frequency noise that was added to the bar's angular position. During visual rotation, the bias in verticality perception increased over time to reach an asymptotic value. Increases in visual rotation velocity significantly increased this bias, while the addition of visual noise significantly reduced it but did not affect perception of visual rotation velocity. The biasing phenomena were reproduced by a model that uses a multisensory variance-weighted estimate of head rotation velocity combined with a gravito-inertial acceleration (GIA) signal from the vestibular otoliths. The time-dependent asymptotic behaviour depends on internal feedback loops that act to pull the brain's estimate of gravity direction towards the GIA signal. The model's prediction of our experimental data furthers our understanding of the neural processes underlying human verticality perception.
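
    The variance-based weighting hypothesis amounts to an inverse-variance (reliability) weighted average of the visual and vestibular head-rotation signals. The sketch below uses illustrative numbers, not the paper's fitted model: adding visual noise inflates the visual variance, shrinks the visual weight, and therefore reduces the spurious rotation estimate that biases perceived vertical.

```python
import numpy as np

def fused_rotation(omega_vis, sig_vis, omega_vest, sig_vest):
    """Inverse-variance weighted estimate of head rotation velocity (deg/s)."""
    w_vis = 1 / sig_vis**2
    w_vest = 1 / sig_vest**2
    return (w_vis * omega_vis + w_vest * omega_vest) / (w_vis + w_vest)

# The head is stationary (vestibular signal ~0 deg/s) while the visual scene rotates
# at 8 deg/s; the fused estimate is the spurious rotation that biases perceived vertical.
# Illustrative noise values: visual noise in the dots raises the visual sigma.
for label, sig_vis in [("0% dot noise", 2.0), ("80% dot noise", 6.0)]:
    est = fused_rotation(omega_vis=8.0, sig_vis=sig_vis, omega_vest=0.0, sig_vest=3.0)
    print(f"{label}: fused rotation estimate {est:.2f} deg/s")
```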

    Neural Basis of Social and Perceptual Decision-making in Humans

    We make decisions in every moment of our lives. How the brain forms those decisions has been an active topic of inquiry in brain science in recent years. In this dissertation, I discuss our recent neuroimaging studies aimed at uncovering the functional architecture of the human brain during social and perceptual decision-making. Our decisions in social contexts vary tremendously with many factors, including emotion, reward, social norms, treatment by others, cooperation, and dependence on others. We studied the neural basis of social decision-making with a functional magnetic resonance imaging (fMRI) experiment using three economic exchange games with undercompensating, nearly equal, and overcompensating offers. Refusal of undercompensating offers recruited the right dorsolateral prefrontal cortex (dlPFC). Acceptance of overcompensating offers recruited the brain reward pathway consisting of the caudate, the cingulate cortex, and the thalamus. Protesting decisions activated a network consisting of the right dlPFC, the left ventrolateral prefrontal cortex, and the midbrain substantia nigra. These findings suggest that social decisions result from coordination between evaluated fairness norms, self-interest, and reward. On the topic of perceptual decision-making, we contributed to answering how diverse cortical structures are involved in relaying and processing sensory information to make sense of the environment around us. We conducted two fMRI experiments. In the first experiment, we used an audio-visual (AV) synchrony and asynchrony perceptual categorization task. In the second experiment, we used a face-house categorization task; stimuli in this experiment included three levels of noise in the face and house images. In the AV experiment, we investigated the effective connectivity within the salience network, consisting of the anterior insulae and the anterior cingulate cortex. In the face-house experiment, we found that the BOLD activity in the dlPFC, the bidirectional connectivity between the fusiform face area (FFA) and the parahippocampal place area (PPA), and the feedforward connectivity from these regions to the dlPFC increased with the noise level, and thus with the difficulty of decision-making. These results support the view that the FFA-PPA-dlPFC network plays an important role in relaying and integrating competing sensory information to arrive at perceptual decisions about faces and houses.

    Do Visual and Vestibular Inputs Compensate for Somatosensory Loss in the Perception of Spatial Orientation? Insights from a Deafferented Patient

    Bringoux L, Scotto di Cesare C, Borel L, Macaluso T, Sarlegna FR. Do Visual and Vestibular Inputs Compensate for Somatosensory Loss in the Perception of Spatial Orientation? Insights from a Deafferented Patient. Frontiers in Human Neuroscience. 2016;10:181.
    The present study aimed at investigating the consequences of a massive loss of somatosensory inputs on the perception of spatial orientation. The occurrence of possible compensatory processes for external (i.e., object) orientation perception and self-orientation perception was examined by manipulating visual and/or vestibular cues. To that aim, we compared the perceptual responses of a deafferented patient (GL) with those of age-matched Controls in two tasks involving gravity-related judgments. In the first task, subjects had to align a visual rod with the gravitational vertical (i.e., Subjective Visual Vertical: SVV) when facing a tilted visual frame in a classic Rod-and-Frame Test. In the second task, subjects had to report whether they felt tilted when facing different visuo-postural conditions which consisted of very slow pitch tilts of the body and/or the visual surroundings away from vertical. Results showed that, much more than Controls, the deafferented patient was fully dependent on spatial cues issued from the visual frame when judging the SVV. On the other hand, the deafferented patient did not rely at all on visual cues for self-tilt detection. Moreover, the patient never reported any sensation of tilt up to 18 degrees, unlike Controls, hence showing that she did not rely on vestibular (i.e., otolith) signals for the detection of very slow body tilts either. Overall, this study demonstrates that a massive somatosensory deficit substantially impairs the perception of spatial orientation, and that the use of the remaining sensory inputs available to a deafferented patient differs depending on whether the judgment concerns external vs. self-orientation.
