143 research outputs found

    Do formant trajectories that obey the laws of physics contribute to better speech perception?

    Physical properties of the speech articulators help shape articulatory and formant trajectories. This study evaluates the role of this shaping in speech perception. We conducted perception tests on synthetic stimuli generated with speech production models accounting for different degrees of physical complexity. Our results do not support the hypothesis that the degree of physical realism in the models influences the perception of naturalness. However, for degraded speech (silent-center speech), significant differences are observed.

    Does physical realism of articulatory modeling improve the perception of synthetic speech?

    This article presents the first step in a process to evaluate the potential impact of the physical properties of the articulators on speech perception. The underlying hypothesis is that articulatory biomechanics help shape articulatory and formant trajectories, which in turn become patterns available for speech perception. We ran perceptual tests on synthetic silent-center stimuli, inspired by earlier studies by Strange and colleagues. Stimuli were generated with a single timing pattern using models incorporating various degrees of physical realism. Our results show that silent-center stimuli generated with a realistic biomechanical model achieve higher identification scores than stimuli from less realistic, kinematic models, when only the listeners' fast reaction times are considered, in order to ensure that only low-level cognitive processing is involved.
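    As an illustration of the silent-center paradigm referenced above, the following minimal Python sketch mutes the vowel nucleus of a recorded syllable while preserving its onset and offset transitions. The file names, boundary times and durations are illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch: build a "silent center" stimulus by muting the vowel
# nucleus of a CVC syllable while keeping its transitions.
# Boundaries and durations below are illustrative, not the study's values.
import numpy as np
import soundfile as sf  # assumed available for WAV I/O; expects mono audio

def silent_center(wave, sr, vowel_on_s, vowel_off_s, keep_ms=50, ramp_ms=5):
    """Zero out the vowel center, keeping keep_ms of transition on each side."""
    out = wave.copy()
    keep = int(sr * keep_ms / 1000)
    ramp = int(sr * ramp_ms / 1000)
    start = int(sr * vowel_on_s) + keep   # end of the preserved onset portion
    stop = int(sr * vowel_off_s) - keep   # start of the preserved offset portion
    if stop <= start:
        return out  # vowel too short to silence anything
    # Short linear ramps avoid clicks at the edit points.
    out[start - ramp:start] *= np.linspace(1.0, 0.0, ramp)
    out[start:stop] = 0.0
    out[stop:stop + ramp] *= np.linspace(0.0, 1.0, ramp)
    return out

wave, sr = sf.read("syllable.wav")  # hypothetical input recording
sf.write("syllable_sc.wav", silent_center(wave, sr, 0.12, 0.30), sr)
```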

    Does visual attention span relate to eye movements during reading and copying?

    This research investigated whether text reading and copying involve visual attention-processing skills. Children in grades 3 and 5 read and copied the same text. We measured eye movements during reading and the number of gaze lifts (GL) during copying. The children were also administered letter-report tasks that estimate the number of letters processed simultaneously; these tasks were designed to assess visual attention span (VA span) abilities. The results for both grades revealed that children who reported more letters, i.e., processed more consonants in parallel, produced fewer rightward fixations during text reading, suggesting they could process more letters at each fixation. They also copied more letters per gaze lift from the same text. Furthermore, a regression analysis showed that VA span predicted variations in copying independently of the influence of reading skills (a sketch of this analysis follows below). The findings support a role for VA span abilities in the early extraction of orthographic information in both reading and copying tasks.
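    The following is a hedged sketch of the kind of hierarchical regression described above, testing whether VA span explains variance in copying beyond reading skill. The dataset and column names are hypothetical, not the study's data.

```python
# Sketch of a hierarchical regression: does VA span predict copying
# performance (letters per gaze lift) over and above reading skill?
# The CSV file and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("copying_study.csv")  # hypothetical per-child dataset

base = smf.ols("letters_per_gaze_lift ~ reading_skill", data=df).fit()
full = smf.ols("letters_per_gaze_lift ~ reading_skill + va_span", data=df).fit()

# F-test on the increment: is the model with VA span reliably better?
f_stat, p_val, _ = full.compare_f_test(base)
print(f"Delta R^2 = {full.rsquared - base.rsquared:.3f}, "
      f"F = {f_stat:.2f}, p = {p_val:.4f}")
```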

    Speech in the mirror? Neurobiological correlates of self speech perception

    Self-awareness and self-recognition during action observation may partly result from a functional matching between action and perception systems. This perception-action interaction enhances the integration between sensory inputs and our own sensory-motor knowledge. We present combined EEG and fMRI studies examining the impact of self-knowledge on multisensory integration mechanisms; more precisely, we investigated this impact during auditory, visual and audio-visual speech perception. Our hypothesis was that hearing and/or viewing oneself talk would facilitate the bimodal integration process and activate sensory-motor maps to a greater extent than observing others. In both studies, half of the stimuli presented the participants' own productions (self condition) and the other half presented an unknown speaker (other condition). For the "self" condition, we recorded videos of each participant producing /pa/, /ta/ and /ka/ syllables. For the "other" condition, we recorded videos of a speaker the participants had never met producing the same syllables. These recordings were then presented in different modalities: auditory only (A), visual only (V), audio-visual (AV) and incongruent audio-visual (AVi, where the audio and video components came from different speakers). In the EEG experiment, 18 participants had to categorize the syllables. In the fMRI experiment, 12 participants had to listen to and/or view the syllables passively.

    In the EEG session, audiovisual interactions were estimated by comparing auditory N1/P2 ERPs during bimodal responses (AV) with the sum of the responses in the A-only and V-only conditions (A+V). The amplitude of P2 ERPs was lower for AV than for A+V. Importantly, N1 latencies were shorter for the "visual-self" condition than for the "visual-other" condition, regardless of signal type. In the fMRI session, the presentation modality had an impact on brain activation: activation was stronger for audio or audiovisual stimuli in the superior temporal auditory regions (A = AV = AVi > V), and for video or audiovisual stimuli in MT/V5 and the premotor cortices (V = AV = AVi > A). In addition, brain activity was stronger in the "self" than in the "other" condition in both the left posterior inferior frontal gyrus and the cerebellum (lobules I-IV).

    In line with previous studies on multimodal speech perception, our results point to the existence of integration mechanisms for auditory and visual speech signals. Critically, they further demonstrate a processing advantage when the perceptual situation involves our own speech production. In addition, hearing and/or viewing oneself talk increased activation in the left posterior IFG and cerebellum, regions generally responsible for predicting the sensory outcomes of action generation. Altogether, these results suggest that viewing our own utterances leads to a temporal facilitation of auditory and visual speech integration, and that processing afferent and efferent signals in sensory-motor areas leads to self-awareness during speech perception.
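    As a concrete illustration of the additive-model test described above (comparing the bimodal AV response against the sum A+V), here is a minimal Python sketch. The data files, sampling rate and P2 latency window are assumptions for illustration, not the study's exact pipeline.

```python
# Sketch of the additive-model test for audiovisual interaction:
# compare the bimodal ERP (AV) against the sum of unimodal ERPs (A + V).
# File names, sampling rate and the P2 window are assumed, not the study's.
import numpy as np
from scipy import stats

sr = 500                          # Hz, assumed EEG sampling rate
t = np.arange(-0.1, 0.5, 1 / sr)  # epoch time axis in seconds

def mean_amplitude(erp, t, window):
    """Mean amplitude per subject within a latency window (e.g., P2)."""
    mask = (t >= window[0]) & (t < window[1])
    return erp[:, mask].mean(axis=1)

# erp_av, erp_a, erp_v: (n_subjects, n_samples) averaged ERPs per condition,
# assumed to come from the preprocessed EEG data.
erp_av, erp_a, erp_v = (np.load(f"erp_{c}.npy") for c in ("av", "a", "v"))

p2_av = mean_amplitude(erp_av, t, (0.150, 0.250))
p2_sum = mean_amplitude(erp_a + erp_v, t, (0.150, 0.250))

# Paired test: lower AV than A+V amplitude indicates sub-additive
# (integrative) processing, as reported for P2.
t_stat, p_val = stats.ttest_rel(p2_av, p2_sum)
print(f"P2 AV vs A+V: t = {t_stat:.2f}, p = {p_val:.4f}")
```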

    Inhibition of cyclo-oxygenase 2 reduces tumor metastasis and inflammatory signaling during blockade of vascular endothelial growth factor

    Vascular endothelial growth factor (VEGF) blockade is an effective therapy for human cancer, yet virtually all neoplasms resume primary tumor growth or metastasize during therapy. Mechanisms of progression have been proposed to include genes that control vascular remodeling and are elicited by hypoperfusion, such as the inducible enzyme cyclooxygenase-2 (COX-2). We have previously shown that COX-2 inhibition by the celecoxib analog SC236 attenuates perivascular stromal cell recruitment and tumor growth. We therefore examined the effect of combined SC236 and VEGF blockade, using the metastasizing orthotopic SKNEP1 model of pediatric cancer. Combined treatment perturbed tumor vessel remodeling and macrophage recruitment, but did not further limit primary tumor growth as compared to VEGF blockade alone. However, combining SC236 and VEGF inhibition significantly reduced the incidence of lung metastasis, suggesting a distinct effect on prometastatic mechanisms. We found that SC236 limited tumor cell viability and migration in vitro, with effects enhanced by hypoxia, but did not change tumor proliferation or matrix metalloproteinase expression in vivo. Gene set enrichment analysis (GSEA) indicated that the addition of SC236 to VEGF inhibition significantly reduced expression of gene sets linked to macrophage mobilization. Perivascular recruitment of macrophages induced by VEGF blockade was disrupted in tumors treated with combined VEGF and COX-2 inhibition. Collectively, these findings suggest that during VEGF blockade COX-2 may restrict metastasis by limiting both prometastatic behaviors in individual tumor cells and mobilization of macrophages to the tumor vasculature.

    Notch and VEGF pathways play distinct but complementary roles in tumor angiogenesis

    Background: Anti-angiogenesis is a validated strategy to treat cancer, with efficacy in controlling both primary tumor growth and metastasis. The role of the Notch family of proteins in tumor angiogenesis is still emerging, but recent data suggest that Notch signaling may function in the physiologic response to loss of VEGF signaling, and thus participate in tumor adaptation to VEGF inhibitors. Methods: We asked whether combining Notch and VEGF blockade would enhance suppression of tumor angiogenesis and growth, using the NGP neuroblastoma model. NGP tumors were engineered to express a Notch1 decoy construct, which restricts Notch signaling, and then treated with either the anti-VEGF antibody bevacizumab or vehicle. Results: Combining Notch and VEGF blockade led to blood vessel regression, increasing endothelial cell apoptosis and disrupting pericyte coverage of endothelial cells. Combined Notch and VEGF blockade did not affect tumor weight, but did additively reduce tumor viability. Conclusions: Our results indicate that Notch and VEGF pathways play distinct but complementary roles in tumor angiogenesis, and show that concurrent blockade disrupts primary tumor vasculature and viability further than inhibition of either pathway alone.

    Perfusion-guided sonopermeation of neuroblastoma: a novel strategy for monitoring and predicting liposomal doxorubicin uptake

    Neuroblastoma (NB) is the most common extracranial solid tumor in infants and children, and imposes significant morbidity and mortality in this population. The aggressive chemoradiotherapy required to treat high-risk NB results in survival of less than 50%, yet is associated with significant long-term adverse effects in survivors. Boosting efficacy and reducing morbidity are therefore key goals of treatment for affected children. We hypothesize that these may be achieved by developing strategies that both focus and limit toxic therapies to the region of the tumor. One such strategy is the use of targeted image-guided drug delivery (IGDD), which is growing in popularity in personalized therapy as a way to simultaneously improve on-target drug deposition and assess drug pharmacodynamics in individual patients. IGDD strategies can utilize a variety of imaging modalities and methods of actively targeting pharmaceutical drugs; however, in vivo imaging in combination with focused ultrasound is one of the most promising approaches already being deployed for clinical applications. Over the last two decades, IGDD using focused ultrasound with microbubble ultrasound contrast agents (UCAs) has been increasingly explored as a method of targeting a wide variety of diseases, including cancer. This technique, known as sonopermeation, mechanically augments vascular permeability, enabling increased penetration of drugs into target tissue. However, to date, methods of monitoring the vascular bioeffects of sonopermeation in vivo are lacking. UCAs are excellent vascular probes in contrast-enhanced ultrasound (CEUS) imaging, and are thus uniquely suited for monitoring the effects of sonopermeation in tumors. Methods: To monitor the therapeutic efficacy of sonopermeation in vivo, we developed a novel system using 2D and 3D quantitative contrast-enhanced ultrasound imaging (qCEUS). 3D tumor volume and contrast enhancement were used to evaluate changes in blood volume during sonopermeation. 2D qCEUS-derived time-intensity curves (TICs) were used to assess reperfusion rates following sonopermeation therapy. Intratumoral doxorubicin (and liposome) uptake in NB was evaluated ex vivo along with associated vascular changes. Results: In this study, we demonstrate that combining focused ultrasound therapy with UCAs can significantly enhance chemotherapeutic payload to NB in an orthotopic xenograft model, by improving delivery and tumoral uptake of long-circulating liposomal doxorubicin (L-DOX) nanoparticles. qCEUS imaging suggests that changes in flow rates are highly sensitive to sonopermeation and could be used to monitor the efficacy of treatment in vivo. Additionally, initial tumor perfusion may be a good predictor of drug uptake during sonopermeation. Following sonopermeation treatment, vascular biomarkers show increased permeability due to reduced pericyte coverage and rapid onset of doxorubicin-induced apoptosis of NB cells, but without damage to blood vessels. Conclusion: Our results suggest that significant L-DOX uptake can occur by increasing tumor vascular permeability with microbubble sonopermeation without otherwise damaging the vasculature, as confirmed by in vivo qCEUS imaging and ex vivo analysis. The use of qCEUS imaging to monitor sonopermeation efficiency and predict drug uptake could potentially provide real-time feedback to clinicians for determining treatment efficacy in tumors, leading to better and more efficient personalized therapies. Finally, we demonstrate how the IGDD strategy outlined in this study could be implemented in human patients using a single case study.
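    To make the TIC analysis concrete, the sketch below fits the standard burst-replenishment model I(t) = A(1 - exp(-beta*t)), widely used in CEUS work, to a post-sonopermeation time-intensity curve to estimate a refill rate. The data file and starting values are placeholders, and this is not necessarily the exact model the study used.

```python
# Sketch: estimate a reperfusion rate from a qCEUS time-intensity curve
# with the burst-replenishment model I(t) = A * (1 - exp(-beta * t)).
# The data file and initial guesses are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def replenishment(t, A, beta):
    """A: plateau intensity (relative blood volume); beta: refill rate (1/s)."""
    return A * (1.0 - np.exp(-beta * t))

# Hypothetical two-column CSV: time (s), contrast intensity (a.u.).
tic = np.loadtxt("tic_post_sonopermeation.csv", delimiter=",")
t, intensity = tic[:, 0], tic[:, 1]

(A, beta), _ = curve_fit(replenishment, t, intensity,
                         p0=(intensity.max(), 0.5))
print(f"plateau A = {A:.2f} a.u., refill rate beta = {beta:.3f} 1/s")
print(f"perfusion proxy A*beta = {A * beta:.3f} a.u./s")
```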