
    Interpretation of some Yb-based valence-fluctuating crystals as approximants to a dodecagonal quasicrystal

    The hexagonal ZrNiAl-type (space group P-62m) and the tetragonal Mo2FeB2-type (space group P4/mbm) structures, which frequently form in the same Yb-based alloys and exhibit physical properties related to valence fluctuation, can be regarded as approximants of a hypothetical dodecagonal quasicrystal. Using the Pd-Sn-Yb system as an example, a model of the quasicrystal structure has been constructed, whose 5-dimensional crystal (space group P12/mmm, a_DD = 5.66 Å, c = 3.72 Å) consists of four types of acceptance regions located at the following crystallographic sites: Yb [0 0 0 0 0], Pd [1/3 0 1/3 0 1/2], Pd [1/3 1/3 1/3 1/3 0] and Sn [1/2 0 0 1/2 1/2]. In 3-dimensional space, the quasicrystal is composed of three types of columns whose c-projections correspond to a square, an equilateral triangle, and a 3-fold hexagon. These are fragments of two known crystals, the hexagonal α-YbPdSn and the tetragonal Yb2Pd2Sn structures. The model of the hypothetical quasicrystal may serve as a platform for treating, in a unified manner, the heavy-fermion properties of the two types of Yb-based crystals. (19 pages, 6 figures)
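    As a rough illustration of how such a higher-dimensional description is used, the Python sketch below lists the four 5D occupation sites quoted above and performs a generic dodecagonal cut-and-project step. The star-vector angles, the circular acceptance window, and its radius R_CUT are illustrative assumptions, not the acceptance regions of the actual model.

    # Minimal cut-and-project sketch (not the authors' exact construction).
    # SITES is listed for reference; the loop below generates only candidate
    # Yb ([0 0 0 0 0]) positions inside a purely illustrative window.
    import numpy as np

    A_DD = 5.66   # 5D lattice constant along the four dodecagonal axes (Angstrom)
    C = 3.72      # periodic c axis (Angstrom)

    # Occupation sites of the 5D crystal, as fractional coordinates [x1 x2 x3 x4 z]
    SITES = {
        "Yb":   (0.0, 0.0, 0.0, 0.0, 0.0),
        "Pd_1": (1/3, 0.0, 1/3, 0.0, 1/2),
        "Pd_2": (1/3, 1/3, 1/3, 1/3, 0.0),
        "Sn":   (1/2, 0.0, 0.0, 1/2, 1/2),
    }

    # Dodecagonal star vectors: parallel-space angles j*30 deg, perpendicular-space
    # angles 5*j*30 deg (a common convention for 12-fold quasicrystals).
    j = np.arange(4)
    E_PAR  = A_DD * np.stack([np.cos(j * np.pi / 6), np.sin(j * np.pi / 6)], axis=1)
    E_PERP = A_DD * np.stack([np.cos(5 * j * np.pi / 6), np.sin(5 * j * np.pi / 6)], axis=1)

    def project(n, z):
        """Project a 5D lattice point (n1..n4 integer, z along c) to parallel
        (physical) and perpendicular (internal) space coordinates."""
        n = np.asarray(n, dtype=float)
        r_par = n @ E_PAR          # physical-space (x, y)
        r_perp = n @ E_PERP        # internal-space coordinate used for the cut
        return np.append(r_par, z * C), r_perp

    # Keep lattice points whose perpendicular component falls inside an assumed
    # circular acceptance region of radius R_CUT (illustration only).
    R_CUT = 0.7 * A_DD
    points = []
    for n in np.ndindex(5, 5, 5, 5):
        r3d, r_perp = project(np.array(n) - 2, z=0.0)
        if np.linalg.norm(r_perp) < R_CUT:
            points.append(r3d)
    print(f"{len(points)} candidate Yb sites inside the illustrative window")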

    Age differences in the motor control of speech: an fMRI study of healthy aging

    Healthy aging is associated with a decline in cognitive, executive, and motor processes that is concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, both within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At the highest complexity level (high motor complexity and high sequence complexity), age differences were found in both MT data and BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behaviors such as speech to understand the mechanisms of human brain aging.
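    The age-by-MT relationship described above amounts, in essence, to a per-voxel regression of BOLD amplitude on movement time with an age-group interaction term. The sketch below illustrates that kind of analysis on toy data; the data, variable names, and design are hypothetical and do not reproduce the study's actual fMRI pipeline.

    # Hypothetical sketch of a voxelwise regression of BOLD amplitude on movement
    # time (MT) with an age-group interaction; all numbers are toy values.
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 27, 1000                 # 27 adults, toy voxel count
    age_group = rng.integers(0, 2, n_subjects)      # 0 = younger, 1 = older
    mt = rng.normal(600, 80, n_subjects) + 60 * age_group   # longer MT when older
    bold = rng.normal(0, 1, (n_subjects, n_voxels))         # toy BOLD amplitudes

    # Design matrix: intercept, centered MT, group, and the MT x group interaction
    mt_c = mt - mt.mean()
    X = np.column_stack([np.ones(n_subjects), mt_c, age_group, mt_c * age_group])

    # Ordinary least squares for every voxel at once
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    interaction = beta[3]            # per-voxel age difference in the BOLD~MT slope
    print("voxels with the largest age difference in the BOLD~MT slope:",
          np.argsort(np.abs(interaction))[-5:])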

    Audio-tactile interactions and speech perception: comparisons between blind and sighted subjects

    The present study investigated whether manual tactile information from a speaker's face modulates the decoding of speech when audio-tactile perception is compared with audio-only perception. Two groups of congenitally blind and sighted adults were compared. Participants performed a syllable decision task across three conditions: audio-only, congruent audio-tactile, and incongruent audio-tactile. For the auditory modality, the syllables were presented either in background white noise or without noise. Our results demonstrate that manual tactile information relevant to recovering speech gestures modulates auditory speech perception when the acoustic information is degraded, and that these audio-tactile interactions occur similarly in untrained blind and sighted listeners despite possible differences in sensory skills.

    Sensory-motor interactions in speech perception, production and imitation: behavioral evidence from close shadowing, perceptuo-motor phonemic organization and imitative changes.

    Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. In the present study, we combined three classical experimental paradigms to further test perceptuo-motor interactions in both speech perception and production. In a first close-shadowing experiment, auditory and audiovisual syllable identification led to faster oral than manual responses. In a second experiment, participants were asked to produce and to listen to French vowels varying in the height feature, in order to test perceptuo-motor phonemic organization and idiosyncrasies. In a third experiment, online imitative changes in fundamental frequency in relation to acoustic vowel targets were observed in a non-interactive communication situation, during both unintentional and voluntary imitative production tasks. Altogether, our results are in line with a functional coupling between the action and perception speech systems and provide further evidence for the sensory-motor nature of speech representations.

    The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing

    One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and repeat an auditory speech stimulus as quickly as possible. The fact that close shadowing can occur very rapidly, much faster than manual identification of the speech target, is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions, often interpreted within a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed up motor responses in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, either clear or embedded in white noise. Overall, oral responses were faster than manual ones, but they were also less accurate in noise, which suggests that the motor representations evoked by the speech input may be coarse at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. However, no interaction was observed between modality and response type. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision becomes available.

    Interaction between articulatory gestures and inner speech in a counting task

    Interactions between covert and overt orofacial gestures have been poorly studied, apart from old and rather qualitative experiments. The question deserves special interest in the context of the debate between auditory and motor theories of speech perception, where dual tasks may be of great interest. It is shown here that dynamic mandible and lip movements produced by a participant result in strong and stable perturbations of a concurrent inner-speech counting task, whereas static orofacial configurations and static or dynamic manual actions produce no perturbation. This enables the authors to discuss how such orofacial perturbations could be introduced in dual-task paradigms to assess the role of motor processes in speech perception.

    Streaming Target-Speaker ASR with Neural Transducer

    Although recent advances in deep learning have boosted automatic speech recognition (ASR) performance in the single-talker case, it remains difficult to recognize multi-talker speech in which many voices overlap. One conventional approach to this problem is to cascade a speech separation or target speech extraction front-end with an ASR back-end. However, the extra computation cost of the front-end module is a critical barrier to quick response, especially for streaming ASR. In this paper, we propose a target-speaker ASR (TS-ASR) system that implicitly integrates the target speech extraction functionality within a streaming end-to-end (E2E) ASR system, i.e., the recurrent neural network transducer (RNNT). Our system uses an idea similar to that adopted for target speech extraction, but implements it directly at the level of the RNNT encoder. This allows TS-ASR to be realized without placing extra computation costs on the front-end. Note that this study presents two major differences from prior studies on E2E TS-ASR: we investigate streaming models and base our study on Conformer models, whereas prior studies used RNN-based systems and considered only offline processing. We confirm in experiments that our TS-ASR achieves recognition performance comparable to conventional cascade systems in the offline setting, while reducing computation costs and realizing streaming TS-ASR. (Accepted to Interspeech 202)
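    The key architectural idea, integrating target speech extraction inside the streaming encoder rather than in a separate front-end, can be sketched as follows in Python/PyTorch. The concrete layer choice below (a per-channel gate derived from a target-speaker embedding, applied to early features of a causal Transformer used as a stand-in for the Conformer) is an assumption for illustration, not the paper's exact RNNT architecture.

    # Hedged sketch: condition a streaming ASR encoder on a target-speaker
    # embedding so that target speech extraction happens inside the encoder.
    import torch
    import torch.nn as nn

    class SpeakerConditionedEncoder(nn.Module):
        def __init__(self, feat_dim=80, enc_dim=256, spk_dim=192, n_layers=4):
            super().__init__()
            self.front = nn.Linear(feat_dim, enc_dim)
            # Map the target-speaker embedding (e.g. from an enrollment utterance)
            # to a per-channel gate applied to the early encoder features.
            self.spk_gate = nn.Sequential(nn.Linear(spk_dim, enc_dim), nn.Sigmoid())
            # Stand-in for streaming Conformer blocks: a causal Transformer stack.
            layer = nn.TransformerEncoderLayer(enc_dim, nhead=4, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)

        def forward(self, feats, spk_emb, attn_mask=None):
            # feats: (batch, time, feat_dim); spk_emb: (batch, spk_dim)
            x = self.front(feats)
            gate = self.spk_gate(spk_emb).unsqueeze(1)   # (batch, 1, enc_dim)
            x = x * gate                                 # bias encoder toward target speaker
            return self.blocks(x, mask=attn_mask)        # causal mask => streaming-friendly

    # Toy usage: one utterance of 100 frames, with a causal mask for streaming.
    enc = SpeakerConditionedEncoder()
    feats = torch.randn(1, 100, 80)
    spk_emb = torch.randn(1, 192)
    causal = torch.triu(torch.full((100, 100), float("-inf")), diagonal=1)
    out = enc(feats, spk_emb, attn_mask=causal)
    print(out.shape)   # torch.Size([1, 100, 256])

    The causal attention mask keeps the encoder streaming-compatible; in a full RNNT system, the conditioned encoder output would then feed the joint network together with the prediction network.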