
    Articulatory Tradeoffs Reduce Acoustic Variability During American English /r/ Production

    Acoustic and articulatory recordings reveal that speakers utilize systematic articulatory tradeoffs to maintain acoustic stability when producing the phoneme /r/. Distinct articulator configurations used to produce /r/ in various phonetic contexts show systematic tradeoffs between the cross-sectional areas of different vocal tract sections. Analysis of acoustic and articulatory variabilities reveals that these tradeoffs act to reduce acoustic variability, thus allowing large contextual variations in vocal tract shape; these contextual variations in turn apparently reduce the amount of articulatory movement required. These findings contrast with the widely held view that speaking involves a canonical vocal tract shape target for each phoneme.
    Funding: National Institute on Deafness and Other Communication Disorders (1R29-DC02852-02, 5R01-DC01925-04, 1R03-C2576-01); National Science Foundation (IRI-9310518).
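    A minimal simulation sketch of the tradeoff logic described above (not the authors' model): if the cross-sectional areas of two vocal tract sections covary negatively across phonetic contexts, an acoustic outcome that depends on both varies far less than when the areas vary independently. The linear acoustic mapping and all numeric values below are illustrative assumptions.

```python
# Toy demonstration: negatively covarying ("trading") articulator areas
# reduce acoustic variability relative to independently varying areas.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def acoustic_proxy(a1, a2):
    """Hypothetical linear map from two section areas (cm^2) to a formant (Hz)."""
    return 1600.0 + 120.0 * a1 + 120.0 * a2

sd = 0.3
# Case 1: the two areas vary independently across contexts.
a1_ind = rng.normal(1.0, sd, n)
a2_ind = rng.normal(1.0, sd, n)

# Case 2: a trading relation -- when one area grows, the other shrinks.
a1_tr = rng.normal(1.0, sd, n)
a2_tr = 2.0 - a1_tr + rng.normal(0.0, 0.05, n)

print("acoustic SD, independent:", acoustic_proxy(a1_ind, a2_ind).std())  # ~51 Hz
print("acoustic SD, trading:    ", acoustic_proxy(a1_tr, a2_tr).std())    # ~6 Hz
# Articulatory variability is comparably large in both cases, but the
# trading case keeps the acoustic output nearly stable.
```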

    Learning to Produce Speech with an Altered Vocal Tract: The Role of Auditory Feedback

    Modifying the vocal tract alters a speaker’s previously learned acoustic–articulatory relationship. This study investigated the contribution of auditory feedback to the process of adapting to vocal-tract modifications. Subjects said the word /tɑs/ while wearing a dental prosthesis that extended the length of their maxillary incisor teeth. The prosthesis affected /s/ productions, and the subjects were asked to learn to produce “normal” /s/’s. They alternately received normal auditory feedback and noise that masked their natural feedback during productions. Acoustic analysis of the speakers’ /s/ productions showed that the distribution of energy across the spectra moved toward that of normal, unperturbed production with increased experience with the prosthesis. However, the acoustic analysis did not show any significant differences in learning dependent on auditory feedback. By contrast, when naive listeners were asked to rate the quality of the speakers’ utterances, productions made when auditory feedback was available were rated as closer to the subjects’ normal productions than those made when feedback was masked. The perceptual analysis showed that speakers were able to use auditory information to partially compensate for the vocal-tract modification. Furthermore, utterances produced during the masked conditions also improved over a session, demonstrating that the compensatory articulations were learned and remained available after auditory feedback was removed.
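    One conventional way to quantify "the distribution of energy across the spectra" for /s/ is the spectral centroid (first spectral moment); a sketch follows. The measure, file names, and analysis choices here are assumptions for illustration, not necessarily the paper's exact method.

```python
# Spectral centroid of an /s/ token: frequency-weighted mean of the
# magnitude spectrum. Adaptation predicts the centroid of perturbed
# productions drifts back toward the unperturbed baseline over a session.
import numpy as np
from scipy.io import wavfile

def spectral_centroid(path):
    rate, x = wavfile.read(path)
    x = x.astype(float)
    if x.ndim > 1:
        x = x.mean(axis=1)                      # collapse stereo to mono
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    return (freqs * spectrum).sum() / spectrum.sum()

# Hypothetical file names:
# baseline = spectral_centroid("s_baseline.wav")   # no prosthesis
# early    = spectral_centroid("s_block01.wav")    # first perturbed block
# late     = spectral_centroid("s_block10.wav")    # last perturbed block
# Learning predicts abs(late - baseline) < abs(early - baseline).
```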

    LeviSense: a platform for the multisensory integration in levitating food and insights into its effect on flavour perception

    Eating is one of the most multisensory experiences in everyday life. All five of our senses (i.e. taste, smell, vision, hearing and touch) are involved, even if we are not aware of it. However, while multisensory integration has been well studied in psychology, there is no single platform for systematically testing the effects of different stimuli. This gap leaves unresolved challenges in the design of taste-based immersive experiences. Here, we present LeviSense: the first system designed for multisensory integration in gustatory experiences based on levitated food. Our system enables the systematic exploration of different sensory effects on eating experiences. It also opens up new opportunities for other professionals (e.g., molecular gastronomy chefs) looking for innovative taste-delivery platforms. We describe the design process behind LeviSense and conduct two experiments to test a subset of the crossmodal combinations (i.e., taste and vision, taste and smell). Our results show how different lighting and smell conditions affect perceived taste intensity, pleasantness, and satisfaction. We discuss how LeviSense creates new technical, creative, and expressive possibilities in a series of emerging design spaces within Human-Food Interaction.
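    A hedged sketch of how ratings from such a crossmodal experiment might be summarized; the design, column names, and data file are hypothetical, and the paper's actual statistics may differ.

```python
# Summarize perceived taste intensity across lighting x smell conditions
# and run a simple omnibus test across lighting conditions.
import pandas as pd
from scipy import stats

# Hypothetical columns: participant, lighting, smell, intensity
df = pd.read_csv("levisense_ratings.csv")

print(df.pivot_table(index="lighting", columns="smell",
                     values="intensity", aggfunc="mean"))

groups = [g["intensity"].to_numpy() for _, g in df.groupby("lighting")]
print(stats.f_oneway(*groups))   # do ratings differ across lighting conditions?
```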

    Synchronization of Sound Sources

    Sound generation and interaction are highly complex, nonlinear, and self-organized. As early as 150 years ago, Lord Rayleigh raised the following problem: two nearby organ pipes of different fundamental frequencies sound together almost inaudibly with identical pitch. This effect is now understood qualitatively by modern synchronization theory (M. Abel et al., J. Acoust. Soc. Am., 119(4), 2006). For a detailed, quantitative investigation, we substituted one pipe with an electric speaker. We observe that even minute driving signals force the pipe into synchronization, yielding three decades of synchronization range -- to our knowledge, the largest ever measured. Furthermore, a mutual silencing of the pipe is found, which can be explained by self-organized oscillations and is of use for novel methods of noise abatement. Finally, we develop a specific nonlinear reconstruction method which yields a perfect quantitative match between experiment and theory.
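    The locking behaviour reported above can be illustrated with the textbook Adler phase equation for a weakly driven self-sustained oscillator, dφ/dt = Δω − ε sin φ, which predicts frequency locking whenever |Δω| ≤ ε. The sketch below is a generic illustration from synchronization theory, not the authors' specific reconstruction method.

```python
# Integrate the Adler equation and test for frequency locking: inside the
# Arnold tongue (|d_omega| <= eps) the phase settles to a fixed point and
# stops drifting; outside, it drifts indefinitely.
import numpy as np

def locks(d_omega, eps, T=2000.0, dt=0.01):
    """Euler integration; returns True if the mean phase drift is ~zero."""
    phi = 0.0
    for _ in range(int(T / dt)):
        phi += (d_omega - eps * np.sin(phi)) * dt
    return abs(phi) / T < 1e-3

eps = 1.0
for d_omega in (0.5, 0.99, 1.5):
    print(f"detuning {d_omega}: locked = {locks(d_omega, eps)}")
# -> locked for 0.5 and 0.99 (inside the tongue), drifting for 1.5.
```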

    Auditory communication in domestic dogs: vocal signalling in the extended social environment of a companion animal

    Domestic dogs produce a range of vocalisations, including barks, growls, and whimpers, which are shared with other canid species. The source–filter model of vocal production can be used as a theoretical and applied framework to explain how and why the acoustic properties of some vocalisations are constrained by physical characteristics of the caller, whereas others are more dynamic, influenced by transient states such as arousal or motivation. This chapter thus reviews how and why particular call types are produced to transmit specific types of information, and how such information may be perceived by receivers. As domestication is thought to have caused a divergence in the vocal behaviour of dogs as compared to the ancestral wolf, evidence of both dog–human and human–dog communication is considered. Overall, it is clear that domestic dogs have the potential to acoustically broadcast a range of information, which is available to conspecific and human receivers. Moreover, dogs are highly attentive to human speech and are able to extract speaker identity, emotional state, and even some types of semantic information.
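    To make the source–filter framework concrete, here is a minimal synthesis sketch: the source sets the fundamental frequency F0 (linked above to transient states such as arousal), while the vocal tract filter sets the formants (constrained by caller anatomy, e.g. body size). All parameter values are illustrative assumptions, not measurements from dogs.

```python
# Minimal source-filter synthesis: an impulse train at F0 passed through
# a cascade of second-order formant resonators.
import numpy as np
from scipy.signal import lfilter

RATE = 22_050

def synthesize(f0, formants, dur=0.5, bw=150.0):
    n = int(RATE * dur)
    source = np.zeros(n)
    source[::max(1, int(RATE / f0))] = 1.0        # glottal impulse train at F0
    out = source
    for fc in formants:                            # one resonator per formant
        r = np.exp(-np.pi * bw / RATE)
        theta = 2.0 * np.pi * fc / RATE
        out = lfilter([1.0], [1.0, -2.0 * r * np.cos(theta), r * r], out)
    return out / np.abs(out).max()

growl = synthesize(f0=90.0,  formants=[400.0, 1100.0, 2200.0])   # low F0, low formants
whine = synthesize(f0=600.0, formants=[900.0, 1800.0, 3000.0])   # high F0, high formants
```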

    Roaring high and low: composition and possible functions of the Iberian stag's vocal repertoire

    We provide a detailed description of the rutting vocalisations of free-ranging male Iberian deer (Cervus elaphus hispanicus, Hilzheimer 1909), a geographically isolated and morphologically differentiated subspecies of red deer, Cervus elaphus. We combine spectrographic examinations, spectral analyses and automated classifications to identify different call types, and compare the composition of the vocal repertoire with that of other red deer subspecies. Iberian stags give bouts of roars (and, more rarely, short series of barks) that are typically composed of two different types of calls. Long Common Roars are mostly given at the beginning or at the end of the bout, and are characterised by a high fundamental frequency (F0) resulting in poorly defined formant frequencies but a relatively high amplitude. In contrast, Short Common Roars are typically given in the middle or at the end of the bout, and are characterised by a lower F0 resulting in relatively well defined vocal tract resonances, but low amplitude. While we did not identify entirely Harsh Roars (as described in the Scottish red deer subspecies, Cervus elaphus scoticus), a small percentage of Long Common Roars contained segments of deterministic chaos. We suggest that the evolution of two clearly distinct types of Common Roars may reflect divergent selection pressures favouring either vocal efficiency in high-pitched roars or the communication of body size in low-pitched, high spectral density roars highlighting vocal tract resonances. The clear divergence of the Iberian red deer vocal repertoire from those of other documented European red deer populations reinforces the status of this geographical variant as a distinct subspecies.
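    The "automated classifications" mentioned above could, in spirit, look like the sketch below: separating Long from Short Common Roars using the two acoustic dimensions the abstract highlights, F0 and relative amplitude. The feature values are placeholders, and the authors' actual features and classifier may differ.

```python
# Linear discriminant classification of roar types from two features:
# mean F0 (Hz) and relative amplitude (dB). Values are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.array([
    [210.0,  -3.0], [195.0,  -4.5], [220.0,  -2.0],     # Long Common Roars
    [ 95.0, -18.0], [110.0, -15.0], [ 88.0, -20.0],     # Short Common Roars
])
y = ["long", "long", "long", "short", "short", "short"]

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([[200.0, -5.0], [100.0, -16.0]]))     # -> ['long' 'short']
```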

    Phonetic drift

    This chapter provides an overview of research on the phonetic changes that occur in one’s native language (L1) due to recent experience in another language (L2), a phenomenon known as phonetic drift. Through a survey of empirical findings on segmental and suprasegmental acoustic properties, the chapter examines the features of the L1 that are subject to phonetic drift, the cognitive mechanism(s) behind phonetic drift, and the various factors that influence the likelihood of phonetic drift. In short, virtually all aspects of L1 speech are subject to drift, but different aspects do not drift in the same manner, possibly due to multiple routes of L2 influence coexisting at different levels of L1 phonological structure. In addition to the timescale of these changes, the chapter discusses the relationship between phonetic drift and attrition as well as some of the enduring questions in this area.
    Accepted manuscript: https://drive.google.com/open?id=1eQbh17Z4YsH8vY_XjCHGqi5QChfBKcAZ

    The relation between acoustic and articulatory variation in vowels : data from American and Australian English

    In studies of dialect variation, the articulatory nature of vowels is sometimes inferred from formant values using the following heuristic: F1 is inversely correlated with tongue height and F2 is inversely correlated with tongue backness. This study compared vowel formants and corresponding lingual articulation in two dialects of English, standard North American English and Australian English. Five speakers of North American English and four speakers of Australian English were recorded producing multiple repetitions of ten monophthongs embedded in the /sVd/ context. Simultaneous articulatory data were collected using electromagnetic articulography. Results show that there are significant correlations between tongue position and formants in the direction predicted by the heuristic, but also that the relations implied by the heuristic break down under specific conditions. Articulatory vowel spaces, based on tongue dorsum (TD) position, and acoustic vowel spaces, based on formants, show systematic misalignment, due in part to the influence of other articulatory factors, including lip rounding and tongue curvature, on formant values. Incorporating these dimensions into our dialect comparison yields a richer description and a more robust understanding of how vowel formant patterns are reproduced within and across dialects.
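    A sketch of the heuristic check the study performs, correlating formants with tongue-dorsum (TD) position from the articulography data; the file layout and column names are hypothetical. The heuristic predicts negative correlations: a higher tongue lowers F1, a backer tongue lowers F2.

```python
# Per-dialect correlations between formants and tongue-dorsum position.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical columns: dialect, speaker, vowel, F1, F2, TD_height, TD_backness
df = pd.read_csv("vowel_tokens.csv")

for dialect, grp in df.groupby("dialect"):
    r1, _ = pearsonr(grp["TD_height"], grp["F1"])
    r2, _ = pearsonr(grp["TD_backness"], grp["F2"])
    print(f"{dialect}: r(TD height, F1) = {r1:+.2f}, "
          f"r(TD backness, F2) = {r2:+.2f}")   # heuristic predicts both negative
```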