
    Hormones and temporal components of speech: sex differences and effects of menstrual cyclicity on speech

    Voice onset time (VOT) is a salient acoustic parameter of speech which signals the "voiced" versus "voiceless" status of plosives in English (e.g. the initial sound in 'bat' vs. the initial sound in 'pat'). As a micro-temporal acoustic parameter, VOT may be sensitive to changes in hormones which may affect the neuromuscular systems involved in speech production. This study adopted a novel approach by investigating the effects of menstrual cycle phase and sex on VOT. VOT data representing the 6 plosives of English (/p b t d k g/) were examined for 7 women (age 20-23 years) at two phases of the menstrual cycle (day 18-25: High Estrogen and Progesterone; day 2-5: Low Estrogen and Progesterone). Results indicated that menstrual cycle phase had a significant interaction with the identity of the plosive (F(5,30) = 5.869, P < .05). In the low hormone phase, there was no significant plosive-by-sex interaction, nor a significant contrast between voiced and voiceless cognates (F(1,10) = .407, P > .05). In contrast, the high hormone phase VOT samples displayed significant plosive-by-sex interactions (F(5,50) = 4.442, P < .005). In addition, significant sex differences were found for the contrasts between cognate voiced and voiceless plosives (F(1,10) = 5.019, P < .05); the women displayed a more marked voiced/voiceless contrast. The findings suggest that ovarian hormones play some role in shaping some temporal components of speech.
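
    A minimal sketch, not the authors' analysis code, of the kind of repeated-measures ANOVA described above: VOT as the dependent variable, with cycle phase and plosive identity as within-subject factors. The file name and column names are hypothetical.

        # Hypothetical long-format data: one row per measured token, with columns
        # subject, phase ('high'/'low'), plosive ('p','b','t','d','k','g'), vot (ms).
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        vot = pd.read_csv("vot_measurements.csv")

        # Average repeated tokens so each subject contributes one value per cell,
        # then fit the within-subject (repeated-measures) ANOVA.
        res = AnovaRM(vot, depvar="vot", subject="subject",
                      within=["phase", "plosive"],
                      aggregate_func="mean").fit()
        print(res.anova_table)  # F and p values for phase, plosive and their interaction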

    The weight of phonetic substance in the structure of sound inventories

    In the research field initiated by Lindblom & Liljencrants in 1972, we illustrate the possibility of giving substance to phonology, predicting the structure of phonological systems with non-phonological principles, be they listener-oriented (perceptual contrast and stability) or speaker-oriented (articulatory contrast and economy). For vowel systems we proposed the Dispersion-Focalisation Theory (DFT; Schwartz et al., 1997b). With the DFT, we can predict vowel systems using two competing perceptual constraints weighted by two parameters, λ and α respectively. The first aims at increasing auditory distances between vowel spectra (dispersion); the second aims at increasing the perceptual salience of each spectrum through formant proximities (focalisation). We also introduced new variants based on concepts from physics, namely the phase space (λ, α) and the polymorphism of a given phase, or superstructures in phonological organisations (Vallée et al., 1999), which allow us to generate 85.6% of the 342 UPSID systems with 3 to 7 vowel qualities. No comparable theory for consonants seems to exist yet. Therefore we present in detail a typology of consonants, and then suggest ways to explain the predominance of plosives over fricatives and of voiceless over voiced consonants by i) comparing them with language acquisition data at the babbling stage and looking at the capacity to acquire rather different linguistic systems in relation with the main degrees of freedom of the articulators; ii) showing that the places "preferred" for each manner are at least partly conditioned by the morphological constraints that facilitate or complicate, make possible or impossible, the needed articulatory gestures, e.g. the complexity of the articulatory control for voicing and the aerodynamics of fricatives. A rather strict coordination between the glottis and the oral constriction is needed to produce acceptable voiced fricatives (Mawass et al., 2000). We determine that the region where the combinations of Ag (glottal area) and Ac (constriction area) values result in a balance between the voice and noise components is indeed very narrow. We thus demonstrate that some of the main tendencies in the phonological vowel and consonant structures of the world's languages can be explained partly by sensorimotor constraints, and argue that phonology can indeed take part in a theory of Perception-for-Action-Control.
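
    The dispersion-focalisation idea lends itself to a compact illustration. Below is a toy sketch, not the actual DFT implementation of Schwartz et al. (1997b): λ weights a dispersion term (inter-vowel auditory distance) and α a focalisation term (intra-vowel formant proximity), and the system with the lower total energy is preferred. The functional forms, the Hz-to-Bark approximation and the example formant values are simplifying assumptions for illustration only.

        import itertools

        def bark(f_hz):
            # Crude Hz-to-Bark approximation (Traunmueller-style), for illustration.
            return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

        def dft_energy(vowels, lambda_=1.0, alpha=0.3):
            """vowels: list of (F1, F2) tuples in Hz; lower energy = preferred system."""
            pts = [(bark(f1), bark(f2)) for f1, f2 in vowels]
            # Dispersion: sum of inverse squared inter-vowel distances (penalises crowding).
            dispersion = sum(
                1.0 / ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
                for a, b in itertools.combinations(pts, 2)
            )
            # Focalisation: reward vowels whose formants come close together
            # (here simplified to F1-F2 proximity).
            focalisation = sum(1.0 / ((f2 - f1) ** 2 + 1e-6) for f1, f2 in pts)
            return lambda_ * dispersion - alpha * focalisation

        peripheral = [(280, 2300), (700, 1200), (320, 800)]   # /i a u/-like system
        crowded    = [(450, 1500), (500, 1400), (550, 1300)]  # crowded central system
        print(dft_energy(peripheral), dft_energy(crowded))    # peripheral scores lower

    Sweeping λ and α in such a cost function then corresponds to exploring the (λ, α) phase space mentioned above.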

    Towards an Integrative Information Society: Studies on Individuality in Speech and Sign

    The flow of information within modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual's ability to handle text or speech input. For the majority of us this presents no problems, but there are some individuals who would benefit from other means of conveying information, e.g. signed information flow. During the last decades, new results from various disciplines have all pointed towards a common background and common processing for sign and speech, and this was one of the key issues that I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, and that is why I wanted to design analogous test batteries for signers based on widely used speech perception tests – to find out whether the results for signers would be the same as in speakers' perception tests. One of the key findings within biology – and more precisely its effects on speech and communication research – is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and it all seems to converge on the hypothesis that all communication has a common core within humans. In this thesis speech and sign are discussed as equal and analogous counterparts of communication, and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries. Furthermore, both production and perception of speech and sign are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the articulatory apparatus has been changed. Normal production is studied both in speech and sign, and the effects of changed articulation are studied with regard to speech. Both of these studies are done using carrier sentences. Furthermore, sign production is also studied by giving the informants the possibility for spontaneous production. The production data from the signing informants is also used as the basis for input to the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied using the informants' answers to forced-choice identification and discrimination tasks. These answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters and Finnish adults with no knowledge of any signed language. This gave a chance to investigate which of the characteristics found in the results were due to the language per se and which were due to the change in modality itself. As the analogous test batteries yielded similar results over different informant groups, some common threads could be observed. Starting from very early on in acquiring speech and sign, the results were highly individual. However, the results were the same within one individual when the same test was repeated. This individuality of results manifested along the same patterns across different language modalities and, on some occasions, across language groups.
As both modalities yield similar answers to analogous study questions, this has led us to providing methods for basic input for sign language applications, i.e. signing avatars. This has also given us answers to questions on the precision of the animation and its intelligibility for users – what are the parameters that govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order to be intelligible. The results also give additional support to the well-known fact that intelligibility is not the same as naturalness. In some cases, as shown within the sign perception test battery design, naturalness decreases intelligibility. This also has to be taken into consideration when designing applications. All in all, the results from each of the test batteries, be they for signers or speakers, yield strikingly similar patterns, which provides yet further support for a common core for all human communication. Thus, we can modify and deepen phonetic framework models for human communication based on the knowledge obtained from the results of the test batteries within this thesis.
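
    As a concrete illustration of how such forced-choice answers can be compared across informant groups and modalities, here is a minimal sketch with hypothetical file and column names; it is not the analysis code used in the thesis.

        import pandas as pd

        # Expected columns: group ('native', 'interpreter', 'non-signer'), participant,
        # task ('identification' or 'discrimination'), stimulus, correct (0/1).
        answers = pd.read_csv("perception_test_answers.csv")

        ident = answers[answers["task"] == "identification"]
        summary = (ident.groupby(["group", "stimulus"])["correct"]
                        .mean()
                        .unstack("group"))
        print(summary)  # proportion of expected identifications per stimulus and group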

    Facial locations in ASL based on production and perception data

    This study tests the phonological distinctiveness of eight facial locations in ASL with both production and perception experiments. This kind of work is crucial because, due to the scarcity of minimal pairs in sign languages, phonemic locations are difficult to determine. Moreover, claims made but not tested by previous theoretical models (Stokoe et al. 1960, Stokoe 1965, Battison et al. 1975, Battison 1978, Friedman 1977, Kegl & Wilbur 1976, Wilbur 1979, Sandler 1989, Brentari 1998) are investigated here, including whether locations that are predicted to be contrastive are indeed distinct (e.g. 'chin' vs. 'mouth'). The specific goal of the first experiment (elicited production) is to determine what the places of articulation are; the aim of the second experiment (perception) is to determine whether these places are contrastive.

    Empirical approaches for investigating the origins of structure in speech

    In language evolution research, the use of computational and experimental methods to investigate the emergence of structure in language is exploding. In this review, we look exclusively at work exploring the emergence of structure in speech, on both a categorical level (what drives the emergence of an inventory of individual speech sounds) and a combinatorial level (how these individual speech sounds emerge and are reused as part of larger structures). We show that computational and experimental methods for investigating population-level processes can be effectively used to explore and measure the effects of learning, communication and transmission on the emergence of structure in speech. We also look at work on child language acquisition as a tool for generating and validating hypotheses for the emergence of speech categories. Further, we review the roles of noise, iconicity and production effects.

    EEG analysis based on dynamic visual stimuli: best practices in analysis of sign language data

    This paper reviews best practices for experimental design and analysis in sign language research using neurophysiological methods, such as electroencephalography (EEG) and other methods with high temporal resolution, and identifies methodological challenges in neurophysiological research on natural sign language processing. In particular, we outline the considerations for generating linguistically and physically well-controlled stimuli, accounting for 1) the layering of manual and non-manual information at different timescales, 2) possible unknown linguistic and non-linguistic visual cues that can affect processing, 3) variability across linguistic stimuli, and 4) predictive processing. Two specific concerns with regard to the analysis and interpretation of observed event-related potential (ERP) effects for dynamic stimuli are discussed in detail. First, we discuss the "trigger/effect assignment problem", which describes the difficulty of determining the time point for calculating ERPs. This issue is related to the problem of determining the onset of a critical sign (i.e., stimulus onset time), and the lack of clarity as to how the border between lexical (sign) and transitional movement (motion trajectory between individual signs) should be defined. Second, we discuss possible differences in the dynamics within signing that might influence ERP patterns and should be controlled for when creating natural sign language material for ERP studies. In addition, we outline alternative approaches to EEG data analysis for natural signing stimuli, such as timestamping the continuous EEG with trigger markers for each potentially relevant cue in the dynamic stimuli. Throughout the discussion, we present empirical evidence for the need to account for the dynamic, multi-channel, and multi-timescale visual signal that characterizes sign languages in order to ensure the ecological validity of neurophysiological research on sign languages.
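
    The trigger/effect assignment problem can be made concrete with a small sketch: the same continuous EEG yields different ERPs depending on which annotated cue (e.g. sign onset versus the end of the transitional movement) is taken as time zero. The sketch below uses plain NumPy and hypothetical variable names; it is not the analysis pipeline proposed in the paper.

        import numpy as np

        def erp(eeg, sfreq, cue_times_s, tmin=-0.2, tmax=0.8):
            """eeg: (n_channels, n_samples) array; cue_times_s: cue onsets in seconds.
            Returns the baseline-corrected average epoch (n_channels, n_epoch_samples)."""
            start, stop = int(tmin * sfreq), int(tmax * sfreq)
            epochs = []
            for t in cue_times_s:
                onset = int(round(t * sfreq))
                if onset + start < 0 or onset + stop > eeg.shape[1]:
                    continue  # skip cues too close to the recording edges
                epoch = eeg[:, onset + start:onset + stop]
                baseline = epoch[:, :-start].mean(axis=1, keepdims=True)  # pre-cue mean
                epochs.append(epoch - baseline)
            return np.mean(epochs, axis=0)

        # Time-lock the same trials to two candidate trigger points and compare:
        # erp_sign_onset = erp(eeg, 1000, sign_onset_times)
        # erp_transition = erp(eeg, 1000, transition_end_times)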

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns in response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work on improving the robustness of speech output.

    Knowledge and Attitudes of Jordanian Dentists toward Speech Language Pathology

    This study was conducted to assess dentists' knowledge of normal speech-language development (NSLD), speech-language disorders (SLD), and speech-language pathology (SLPy), and to determine their general attitudes toward SLPy. A self-administered, web-based questionnaire was emailed to all members of the Jordanian Dental Association Council. 191 completed questionnaires were entered into an Excel sheet and statistically analyzed with IBM SPSS version 20. The respondents demonstrated insufficient knowledge of normal speech-language development and speech-language disorders. Additionally, the majority of respondents (86.8%) reported a general impression that the speech-language pathologist has an important role in a health profession team. However, they did poorly on the normal speech-language development questions (26%) as well as the speech-language disorders questions (18%). There were no statistically significant associations between dentists' knowledge of speech-language pathology and age, gender, years of practice, place of practice or specialty.
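
    As an illustration of the kind of association test reported above (the survey itself was analysed in SPSS), here is a minimal sketch with a hypothetical data file and score cut-off, relating one demographic variable to knowledge level with a chi-square test of independence.

        import pandas as pd
        from scipy.stats import chi2_contingency

        survey = pd.read_csv("dentist_survey.csv")  # assumed columns: gender, knowledge_score
        survey["knowledge"] = pd.cut(survey["knowledge_score"],
                                     bins=[0, 50, 100], labels=["low", "high"])

        table = pd.crosstab(survey["gender"], survey["knowledge"])
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, p={p:.3f}")  # p > .05 would mean no significant association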

    Learning and adaptation from a semiotic perspective

    This paper discusses the relation between learning and adaptation, arguing that the current state of the art in semiotics suggests a continuity between the two. An overview of the relevant semiotic theories reveals an embodied and environmental account of learning, in which language plays an important but nevertheless limited role. Learning and adaptation are seen as inseparable cases of semiotic modelling. Such a construal opens up new pathways towards a non-dualist philosophy and theory of education.