
    A neural marker for social bias toward in-group accents

    Accents provide information about the speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: while repetition of own-group accents elicited an enhanced neural response, repetition of the other group's accent resulted in the reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
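    The logic of the adaptation design can be made concrete with a toy calculation. The Python sketch below, using invented response values rather than the study's data, computes the repetition effect for each accent type and the listener-group-by-repetition interaction that the abstract reports; it illustrates the analysis logic only, not the authors' pipeline.

    import numpy as np

    # Hypothetical mean responses (arbitrary units) in a region such as the amygdala.
    # Rows: accent heard (own group, other group); columns: first presentation, repetition.
    means = np.array([
        [1.00, 1.20],   # own-group accent: enhanced response on repetition
        [1.00, 0.80],   # other-group accent: classical adaptation (reduced response)
    ])

    repetition_effect = means[:, 1] - means[:, 0]              # per-accent repetition effect
    interaction = repetition_effect[0] - repetition_effect[1]  # listener group x repetition
    print("repetition effects (own, other):", repetition_effect)
    print("interaction term:", interaction)                    # non-zero: adaptation depends on the listener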

    Cross-Linguistic Influence in the Bilingual Mental Lexicon: Evidence of Cognate Effects in the Phonetic Production and Processing of a Vowel Contrast.

    The present study examines cognate effects in the phonetic production and processing of the Catalan back mid-vowel contrast (/o/-/ɔ/) by 24 early and highly proficient Spanish-Catalan bilinguals in Majorca (Spain). Participants completed a picture-naming task and a forced-choice lexical decision task in which they were presented with either words (e.g., /bɔsk/ "forest") or non-words based on real words but with the other member of the mid-vowel pair in stressed position (*/bosk/). The same cognate and non-cognate lexical items were included in the production and lexical decision experiments. The results indicate that even though these early bilinguals maintained the back mid-vowel contrast in their productions, they had great difficulty identifying non-words and real words based on the identity of the Catalan mid-vowel. The analyses revealed language dominance and cognate effects: Spanish-dominants exhibited higher error rates than Catalan-dominants, and production and lexical decision accuracy were also affected by cognate status. The present study contributes to the discussion of the organization of early bilinguals' dominant and non-dominant sound systems, and proposes that exemplar-theoretic approaches can be extended to include bilingual lexical connections that account for the interactions between the phonetic and lexical levels of early bilingual individuals.
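    A minimal sketch of the kind of error-rate comparison reported above, with invented trial data (the column names and values are hypothetical, not the study's data):

    import pandas as pd

    trials = pd.DataFrame({
        "dominance": ["Catalan", "Catalan", "Spanish", "Spanish"] * 2,
        "cognate":   ["cognate", "non-cognate"] * 4,
        "correct":   [1, 1, 0, 1, 1, 0, 0, 1],   # 1 = correct lexical decision
    })

    # Error rate per language-dominance x cognate-status cell
    error_rates = 1 - trials.groupby(["dominance", "cognate"])["correct"].mean()
    print(error_rates.unstack())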

    Brain Dynamics across levels of Organization

    After presenting evidence that the electrical activity recorded from the brain surface can reflect metastable state transitions of neuronal configurations at the mesoscopic level, I will suggest that their patterns may correspond to the distinctive spatio-temporal activity of the Dynamic Core (DC) and the Global Neuronal Workspace (GNW), respectively, in the models of the Edelman group on the one hand and of Dehaene-Changeux on the other. In both cases, the recursively reentrant activity flow in intra-cortical and cortical-subcortical neuronal loops plays an essential and distinct role. Reasons will be given for viewing the temporal characteristics of this activity flow as a signature of Self-Organized Criticality (SOC), notably in reference to the dynamics of neuronal avalanches. This point of view enables the use of statistical-physics approaches for exploring the phase transitions, scaling, and universality properties of the DC and the GNW, with relevance to the macroscopic electrical activity in EEG and EMG.
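    The avalanche analysis referred to above has a standard empirical form: segment activity into avalanches and test whether their sizes follow a heavy-tailed, approximately power-law distribution. The following Python sketch illustrates this on synthetic data; the surrogate signal and threshold are illustrative only and do not reproduce the modelling discussed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    activity = rng.pareto(1.5, size=20_000)          # heavy-tailed surrogate signal
    active = activity > np.median(activity)          # binarize around the median

    # An avalanche = a maximal run of consecutive active bins; its size = summed activity.
    sizes, current = [], 0.0
    for a, x in zip(active, activity):
        if a:
            current += x
        elif current > 0:
            sizes.append(current)
            current = 0.0
    if current > 0:
        sizes.append(current)

    sizes = np.array(sizes)
    # Crude log-log slope of the complementary CDF as a power-law indicator
    s = np.sort(sizes)
    ccdf = 1.0 - np.arange(1, len(s) + 1) / len(s)
    mask = (s > 0) & (ccdf > 0)
    slope = np.polyfit(np.log(s[mask]), np.log(ccdf[mask]), 1)[0]
    print(f"{len(sizes)} avalanches, CCDF log-log slope ~ {slope:.2f}")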

    Categorization of regional and foreign accent in 5- to 7-year-old British children

    This study examines children's ability to detect accent-related information in connected speech. British English children aged 5 and 7 years were asked to distinguish their home accent from an Irish accent or a French accent in a sentence categorization task. Using a preliminary accent rating task with adult listeners, it was first verified that the level of accentedness was similar across the two unfamiliar accents. Results showed that whereas the younger children performed just above chance level in this task, the 7-year-olds could reliably distinguish these variations of their own language, but were significantly better at detecting the foreign accent than the regional accent. These results extend and replicate a previous study (Girard, Floccia, & Goslin, 2008) in which it was found that 5-year-old French children could detect a foreign accent better than a regional accent. The factors underlying the relative lack of awareness of a regional accent as opposed to a foreign accent in childhood are discussed, especially the amount of exposure, the learnability of both types of accent, and a possible difference in the amount of vowel versus consonant variability, for which acoustic measures of vowel formants and plosive voice onset time are provided. © 2009 The International Society for the Study of Behavioural Development.
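    For readers unfamiliar with the two acoustic measures mentioned at the end of the abstract, the sketch below shows how vowel formants and plosive voice onset time (VOT) are typically obtained in Python with the praat-parselmouth package; the file name, time points, and annotation values are placeholders, not the study's materials.

    import parselmouth

    snd = parselmouth.Sound("speaker01_sentence.wav")   # hypothetical recording

    # Vowel formants: Burg analysis, then F1/F2 at the vowel midpoint (0.50 s here).
    formants = snd.to_formant_burg(time_step=0.01)
    vowel_midpoint = 0.50
    f1 = formants.get_value_at_time(1, vowel_midpoint)
    f2 = formants.get_value_at_time(2, vowel_midpoint)
    print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")

    # Plosive VOT: interval from the release burst to the onset of voicing,
    # here taken from hypothetical manually annotated time points.
    burst_time, voicing_onset = 0.312, 0.358
    vot_ms = (voicing_onset - burst_time) * 1000
    print(f"VOT = {vot_ms:.0f} ms")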

    Semantic form as interface

    The term interface has had a remarkable career over the past several decades, motivated largely by its use in computer science. Although the concept of a "surface common to two areas" (Oxford Advanced Learner's Dictionary, 1980) is intuitively clear enough, the range of its application is neither sharp nor well defined: a "common surface" is open to a wide range of interpretations.

    Judgements of a speaker’s personality are correlated across differing content and stimulus type

    It has previously been shown that first impressions of a speaker's personality, whether accurate or not, can be judged from short utterances of vowels and greetings, as well as from prolonged sentences and readings of complex paragraphs. From these studies, it is established that listeners' judgements are highly consistent with one another, suggesting that different people judge personality traits in a similar fashion, with three key personality traits being related to measures of valence (associated with trustworthiness), dominance, and attractiveness. Yet, particularly in voice perception, limited research has established the reliability of such personality judgements across stimulus types of varying lengths. Here we investigate whether first impressions of trustworthiness, dominance, and attractiveness of novel speakers are related when a judgement is made on hearing both one word and one sentence from the same speaker. Second, we test whether what is said, that is, the content, influences the stability of personality ratings. Sixty Scottish voices (30 female) were recorded reading two texts: one of ambiguous content and one with socially relevant content. One word (~500 ms) and one sentence (~3000 ms) were extracted from each recording for each speaker. 181 participants (138 female) rated either male or female voices across both content conditions (ambiguous, socially relevant) and both stimulus types (word, sentence) for one of the three personality traits (trustworthiness, dominance, attractiveness). Pearson correlations showed that personality ratings for words and sentences were strongly correlated, with no significant influence of content. In short, when establishing an impression of a novel speaker, judgements of three key personality traits are highly related whether you hear one word or one sentence, irrespective of what the speaker is saying. This finding is consistent with initial personality judgements serving as elucidators of approach or avoidance behaviour, without modulation by time or content. All data and sounds are available on OSF (osf.io/s3cxy).
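    The core analysis is a Pearson correlation between per-speaker ratings of the word stimuli and of the sentence stimuli. A minimal Python sketch with invented ratings (not the study's data):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n_speakers = 60
    trait = rng.normal(4.0, 1.0, n_speakers)                   # latent trustworthiness impression
    word_ratings = trait + rng.normal(0, 0.5, n_speakers)      # mean rating per speaker, word stimuli
    sentence_ratings = trait + rng.normal(0, 0.5, n_speakers)  # mean rating per speaker, sentence stimuli

    r, p = pearsonr(word_ratings, sentence_ratings)
    print(f"word-sentence correlation: r = {r:.2f}, p = {p:.3g}")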

    Access to recorded interviews: A research agenda

    Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state of the art for key component technologies. A large number of important research issues are identified, and from that set of issues a coherent research agenda is proposed.

    Employing Emotion Cues to Verify Speakers in Emotional Talking Environments

    Usually, people talk neutrally in environments free of abnormal talking conditions such as stress and emotion. Other emotional conditions, such as happiness, anger, and sadness, can also affect a person's speaking tone, and such emotions are directly affected by the patient's health status. In neutral talking environments speakers can be verified easily; in emotional talking environments they cannot. Consequently, speaker verification systems do not perform as well in emotional talking environments as they do in neutral ones. In this work, a two-stage approach has been employed and evaluated to improve speaker verification performance in emotional talking environments. This approach employs speaker emotion cues (a text-independent and emotion-dependent speaker verification problem) based on both Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. The approach comprises two cascaded stages that combine and integrate an emotion recognizer and a speaker recognizer into one recognizer. The architecture has been tested on two different and separate emotional speech databases: our collected database and the Emotional Prosody Speech and Transcripts database. The results of this work show that the proposed approach gives promising results, with a significant improvement over previous studies and other approaches such as the emotion-independent speaker verification approach and the emotion-dependent speaker verification approach based completely on HMMs.
    Comment: Journal of Intelligent Systems, Special Issue on Intelligent Healthcare Systems, De Gruyter, 201
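    The cascade structure described above can be sketched as follows. In this simplified Python illustration, stage one picks the most likely emotional state of the test utterance and stage two scores the speaker claim against an emotion-dependent model; simple Gaussian scorers stand in for the paper's HMM and SPHMM classifiers, and all names, features, and thresholds are invented.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(2)
    EMOTIONS = ["neutral", "happy", "angry", "sad"]

    def fit_gaussian(frames):
        # Diagonal-covariance Gaussian over feature frames (stand-in for an HMM/SPHMM).
        return multivariate_normal(mean=frames.mean(axis=0),
                                   cov=np.diag(frames.var(axis=0) + 1e-6))

    # Hypothetical training data: MFCC-like frames per emotion and per (speaker, emotion).
    emotion_models = {e: fit_gaussian(rng.normal(i, 1.0, (200, 12)))
                      for i, e in enumerate(EMOTIONS)}
    speaker_models = {("alice", e): fit_gaussian(rng.normal(i + 0.2, 1.0, (200, 12)))
                      for i, e in enumerate(EMOTIONS)}

    def verify(claimed_speaker, frames, threshold=-20.0):
        # Stage 1: recognize the emotional state of the utterance.
        emotion = max(EMOTIONS, key=lambda e: emotion_models[e].logpdf(frames).sum())
        # Stage 2: verify the claim against that speaker's emotion-dependent model.
        score = speaker_models[(claimed_speaker, emotion)].logpdf(frames).mean()
        return emotion, score, bool(score > threshold)

    test_frames = rng.normal(1.2, 1.0, (150, 12))   # utterance by "alice", roughly "happy"
    print(verify("alice", test_frames))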