
    The Culture of Student Debate: When Not to Argue

    Today, public speaking as a form of communication calls for the development of active ways for speakers to interact with the audience, with the audience taking on increasingly active and meaningful participation. In the cultural environment of modern higher education, student debates are becoming one of the most relevant and popular activities in universities. Based on the assumption that debate is an effective tool for developing both the communicative and speech skills of future professionals in any field focused on social interaction, this study investigates the specifics of organizing and conducting debates to develop the mastery, professionalism, and leadership potential of students within a course of Russian as a second foreign language (RSL). With this approach in mind, we set the following objectives: 1) to clarify the main approaches to the meaning of oral communication, and the role of debate in it, for the formation of students' basic competencies today; 2) to study the structure of typical units of public speaking; 3) to identify the problems students face in the process of verbal communication; 4) to substantiate the advantages of using debates for developing the general cultural, professional, and communicative qualities and skills a future professional needs. Methodologically, this study was based on work in the theory of communication, the language of business communication, the culture of speech, pedagogy, and methods of teaching Russian to non-native speakers. The research methods included surveys and interviews, as well as content analysis and case studies.
The study has shown that solving the pedagogical problems of fostering responsibility, independence, and a proactive social attitude requires the development of general cultural, research, and communication skills, and debates have proved to be an effective form of practicing these skills in the educational process. The communicative culture and overall preparation of students for their future professional careers entail leadership development, and for this purpose debates are an effective teaching tool.

    The verbal, vocal, and gestural expression of (in)dependency in two types of subordinate constructions

    Based on a video recording of conversational British English, this paper tests, within the framework of Multimodal Discourse Analysis, whether two different subordinate structures are equally integrated into their environment. Subordinate constructions have been described in linguistics as dependent forms elaborating on primary elements of discourse. Although their verbal characteristics have been analysed in depth, few studies have focused on the articulation of the different communicative modalities in their production or provided a qualified picture of their integration. The main hypothesis rests on the capacity of subordinate constructions to show distinct forms of autonomy depending on their syntactic type, thus expressing different degrees of break. Beyond showing that subordinate constructions are not equally dependent on their environment, depending on how speakers use the prosodic and kinetic modalities to express greater (in)dependency, the results suggest that the creation of a break relies mainly on prosodic cues. Changes in the modal configuration throughout the sequence suggest that modalities are dynamic and flexible resources for integrating or demarcating subordinate constructions as a function of their syntactic type.

    Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution

    Using a referent detection paradigm, we examined whether listeners can determine which object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. The trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures, in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.

    Read my lips: Speech distortions in musical lyrics can be overcome (slightly) by facial information

    Understanding the lyrics of many contemporary songs is difficult, and an earlier study [Hidalgo-Barnes, M., Massaro, D.W., 2007. Read my lips: an animated face helps communicate musical lyrics. Psychomusicology 19, 3–12] showed a benefit for lyrics recognition when seeing a computer-animated talking head (Baldi) mouthing the lyrics while hearing the singer. However, the contribution of visual information was relatively small compared to what is usually found for speech. In the current experiments, our goal was to determine why the face appears to contribute less when aligned with sung lyrics than when aligned with normal speech presented in noise. The first experiment compared the contribution of the talking head with the originally sung lyrics versus the case when it was aligned with Festival text-to-speech synthesis (TtS) spoken at the original durations of the song's lyrics. A small and similar influence of the face was found in both conditions. In three further experiments, we compared the presence of the face when the TtS durations were equated with the durations of the original musical lyrics to the case when the lyrics were read with typical TtS durations and this speech was embedded in noise. The results indicated that the unusual, temporally distorted durations of musical lyrics decrease the contribution of the visible speech from the face.

    How Truncating Are ‘Truncating Languages’? Evidence from Russian and German

    Russian and German have previously been described as ‘truncating’, i.e., cutting off target frequencies of phrase-final pitch trajectories when the time available for voicing is compromised. However, supporting evidence is rare and limited to only a few pitch categories. This paper reports a production study conducted to document pitch adjustments in linguistic materials in which the amount of voicing available for the realization of a pitch pattern varies from relatively long to extremely short. Productions of nuclear H+L*, H* and L*+H pitch accents followed by a low boundary tone were investigated in the two languages. The results of the study show that speakers of both ‘truncating languages’ do not utilize truncation exclusively when accommodating to different segmental environments. On the contrary, they employ several strategies – among them truncation, but also compression and temporal re-alignment – to produce the target pitch categories under increasing time pressure. Given that speakers can systematically apply all three adjustment strategies to produce some pitch patterns (H* L% in German and Russian) while not using truncation in others (H+L* L%, particularly in Russian), we question the effectiveness of the typological classification of these two languages as ‘truncating’. Moreover, the phonetic detail of truncation varies considerably, both across and within the two languages, indicating that truncation cannot easily be modeled as a unified phenomenon. The results further suggest that phrase-final pitch adjustments are crucially sensitive to the phonological composition of the tonal string and the status of a particular tonal event (associated vs. boundary tone), and do not apply to falling vs. rising pitch contours across the board, as previously put forward for German. Implications for intonational phonology and prosodic typology are addressed in the discussion.

    Seeing sentence boundaries: the production and perception of visual markers signalling boundaries in signed languages

    Current definitions of prosody present a problem for signed languages since they are based on languages that exist in the oral-aural modality. Despite this, researchers have shown that although signed languages are produced in a different modality, a prosodic system exists whereby a signed stream can be structured into prosodic constituents that are marked by systematic manual and non-manual phenomena (see Nespor & Sandler, 1999; Wilbur, 1999, 2000). However, there is little research examining prosody in British Sign Language (BSL). This thesis represents the first serious attempt to address this gap in the literature by investigating the type and frequency of a number of visual markers at intonational phrase (IP) boundaries in BSL narratives. An analysis of 418 IP boundaries shows that linguistic visual markers are not frequently observed. The most frequent markers observed were single head movements (46%), followed by holds (30%), brow movements (22%), and head nods (21%). This finding suggests that none of the visual markers included in this study can be considered a consistent marker of IP boundaries in BSL narratives. In addition to examining the production of markers at IP boundaries, the perception of boundaries by different groups is investigated in a series of online segmentation experiments. Results from both experiments indicate that boundaries can be identified reliably even when watching an unknown signed language. In addition, an analysis of responses suggests that participants identified boundaries corresponding to the discourse level (such as when a new theme is established). The results suggest that visual markers (to these boundaries at least) are informative in the absence of cues that can only be perceived by native users of a language (such as cues deriving from lexical and grammatical information). Following the presentation of results, directions for future research in this area are suggested.

    A Contrastive Study of the Audio-Visual Prosody of Social Affects in Mandarin Chinese vs. French: Toward an Application for Foreign or Second Language Learning

    In human face-to-face interaction, social affects should be distinguished from emotional expressions, which are triggered by innate and involuntary controls of the speaker: social affects are produced under voluntary control, expressed through audiovisual prosody, and play an important role in the realization of speech acts. They also circulate between interlocutors information about the social context and the social relationship. Prosody is a main vector of social affects, and its cross-language variability is a challenge for language description as well as for foreign language teaching. Thus, the cultural and linguistic specificities of socio-affective prosody in oral communication can be a difficulty, even a risk of misunderstanding, for foreign language (FL) and second language (L2) learners. This thesis is dedicated to intra- and intercultural studies on the perception of the prosody of 19 social affects in Mandarin Chinese and in French, on their cognitive representations, and on the learning of Chinese and French socio-affective prosody by foreign and second language learners. The first task of the thesis was the construction of a large audio-visual corpus of Chinese social affects. 152 sentences, varying in length, tone location, and syntactic structure, were each produced with the 19 social affects. This corpus serves to examine the identification and perceptual confusion of these Chinese social affects by native and non-native listeners, as well as the effect of lexical tone on non-native subjects' identification. Experimental results reveal that the majority of social affects are perceived similarly by native and non-native subjects, although some differences are also observed. Lexical tones cause certain perceptual problems both for Vietnamese listeners (speakers of a tonal language) and for French listeners (speakers of a non-tonal language).
In parallel, an acoustic analysis investigates the production side of prosodic socio-affects in Mandarin Chinese, highlighting the most prominent patterns of acoustic variation and supporting the perceptual results obtained for the same expressions. Then, a study on the conceptual and psycho-acoustic distances between social affects is carried out with Chinese and French subjects. The main results indicate that all subjects share, to a very large extent, the knowledge of these 19 social affects, regardless of their mother tongue, gender, or how the social affects are presented (as concepts or as acoustic realizations). Finally, the last chapter of the thesis is dedicated to differences in the perception of 11 Chinese social affects expressed in different modalities (audio only, video only, and audio-visual) by French learners and native subjects, as well as in the perception of the same French socio-affects by Chinese learners and native subjects. According to the results, the identification of affective expressions depends more on the affective values themselves and on their presentation modality; the subjects' learning level (beginner or intermediate) has no significant effect on identification, within the restricted range of levels studied.