
    Causality influences speech about manner of motion in Italian

    Different languages express manner and path of motion in distinct ways. Some languages, such as English, express manner and path of motion in a single clause; these are called Satellite-framed languages. Other languages, called Verb-framed languages (e.g., Italian), usually convey manner and path of motion in two separate clauses. Previous studies on English showed that when the manner of motion caused the path movement (manner-causal), speakers used the Satellite-framed construction typical of their language. However, English speakers used more Verb-framed clauses when the manner of motion did not cause the path of motion (manner-incidental). This study tests whether Italian speakers would use more Satellite-framed verbs with manner-causal or with manner-incidental events. Our results showed that Italian speakers were more likely to produce Satellite-framed verbs with manner-causal than with manner-incidental motion events, providing evidence against the linguistic relativity hypothesis.

    Guidelines and recommendations for teaching the pronunciation of a foreign language

    Intelligible speech (i.e., speech that an interlocutor can easily understand) is a realistic target for learners of a foreign language, certainly more so than accent-free speech. This paper reviews the most recent research on the perception and production of both segmental (e.g., speech sounds) and suprasegmental (e.g., stress, rhythm, tone, intonation) characteristics by speakers of a second language (L2) learned in a classroom. Researchers and teachers have suggested numerous ways to apply technology to facilitate the learning of L2 pronunciation. However, many teachers still feel insecure about methods of teaching pronunciation, and the idea of using computers, mobile devices, or other technologies in the classroom can sometimes seem intimidating. In this paper, we look at technology by focusing first on pedagogical tasks and then on choosing the most effective support tools for each, so as to achieve the best results for both teachers and students in the classroom.

    Are Emotional Displays an Evolutionary Precursor to Compositionality in Language?

    Compositionality is a basic property of language, spoken and signed, according to which the meaning of a complex structure is determined by the meanings of its constituents and the way they combine (e.g., Jackendoff, 2011 for spoken language; Sandler, 2012 for constituents conveyed by face and body signals in sign language; Kirby & Smith, 2012 for the emergence of compositionality). Here we seek the foundations of this property in a more basic, and presumably prior, form of communication: the spontaneous expression of emotion. To this end, we ask whether features of facial expressions and body postures are combined and recombined to convey different complex meanings in extreme displays of emotion. There is evidence that facial expressions are processed in a compositional fashion (Chen & Chen, 2010). In addition, facial components such as nose wrinkles or eye opening elicit systematic confusion in decoding facial expressions of disgust and anger, and of fear and surprise, respectively (Jack et al., 2014), suggesting that other co-occurring signals contribute to their interpretation. In spontaneous emotional displays of athletes, the body – and not the face – better predicts participants’ correct assessments of victory and loss pictures as conveying positive or negative emotions (Aviezer et al., 2012), suggesting at least that face and body make different contributions to interpretations of the displays. Taken together, such studies lead to the hypothesis that emotional displays are compositional: each signal component, or possibly each specific cluster of components (Du et al., 2014), may have its own interpretation and make a contribution to the complex meaning of the whole. On the assumption that emotional displays are evolutionarily older than language, our research program aims to determine whether the crucial property of compositionality is indeed present in communicative displays of emotion.

    The Modulation of Cooperation and Emotion in Dialogue: The REC Corpus

    In this paper we describe the Rovereto Emotive Corpus (REC), which we collected to investigate the relationship between emotion and cooperation in dialogue tasks. This is an area in which many questions remain open. One of the main open issues is the annotation of so-called “blended” emotions and their recognition. Usually, there is low agreement among raters in annotating emotions and, surprisingly, emotion recognition is higher in a condition of modality deprivation (i.e., only the acoustic or only the visual modality versus a bimodal display of emotion). Because of these previous results, we collected a corpus in which “emotive” tokens are identified during the recordings by psychophysiological indexes (ElectroCardioGram and Galvanic Skin Conductance). The output values of these indexes allow a general detection of the arousal of each emotion. After this selection we will annotate the emotive interactions with our multimodal annotation scheme, computing a kappa statistic on the annotation results to validate our coding scheme. In the near future, a logistic regression on the annotated data will be performed to find correlations between cooperation and negative emotions. A final step will be an fMRI experiment on the recognition of blended emotions from face displays.
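    The kappa validation step described above can be sketched as follows. This is a minimal illustration of Cohen's kappa for two raters; the labels and data are hypothetical, not drawn from the REC annotations:

```python
# Cohen's kappa: inter-rater agreement on categorical labels, corrected
# for the agreement expected by chance. Hypothetical sketch only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa for two raters' parallel label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical emotion labels for ten "emotive" tokens from two annotators.
a = ["anger", "joy", "anger", "neutral", "joy", "anger", "neutral", "joy", "anger", "joy"]
b = ["anger", "joy", "neutral", "neutral", "joy", "anger", "neutral", "joy", "anger", "anger"]
print(round(cohens_kappa(a, b), 3))  # prints 0.697
```

    Values above roughly 0.6 are conventionally read as substantial agreement, which is the kind of threshold a coding-scheme validation would target.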

    Computational Modeling of (un)Cooperation: The Role of Emotions

    The philosopher H. P. Grice was the first to highlight the extent to which our ability to communicate effectively depends on speakers acting cooperatively. This tendency toward cooperation in language use, recognized since Grice’s William James lectures, has been a key tenet of subsequent theorizing in pragmatics. Yet it is also clear that there are limits to the extent to which people cooperate: theoretical and empirical studies of the Prisoner’s Dilemma have shown that people prefer to cooperate if the other party cooperates, but not otherwise. This suggests that in language use, as well, the level of cooperation depends on the other person’s cooperativeness. So far, however, it has proven remarkably difficult to test this prediction, because it is difficult to analyze cooperation and communicative style objectively, and the schemes proposed so far for, e.g., non-verbal cues to cooperation tend to have low reliability. In this study the existence of a negative correlation between emotions and linguistic cooperation is demonstrated for the first time, thanks to newly developed methods for analyzing cooperation and facial expressions. The heart rate and facial expressions of the participants in a cooperative task were recorded after uses of cooperative and uncooperative language; facial expressions and the level of linguistic cooperation in each utterance were classified with high reliability. As predicted, very high negative correlations were observed between heart rate and cooperation, and facial expressions were found to be highly predictive of a participant’s level of cooperation. Our results shed light on a crucial aspect of communication, and our methods may be applicable to research on other aspects of human interaction as well.
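    A negative correlation of the kind reported above can be sketched with a plain Pearson coefficient. The per-utterance measurements below are hypothetical, invented purely to illustrate the computation:

```python
# Pearson correlation between heart rate and a cooperation score.
# Hypothetical data: a negative r means higher arousal, lower cooperation.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical per-utterance heart rate (bpm) and cooperation scores.
heart_rate = [68, 72, 75, 80, 85, 90]
cooperation = [0.9, 0.8, 0.7, 0.5, 0.4, 0.2]
r = pearson_r(heart_rate, cooperation)
print(r < 0)  # prints True: cooperation falls as heart rate rises
```

    In practice one would also report a significance test (e.g., via `scipy.stats.pearsonr`) rather than the bare coefficient.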

    The expression of emotions in chat, forums, and e-learning

    This work analyzes some forms of communication typical of mediated interaction, including emoticons, ellipsis dots, and exclamation marks, which in these forms of language compensate for the lack of the prosodic and intonational elements that convey emotional information in speech, and shows the differences and similarities in their use in synchronous and asynchronous communication.

    Modifications of Speech Articulatory Characteristics in Emotive Speech

    The aim of this research is a phonetic-articulatory description of emotive speech, achievable by studying labial movements, which are the product of compliance with both phonetic-phonological constraints and the lip configuration required for the visual encoding of emotions. In this research we analyse the interaction between the labial configurations peculiar to six emotions (anger, disgust, joy, fear, surprise, and sadness) and the articulatory lip movements defined by phonetic-phonological rules, specific to the stressed vowel /ˈa/ and the consonants /b/ and /v/.