    Decolonial Potential in a Multilingual FYC

    Scholars in rhetoric and composition have questioned to what extent the field can be decolonial, given the gatekeeping role that writing plays in the university. This article examines the decolonial potential of implementing multilingual practices in first-year composition (FYC), enacting what Walter Mignolo calls “epistemic disobedience” by complicating the primacy of English as the language of knowledge-building. I describe a Spanish-English “bilingual” FYC course offered at a private university with a Jesuit Catholic heritage. The course is characterized by a translanguaging approach in which Spanish is presented as a valid language for academic writing. The students’ writing highlights the enduring influence of colonialism, in the form of monolingual ideology, within the linguistically diverse geographical context of Silicon Valley, where the potential of decolonial practices is tempered by the economic power of the tech industry and its hiring practices, which have resulted in low employment of women and minorities relative to both national employment levels and the diversity of the region.

    A review of affective computing: From unimodal analysis to multimodal fusion

    Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to the cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of state-of-the-art multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual, and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of the potential performance improvements of multimodal analysis over unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers to better understand this challenging and exciting research field.
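
    One of the fusion families such reviews typically survey, decision-level (late) fusion, can be illustrated with a minimal sketch: each modality is scored by its own classifier and the class probabilities are combined with per-modality weights. The class labels, weights, and probabilities below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of decision-level (late) fusion: combine per-modality
# class probabilities with a weighted average and pick the argmax.
import numpy as np

CLASSES = ["negative", "neutral", "positive"]  # illustrative label set

def late_fusion(unimodal_probs, weights):
    """Weighted average of per-modality class probabilities."""
    total = np.zeros(len(CLASSES))
    weight_sum = 0.0
    for modality, probs in unimodal_probs.items():
        w = weights.get(modality, 1.0)
        total += w * np.asarray(probs)
        weight_sum += w
    return CLASSES[int(np.argmax(total / weight_sum))]

# Hypothetical unimodal classifier outputs for one video segment.
segment = {
    "text":   [0.2, 0.3, 0.5],
    "audio":  [0.1, 0.6, 0.3],
    "visual": [0.3, 0.4, 0.3],
}
print(late_fusion(segment, weights={"text": 0.5, "audio": 0.3, "visual": 0.2}))
```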

    The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages

    Instruction-tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance on a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly from SM not being adequately represented in any of the existing benchmarks. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families and representing 16 writing scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction-tuned LLMs still struggle to understand SM across various languages, performing close to a random baseline in some cases. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific fine-tuned models, with a gap of 12.19 in SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW. Comment: Accepted by the EMNLP 2023 main conference.
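
    As a rough illustration of the zero-shot setting the benchmark describes, the sketch below prompts an instruction-tuned model with a classification instruction, maps its free-text output back onto the task's label set, and scores with macro-F1. The prompt template, label names, and `generate` stub are assumptions for illustration; this is not the SPARROW evaluation harness itself.

```python
# Minimal sketch of zero-shot evaluation of an instruction-tuned LLM on a
# sociopragmatic classification task (e.g., anti-social language detection).
from sklearn.metrics import f1_score

LABELS = ["not_hateful", "hateful"]  # illustrative label set

def build_prompt(text):
    return (f"Classify the following text as {' or '.join(LABELS)}.\n"
            f"Text: {text}\nLabel:")

def parse_label(generation):
    """Map the model's free-text output back onto the task label set."""
    gen = generation.lower()
    return next((label for label in LABELS if label in gen), LABELS[0])

def zero_shot_eval(examples, generate):
    """`examples` is a list of (text, gold_label) pairs; `generate` is any
    prompt -> completion callable (a hosted LLM or a local model)."""
    gold = [label for _, label in examples]
    pred = [parse_label(generate(build_prompt(text))) for text, _ in examples]
    return f1_score(gold, pred, labels=LABELS, average="macro")
```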

    Multimodality In Tourism Websites: Asheville Versus Charlotte

    Tourism websites promote a particular destination and enable the potential tourist to obtain information about it. Their role in facilitating communication and trade between the destination and the potential tourist lends them to study in the field of professional writing. Tourism websites for North Carolina cities have not previously been analyzed, and little research has focused on multimodality in tourism websites. According to the definition by Cynthia Selfe and Pamela Takayoshi used in this paper, multimodal texts “exceed the alphabetic and may include still and moving images, animations, color, words, music, and sound” (as cited in Lauer, 2009). The following study addresses how multimodality functions in the official tourism websites of Asheville, NC (exploreasheville.com) and Charlotte, NC (charlottesgotalot.com). To do this, the data were analyzed according to (1) multimodality types, (2) Brown’s (2017) motion techniques, and (3) Kress and van Leeuwen’s (2006) realizations. The study concludes that both websites employ multimodality in different ways. The pages with the highest and most diverse forms of multimodality are pages of importance: they emphasize the cities’ special attractions, like the culinary and music scenes for Asheville and the sports scene for Charlotte.

    Cultura-Inspired Intercultural Exchanges: Focus on Asian and Pacific Languages

    Although many online intercultural exchanges have been conducted based on the groundbreaking Cultura model, most to date have been between and among European languages. This volume presents several chapters focused on exchanges involving Asian and Pacific languages. Many of the benefits and challenges of these exchanges are similar to those reported for European languages; however, some of the difficulties reported in the Chinese and Japanese exchanges may be due to the significant linguistic differences between English and East Asian languages. This volume adds to the emerging body of studies of telecollaboration among learners of Asian and Pacific languages.

    Interaction, authenticity and spoken corpora: building teaching materials for adult English language learners

    This study investigated the needs and challenges of adult ELLs in the community college setting in the United States. The study was conducted in Western North Carolina (WNC), where administrators, teachers, and students of three different community colleges were interviewed. Interviews determined the needs and challenges of this group of learners, the language skills they are most interested in acquiring, and how effective current teaching materials are in helping meet their needs. Interviews were transcribed to detect patterns in participant responses. The learners were primarily interested in improving their speaking and listening skills so that they could communicate in their communities in situations they encounter on a regular basis. Results from the interviews, as well as extensive research on effective ELT for adult ELLs, helped establish criteria for teaching materials that would benefit this group of learners, specifically in speaking and listening. Textbook evaluations were created and applied to three textbooks commonly used across all of the community colleges that participated in the study. The evaluations were also applied to a corpus-based textbook, created using the Cambridge Corpus of Spoken North American English and focused on spoken English, for comparison with the other textbooks. The findings of this study suggest that teaching materials, both textbooks and computer programs, need to be created for this group of learners, focusing on the qualities of native spoken English in situations that adult ELLs in the U.S. encounter in their day-to-day lives.

    The Perception of Emotion from Acoustic Cues in Natural Speech

    Knowledge of human perception of emotional speech is imperative for the development of systems that recognize emotion in speech and for emotional speech synthesis. Given the growing trend towards research on spontaneous, real-life data, the aim of the present thesis is to examine human perception of emotion in naturalistic speech. Although there are many available emotional speech corpora, most contain simulated expressions. Therefore, there remains a compelling need to obtain naturalistic speech corpora that are appropriate and freely available for research. In that regard, our initial aim was to acquire suitable naturalistic material and examine its emotional content based on listener perceptions. A web-based listening tool was developed to accumulate ratings from large-scale listening groups. The emotional content present in the speech material was demonstrated by performing perception tests on conveyed levels of Activation and Evaluation. As a result, labels were determined that signified the emotional content and thus contribute to the construction of a naturalistic emotional speech corpus. In line with the literature, the ratings obtained from the perception tests suggested that Evaluation (or hedonic valence) is not identified as reliably as Activation is. Emotional valence can be conveyed through both semantic and prosodic information, and the meaning of one may serve to facilitate, modify, or conflict with the meaning of the other, particularly in naturalistic speech. The subsequent experiments aimed to investigate this by comparing ratings from perception tests of non-verbal speech with those of verbal speech. The method used to render speech non-verbal was low-pass filtering, and suitable filtering conditions were determined by carrying out preliminary perception tests. The results suggested that non-verbal naturalistic speech provides sufficiently discernible levels of Activation and Evaluation. The perception of Activation and Evaluation appears to be affected by low-pass filtering, but the effect is relatively small. Moreover, the results suggest a similar trend in agreement levels between verbal and non-verbal speech. To date, it remains difficult to determine unique acoustic patterns for the hedonic valence of emotion, which may be due to inadequate labels or the incorrect selection of acoustic parameters. This study has implications for the labelling of emotional speech data and the determination of salient acoustic correlates of emotion.
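
    The low-pass filtering used to render speech non-verbal can be sketched as follows. The 400 Hz cutoff and filter order here are illustrative placeholders, since the thesis determined its actual filtering conditions through preliminary perception tests.

```python
# Minimal sketch of low-pass filtering a WAV file so the words become
# largely unintelligible while pitch and intensity contours (prosody)
# are preserved. Cutoff and order are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def lowpass_speech(in_path, out_path, cutoff_hz=400.0, order=5):
    rate, samples = wavfile.read(in_path)
    samples = samples.astype(np.float64)
    b, a = butter(order, cutoff_hz / (rate / 2), btype="low")  # normalized cutoff
    filtered = filtfilt(b, a, samples, axis=0)                 # zero-phase filtering
    wavfile.write(out_path, rate, filtered.astype(np.int16))

# Hypothetical usage:
# lowpass_speech("utterance.wav", "utterance_nonverbal.wav")
```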