
    Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition

    Previous work has established that a person's demographics and speech style affect how well speech processing models perform for them. But where does this bias come from? In this work, we present the Speech Embedding Association Test (SpEAT), a method for detecting bias in one type of model used for many speech tasks: pre-trained models. The SpEAT is inspired by word embedding association tests in natural language processing, which quantify intrinsic bias in a model's representations of different concepts, such as race or valence (something's pleasantness or unpleasantness), and which capture the extent to which a model trained on large-scale socio-cultural data has learned human-like biases. Using the SpEAT, we test for six types of bias in 16 English speech models (including 4 models also trained on multilingual data) from the wav2vec 2.0, HuBERT, WavLM, and Whisper model families. We find that 14 or more models reveal positive valence (pleasantness) associations with abled people over disabled people, with European-Americans over African-Americans, with females over males, with U.S.-accented speakers over non-U.S.-accented speakers, and with younger people over older people. Beyond establishing that pre-trained speech models contain these biases, we also show that they can have real-world effects. We compare biases found in pre-trained models to biases in downstream models adapted to the task of Speech Emotion Recognition (SER) and find that in 66 of the 96 tests performed (69%), the group more associated with positive valence by the SpEAT also tends to be predicted as speaking with higher valence by the downstream model. Our work provides evidence that, like text- and image-based models, pre-trained speech-based models frequently learn human-like biases, and that bias found in pre-trained models can propagate to the downstream task of SER.
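    The association test described above follows the WEAT recipe: score each target embedding by its mean cosine similarity to "pleasant" versus "unpleasant" attribute embeddings, then compare the two target groups with a pooled-standard-deviation effect size. A minimal sketch of that effect-size computation (function and variable names are illustrative, not from the paper; the actual SpEAT additionally pools frame-level speech representations and reports permutation-test significance):

    ```python
    import numpy as np

    def cosine(a, b):
        """Cosine similarity between two 1-D embedding vectors."""
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def speat_effect_size(X, Y, A, B):
        """WEAT-style effect size d between two lists of target embeddings
        X, Y (e.g. two speaker groups) and two lists of attribute embeddings
        A (pleasant) and B (unpleasant). d > 0 means X is more associated
        with pleasantness than Y."""
        def s(w):
            # association of one embedding with pleasant vs. unpleasant
            return (np.mean([cosine(w, a) for a in A])
                    - np.mean([cosine(w, b) for b in B]))
        sX = np.array([s(x) for x in X])
        sY = np.array([s(y) for y in Y])
        pooled = np.std(np.concatenate([sX, sY]), ddof=1)
        return (sX.mean() - sY.mean()) / pooled
    ```

    With toy vectors where group X points toward the pleasant attribute and group Y toward the unpleasant one, the effect size comes out positive, mirroring the positive-valence associations reported above.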

    Extending Explainable Boosting Machines to Scientific Image Data

    As the deployment of computer vision technology becomes increasingly common in science, the need for explanations of these systems and their outputs has become a focus of great concern. Driven by the pressing need for interpretable models in science, we propose the use of Explainable Boosting Machines (EBMs) for scientific image data. Inspired by an important application underpinning the development of quantum technologies, we apply EBMs to cold-atom soliton image data tabularized using Gabor Wavelet Transform-based techniques that preserve the spatial structure of the data. In doing so, we demonstrate the use of EBMs for image data for the first time and show that our approach provides explanations that are consistent with human intuition about the data. Comment: 7 pages, 2 figures
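    The tabularization step can be illustrated with a toy Gabor filter bank: correlate the image with a few oriented kernels and flatten the response maps into a single feature row per image, which a tabular model such as an EBM can then consume. A rough NumPy sketch (kernel parameters and function names are illustrative assumptions, not the paper's actual pipeline):

    ```python
    import numpy as np

    def gabor_kernel(size, sigma, theta, lam):
        """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier
        oriented at angle theta with wavelength lam."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / lam))

    def tabularize(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   size=9, sigma=2.0, lam=4.0):
        """Correlate the image with a small Gabor bank and concatenate the
        flattened (valid-mode) response maps into one feature row. Per-pixel
        responses keep a trace of the image's spatial structure."""
        H, W = image.shape
        feats = []
        for th in thetas:
            k = gabor_kernel(size, sigma, th, lam)
            resp = np.array([[np.sum(image[i:i + size, j:j + size] * k)
                              for j in range(W - size + 1)]
                             for i in range(H - size + 1)])
            feats.append(resp.ravel())
        return np.concatenate(feats)
    ```

    For a 16x16 image with a 9x9 kernel and four orientations this yields 4 x 8 x 8 = 256 named features, each tied to an orientation and an image location, which is what lets the EBM's per-feature shape functions be read back as spatially grounded explanations.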

    Assessing Medical Students’, Residents’, and the Public's Perceptions of the Uses of Personal Digital Assistants

    Although medical schools are encouraging the use of personal digital assistants (PDAs), there have been few investigations of attitudes toward their use by students or residents and only one investigation of the public's attitude toward their use by physicians. In 2006, the University of Louisville School of Medicine surveyed 121 third- and fourth-year medical students, 53 residents, and 51 members of the non-medical public about their attitudes toward PDAs. Students were using either the Palm i705 or the Dell Axim X50v; residents were using devices they selected themselves (referred to in the study generically as PDAs). Three survey instruments were designed to investigate attitudes of (a) third- and fourth-year medical students on clinical rotations, (b) Internal Medicine and Pediatrics residents, and (c) volunteer members of the public found in the waiting rooms of three university practice clinics. Both residents and medical students found their devices useful, with more residents (46.8%) than students (16.2%) (p < 0.001) rating PDAs “very useful.” While students and residents generally agreed that PDAs improved the quality of their learning, residents’ responses were significantly higher (p < 0.05) than students’. Residents also responded more positively than students that PDAs made them more effective as clinicians. Although members of the public were generally supportive of PDA use, they appeared to have some misconceptions about how and why physicians were using them. The next phase of research will be to refine the research questions and survey instruments in collaboration with another medical school

    LineConGraphs: Line Conversation Graphs for Effective Emotion Recognition using Graph Neural Networks

    Emotion Recognition in Conversations (ERC) is a critical aspect of affective computing, and it has many practical applications in healthcare, education, chatbots, and social media platforms. Earlier approaches to ERC involved modeling both speaker and long-term contextual information using graph neural network architectures. However, it is preferable to deploy speaker-independent models for real-world applications, and long context windows can create confusion in recognizing the emotion of an utterance in a conversation. To overcome these limitations, we propose novel line conversation graph convolutional network (LineConGCN) and graph attention (LineConGAT) models for ERC. These models are speaker-independent and built using a graph construction strategy for conversations -- line conversation graphs (LineConGraphs). The conversational context in LineConGraphs is short-term -- limited to one previous and one future utterance -- and speaker information is not part of the graph. We evaluate our proposed models on two benchmark datasets, IEMOCAP and MELD, and show that our LineConGAT model outperforms state-of-the-art methods with F1-scores of 64.58% and 76.50%, respectively. Moreover, we demonstrate that embedding sentiment shift information into line conversation graphs further enhances ERC performance in the case of GCN models. Comment: 13 pages, 6 figures
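    The graph construction described above, with each utterance node linked only to its immediate previous and next utterance and no speaker edges, amounts to a path-graph adjacency matrix over the conversation. A minimal sketch (illustrative, not the authors' code; real use would attach utterance embeddings as node features before passing the graph to a GCN or GAT):

    ```python
    import numpy as np

    def line_conversation_graph(n_utterances):
        """Adjacency matrix of a line conversation graph: node i (the i-th
        utterance) is connected only to nodes i-1 and i+1, so context is
        limited to one previous and one future utterance. Speaker identity
        is deliberately absent, keeping the graph speaker-independent."""
        A = np.zeros((n_utterances, n_utterances), dtype=int)
        for i in range(n_utterances - 1):
            A[i, i + 1] = 1
            A[i + 1, i] = 1
        return A
    ```

    For a 4-utterance conversation this produces 3 undirected edges; interior utterances have degree 2 and the first and last utterance have degree 1, which is exactly the short-term context window the abstract describes.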