COMPACT REPRESENTATIONS OF UNCERTAINTY IN CLUSTERING
Flat clustering and hierarchical clustering are two fundamental tasks, often used to discover meaningful structures in data, such as subtypes of cancer, phylogenetic relationships, taxonomies of concepts, and cascades of particle decays in particle physics. When multiple clusterings of the data are possible, it is useful to represent uncertainty in clustering through various probabilistic quantities, such as the distribution over partitions or tree structures, and the marginal probabilities of subpartitions or subtrees.
Many compact representations exist for structured prediction problems, enabling the efficient computation of probability distributions, e.g., a trellis structure and corresponding Forward-Backward algorithm for Markov models that model sequences. However, no such representation has been proposed for either flat or hierarchical clustering models. In this thesis, we present our work developing data structures and algorithms for computing probability distributions over flat and hierarchical clusterings, as well as for finding maximum a posteriori (MAP) flat and hierarchical clusterings, and various marginal probabilities, as given by a wide range of energy-based clustering models.
First, we describe a trellis structure that compactly represents distributions over flat or hierarchical clusterings. We also describe related data structures that represent approximate distributions. We then present algorithms that, using these structures, allow us to compute the partition function, MAP clustering, and the marginal probabilities of a cluster (and sub-hierarchy, in the case of hierarchical clustering) exactly. We also show how these and related algorithms can be used to approximate these values, and analyze the time and space complexity of our proposed methods. We demonstrate the utility of our approaches on various synthetic data of interest as well as in two real-world applications, namely particle physics at the Large Hadron Collider at CERN and cancer genomics. We conclude with a brief discussion of future work.
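The kind of recurrence such a trellis supports for flat clusterings can be sketched as follows. This is a minimal illustrative version, assuming a multiplicative energy model in which a clustering's unnormalized probability is the product of its cluster energies; the names `partition_function` and `cluster_energy` are placeholders, not the thesis's API. The key observation is that every clustering of a set has exactly one cluster containing a fixed element, so the sum over all clusterings decomposes over that cluster and the clusterings of the remainder:

```python
from functools import lru_cache
from itertools import combinations

def partition_function(elements, cluster_energy):
    """Sum, over all flat clusterings of `elements`, of the product of
    per-cluster energies, computed by the subset recurrence:
    fix the first element, enumerate every cluster containing it,
    and recurse on the remaining elements."""
    elements = tuple(sorted(elements))

    @lru_cache(maxsize=None)
    def Z(subset):
        if not subset:
            return 1.0
        first, rest = subset[0], subset[1:]
        total = 0.0
        # Every clustering has exactly one cluster containing `first`.
        for r in range(len(rest) + 1):
            for extra in combinations(rest, r):
                cluster = (first,) + extra
                remainder = tuple(x for x in rest if x not in extra)
                total += cluster_energy(cluster) * Z(remainder)
        return total

    return Z(elements)
```

Memoizing `Z` over subsets is what reduces the super-exponential sum over all clusterings to work over the 2^n subsets; with unit energies the result is the Bell number, e.g. 5 clusterings of a 3-element set.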
H-Diplo Roundtable XXI-28 on Tworek. News from Germany: the competition to control world communications, 1900-1945
No description supplied.
Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition
Previous work has established that a person's demographics and speech style affect how well speech processing models perform for them. But where does this bias come from? In this work, we present the Speech Embedding Association Test (SpEAT), a method for detecting bias in one type of model used for many speech tasks: pre-trained models. The SpEAT is inspired by word embedding association tests in natural language processing, which quantify intrinsic bias in a model's representations of different concepts, such as race or valence (something's pleasantness or unpleasantness), and capture the extent to which a model trained on large-scale socio-cultural data has learned human-like biases. Using the SpEAT, we test for six types of bias in 16 English speech models (including 4 models also trained on multilingual data) from the wav2vec 2.0, HuBERT, WavLM, and Whisper model families. We find that 14 or more models reveal positive valence (pleasantness) associations with abled people over disabled people, with European-Americans over African-Americans, with females over males, with U.S.-accented speakers over non-U.S.-accented speakers, and with younger people over older people. Beyond establishing that pre-trained speech models contain these biases, we also show that they can have real-world effects. We compare biases found in pre-trained models to biases in downstream models adapted to the task of Speech Emotion Recognition (SER) and find that in 66 of the 96 tests performed (69%), the group more associated with positive valence as indicated by the SpEAT also tends to be predicted as speaking with higher valence by the downstream model. Our work provides evidence that, like text- and image-based models, pre-trained speech-based models frequently learn human-like biases, and that bias found in pre-trained models can propagate to the downstream task of SER.
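The word embedding association tests that the SpEAT is modeled on compute a Cohen's-d-style effect size from embedding similarities. Below is a minimal sketch of that statistic applied to embedding vectors; the function names are illustrative, and the SpEAT's exact aggregation over speech embeddings may differ:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """Association effect size in the style of the word embedding
    association tests that inspired the SpEAT.

    X, Y: lists of target embeddings (e.g. two speaker groups);
    A, B: lists of attribute embeddings (e.g. pleasant vs. unpleasant).
    Positive values mean X is more associated with A than Y is."""
    def s(w):
        # Differential association of one target with the attributes.
        return (np.mean([cosine(w, a) for a in A])
                - np.mean([cosine(w, b) for b in B]))
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

With targets that align with the "pleasant" attribute vectors, the statistic comes out positive, which is the sense in which the abstract reports one group being "more associated with positive valence."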
Extending Explainable Boosting Machines to Scientific Image Data
As the deployment of computer vision technology becomes increasingly common in science, the need for explanations of the system and its output has become a focus of great concern. Driven by the pressing need for interpretable models in science, we propose the use of Explainable Boosting Machines (EBMs) for scientific image data. Inspired by an important application underpinning the development of quantum technologies, we apply EBMs to cold-atom soliton image data tabularized using Gabor Wavelet Transform-based techniques that preserve the spatial structure of the data. In doing so, we demonstrate the use of EBMs for image data for the first time and show that our approach provides explanations that are consistent with human intuition about the data.
Comment: 7 pages, 2 figures
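One way to tabularize an image with a small Gabor filter bank while keeping coarse spatial structure might look like the following. This is a hypothetical sketch, not the paper's pipeline: the kernel parameters, the 2x2 pooling grid, and the function names are all illustrative choices, and the resulting named features would then be fed to an EBM as ordinary tabular columns.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine
    carrier oriented at angle `theta` with wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def tabularize(image, size=7,
               thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
               lam=4.0, sigma=2.0):
    """Filter with a 4-orientation Gabor bank and average the responses
    over a 2x2 spatial grid, so each feature is tied to an orientation
    and an image region (preserving coarse spatial structure)."""
    feats = []
    H, W = image.shape
    for theta in thetas:
        k = gabor_kernel(size, theta, lam, sigma)
        # Valid-mode sliding-window filter response.
        resp = np.empty((H - size + 1, W - size + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(image[i:i + size, j:j + size] * k)
        # Pool each orientation's response map over a 2x2 grid.
        hs, ws = resp.shape[0] // 2, resp.shape[1] // 2
        for bi in range(2):
            for bj in range(2):
                feats.append(resp[bi * hs:(bi + 1) * hs,
                                  bj * ws:(bj + 1) * ws].mean())
    return np.array(feats)  # 4 orientations x 4 cells = 16 features
```

Because every column has a human-readable meaning ("orientation theta in the top-left quadrant"), the per-feature shape functions an EBM learns remain interpretable, which is the point of pairing tabularization with EBMs.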
Assessing Medical Students’, Residents’, and the Public's Perceptions of the Uses of Personal Digital Assistants
Although medical schools are encouraging the use of personal digital assistants (PDAs), there have been few investigations of attitudes toward their use by students or residents and only one investigation of the public's attitude toward their use by physicians. In 2006, the University of Louisville School of Medicine surveyed 121 third- and fourth-year medical students, 53 residents, and 51 members of the non-medical public about their attitudes toward PDAs. Students were using either the Palm i705 or the Dell Axim X50v; residents were using devices they selected themselves (referred to in the study generically as PDAs). Three survey instruments were designed to investigate attitudes of (a) third- and fourth-year medical students on clinical rotations, (b) Internal Medicine and Pediatrics residents, and (c) volunteer members of the public found in the waiting rooms of three university practice clinics. Both residents and medical students found their devices useful, with more residents (46.8%) than students (16.2%) (p < 0.001) rating PDAs “very useful.” While students and residents generally agreed that PDAs improved the quality of their learning, residents’ responses were significantly higher (p < 0.05) than students’. Residents also responded more positively than students that PDAs made them more effective as clinicians. Although members of the public were generally supportive of PDA use, they appeared to have some misconceptions about how and why physicians were using them. The next phase of research will be to refine the research questions and survey instruments in collaboration with another medical school.
LineConGraphs: Line Conversation Graphs for Effective Emotion Recognition using Graph Neural Networks
Emotion Recognition in Conversations (ERC) is a critical aspect of affective computing, with many practical applications in healthcare, education, chatbots, and social media platforms. Earlier approaches to ERC analysis modeled both speaker and long-term contextual information using graph neural network architectures. However, speaker-independent models are preferable for real-world deployment, and long context windows can create confusion in recognizing the emotion of an utterance in a conversation. To overcome these limitations, we propose novel line conversation graph convolutional network (LineConGCN) and graph attention (LineConGAT) models for ERC analysis. These models are speaker-independent and built using a graph construction strategy for conversations: line conversation graphs (LineConGraphs). The conversational context in LineConGraphs is short-term, limited to one previous and one future utterance, and speaker information is not part of the graph. We evaluate our proposed models on two benchmark datasets, IEMOCAP and MELD, and show that our LineConGAT model outperforms state-of-the-art methods with F1-scores of 64.58% and 76.50%, respectively. Moreover, we demonstrate that embedding sentiment shift information into line conversation graphs further enhances ERC performance in the case of GCN models.
Comment: 13 pages, 6 figures
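The LineConGraph construction described above, with edges only between adjacent utterances and no speaker attributes on the nodes, can be sketched as a simple edge-list builder (the function name is illustrative, not the paper's code):

```python
def line_conversation_graph(utterances):
    """Build the node and edge lists of a line conversation graph:
    each utterance node is linked only to its immediate predecessor
    and successor (short-term context of one previous and one future
    utterance), and no speaker information is attached."""
    n = len(utterances)
    nodes = list(range(n))               # one node per utterance
    edges = [(i, i + 1) for i in range(n - 1)]  # undirected line edges
    return nodes, edges
```

A GNN layer over this graph can then only mix each utterance's features with its two neighbors, which is exactly the short-context, speaker-independent design the abstract argues for.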