Training Complex Models with Multi-Task Weak Supervision
As machine learning models continue to increase in complexity, collecting
large hand-labeled training sets has become one of the biggest roadblocks in
practice. Instead, weaker forms of supervision that provide noisier but cheaper
labels are often used. However, these weak supervision sources have diverse and
unknown accuracies, may output correlated labels, and may label different tasks
or apply at different levels of granularity. We propose a framework for
integrating and modeling such weak supervision sources by viewing them as
labeling different related sub-tasks of a problem, which we refer to as the
multi-task weak supervision setting. We show that by solving a matrix
completion-style problem, we can recover the accuracies of these multi-task
sources given their dependency structure, but without any labeled data, leading
to higher-quality supervision for training an end model. Theoretically, we show
that the generalization error of models trained with this approach improves
with the number of unlabeled data points, and characterize the scaling with
respect to the task and dependency structures. On three fine-grained
classification problems, we show that our approach leads to average gains of
20.2 points in accuracy over a traditional supervised approach, 6.8 points over
a majority vote baseline, and 4.1 points over a previously proposed weak
supervision method that models tasks separately.
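The abstract above contrasts majority vote with combining weak sources according to their (estimated) accuracies. As a minimal sketch of why accuracy-weighted combination beats majority vote, the snippet below weights each binary source by its log-odds of being correct, the standard naive-Bayes-style weighting. This is not the paper's matrix-completion estimator: here the accuracies are assumed to be given, whereas the paper recovers them without labeled data.

```python
import numpy as np

def weighted_vote(votes, accuracies):
    """Combine binary (+1/-1) weak labels with a log-odds weight per source.

    votes: (n_points, n_sources) array of +1/-1 labels.
    accuracies: per-source accuracy estimates in (0, 1).
    Returns +1/-1 predictions.
    """
    acc = np.asarray(accuracies, dtype=float)
    # A source with accuracy p contributes log(p / (1 - p)) to the score.
    w = np.log(acc / (1.0 - acc))
    scores = np.asarray(votes) @ w
    return np.where(scores >= 0, 1, -1)

# Three hypothetical weak sources with different assumed accuracies.
votes = np.array([[ 1,  1, -1],
                  [-1,  1,  1],
                  [-1, -1,  1]])
preds = weighted_vote(votes, [0.9, 0.6, 0.55])
```

With these weights, the single high-accuracy source can overrule the two weak ones, which a plain majority vote cannot do.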
Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings
We tackle the multi-party speech recovery problem by modeling the
acoustics of reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for
characterizing the room acoustics from the unknown competing speech sources
relying on localization of the early images of the speakers by sparse
approximation of the spatial spectra of the virtual sources in a free-space
model. The images are then clustered exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection
ratios, we propose a novel formulation of the reverberation model and estimate
the absorption coefficients through convex optimization exploiting a joint
sparsity model formulated on the spatio-spectral sparsity of the concurrent
speech representation. The acoustic parameters are then used to separate
individual speech signals through either structured sparse recovery or inverse
filtering of the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition.
Comment: 31 pages
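The sparse approximation step described above, localizing a few dominant virtual sources in an overcomplete dictionary, can be illustrated with a generic l1-regularized recovery. The sketch below uses plain ISTA (iterative soft-thresholding) on a random dictionary; it is a textbook stand-in, not the paper's structured or joint sparsity formulation, and the dictionary and sparse signal are synthetic.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=2000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)                      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))       # overcomplete "dictionary"
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]   # 3-sparse ground truth
y = A @ x_true                           # noiseless measurements
x_hat = ista(A, y)
```

Despite having only 40 measurements for 100 unknowns, the l1 penalty recovers the support of the 3-sparse signal, which is the same principle behind localizing a few early source images.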
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Subsequence clustering of multivariate time series is a useful tool for
discovering repeated patterns in temporal data. Once these patterns have been
discovered, seemingly complicated datasets can be interpreted as a temporal
sequence of only a small number of states, or clusters. For example, raw sensor
data from a fitness-tracking application can be expressed as a timeline of a
select few actions (e.g., walking, sitting, running). However, discovering
these patterns is challenging because it requires simultaneous segmentation and
clustering of the time series. Furthermore, interpreting the resulting clusters
is difficult, especially when the data is high-dimensional. Here we propose a
new method of model-based clustering, which we call Toeplitz Inverse
Covariance-based Clustering (TICC). Each cluster in the TICC method is defined
by a correlation network, or Markov random field (MRF), characterizing the
interdependencies between different observations in a typical subsequence of
that cluster. Based on this graphical representation, TICC simultaneously
segments and clusters the time series data. We solve the TICC problem through
alternating minimization, using a variation of the expectation maximization
(EM) algorithm. We derive closed-form solutions to efficiently solve the two
resulting subproblems in a scalable way, through dynamic programming and the
alternating direction method of multipliers (ADMM), respectively. We validate
our approach by comparing TICC to several state-of-the-art baselines in a
series of synthetic experiments, and we then demonstrate on an automobile
sensor dataset how TICC can be used to learn interpretable clusters in
real-world scenarios.
Comment: This revised version fixes two small typos in the published version
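One of the two subproblems mentioned above, assigning each time step to a cluster given per-step fit costs plus a penalty for switching clusters, is solvable exactly by Viterbi-style dynamic programming. The sketch below shows that DP in isolation; in TICC the per-step costs would come from each cluster's MRF log-likelihood, whereas here they are dummy values, and `beta` is a hypothetical switching penalty.

```python
import numpy as np

def assign_clusters(costs, beta):
    """Pick one cluster per time step minimizing the sum of per-step
    costs plus beta for every switch, via dynamic programming."""
    T, K = costs.shape
    total = costs[0].copy()                  # best cost ending in each cluster
    back = np.zeros((T, K), dtype=int)       # backpointers
    for t in range(1, T):
        # trans[k, j]: cost of being in j at t-1 and moving to k at t.
        trans = total[None, :] + beta * (1 - np.eye(K))
        back[t] = np.argmin(trans, axis=1)
        total = costs[t] + np.min(trans, axis=1)
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmin(total))
    for t in range(T - 1, 0, -1):            # trace back the optimal path
        path[t - 1] = back[t, path[t]]
    return path

# Six time steps, two clusters; the cheap cluster flips halfway through.
costs = np.array([[0.0, 1.0]] * 3 + [[1.0, 0.0]] * 3)
path = assign_clusters(costs, beta=0.5)
```

The switching penalty `beta` is what produces temporally coherent segments rather than per-point assignments that chatter between clusters.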
Compressive Nonparametric Graphical Model Selection For Time Series
We propose a method for inferring the conditional independence graph (CIG)
of a high-dimensional discrete-time Gaussian vector random process from
finite-length observations. Our approach does not rely on a parametric model
(such as an autoregressive model) for the vector random process; rather,
it only assumes certain spectral smoothness properties. The proposed
inference scheme is compressive in that it works for sample sizes that are
(much) smaller than the number of scalar process components. We provide
analytical conditions for our method to correctly identify the CIG with high
probability.
Comment: to appear in Proc. IEEE ICASSP 201
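For Gaussian data, the CIG's edges correspond to nonzero off-diagonal entries of the precision (inverse covariance) matrix. The sketch below shows that basic idea in the simplest i.i.d. setting, thresholding sample partial correlations; the paper's method is the harder nonparametric, spectral-domain version for vector processes, which this does not reproduce. The chain example and threshold are illustrative choices.

```python
import numpy as np

def cig_edges(samples, thresh=0.2):
    """Estimate CIG edges by thresholding sample partial correlations
    (i.i.d. Gaussian case; edge i-j iff precision[i, j] != 0)."""
    prec = np.linalg.inv(np.cov(samples, rowvar=False))
    d = np.sqrt(np.diag(prec))
    # Partial correlation: rho_ij = -prec_ij / sqrt(prec_ii * prec_jj)
    partial = -prec / np.outer(d, d)
    p = prec.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(partial[i, j]) > thresh}

# Chain x0 -> x1 -> x2: the true CIG is the path 0-1-2 (no 0-2 edge,
# since x0 and x2 are conditionally independent given x1).
rng = np.random.default_rng(1)
n = 5000
x0 = rng.standard_normal(n)
x1 = 0.8 * x0 + 0.3 * rng.standard_normal(n)
x2 = 0.8 * x1 + 0.3 * rng.standard_normal(n)
edges = cig_edges(np.column_stack([x0, x1, x2]))
```

Note that x0 and x2 are strongly *marginally* correlated; it is the precision matrix, not the covariance, that exposes the conditional independence.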
Automatic Throughput and Critical Path Analysis of x86 and ARM Assembly Kernels
Useful models of loop kernel runtimes on out-of-order architectures require
an analysis of the in-core performance behavior of instructions and their
dependencies. While an instruction throughput prediction sets a lower bound to
the kernel runtime, the critical path defines an upper bound. Such predictions
are an essential part of analytic (i.e., white-box) performance models like the
Roofline and Execution-Cache-Memory (ECM) models. They enable a better
understanding of the performance-relevant interactions between hardware
architecture and loop code. The Open Source Architecture Code Analyzer (OSACA)
is a static analysis tool for predicting the execution time of sequential
loops. It previously supported only x86 (Intel and AMD) architectures and
simple, optimistic full-throughput execution. We have heavily extended OSACA to
support ARM instructions and critical path prediction including the detection
of loop-carried dependencies, which turns it into a versatile
cross-architecture modeling tool. We show runtime predictions for code on Intel
Cascade Lake, AMD Zen, and Marvell ThunderX2 micro-architectures based on
machine models from available documentation and semi-automatic benchmarking.
The predictions are compared with actual measurements.
Comment: 6 pages, 3 figures
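The two bounds described above can be sketched in a few lines: the throughput lower bound is the most heavily loaded execution port under perfect overlap, and the critical path upper bound is the longest latency chain through the dependency DAG. This toy model is not OSACA's actual machine-file format; the instruction names, latencies, ports, and reciprocal throughputs below are hypothetical.

```python
def bounds(instrs, deps):
    """Return (throughput_bound, critical_path) in cycles per iteration.

    instrs: name -> (latency_cycles, port, reciprocal_throughput_cycles)
    deps:   name -> list of instruction names it depends on
    """
    # Throughput bound: charge each instruction's cycles to its port and
    # take the busiest port, assuming all independent work overlaps.
    pressure = {}
    for lat, port, cycles in instrs.values():
        pressure[port] = pressure.get(port, 0.0) + cycles
    tp = max(pressure.values())

    # Critical path: longest latency chain via memoized DFS over the DAG.
    memo = {}
    def longest(name):
        if name not in memo:
            memo[name] = instrs[name][0] + max(
                (longest(d) for d in deps.get(name, [])), default=0.0)
        return memo[name]
    cp = max(longest(name) for name in instrs)
    return tp, cp

# Hypothetical 3-instruction kernel: load -> fma -> store.
instrs = {"load":  (4.0, "P2", 0.5),
          "fma":   (5.0, "P0", 0.5),
          "store": (1.0, "P4", 1.0)}
deps = {"fma": ["load"], "store": ["fma"]}
tp, cp = bounds(instrs, deps)
```

For this kernel the port-pressure bound is 1 cycle per iteration while the dependency chain costs 10 cycles, illustrating how far apart the two bounds can be when a loop-carried or serial chain dominates.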