Differential Performance Debugging with Discriminant Regression Trees
Differential performance debugging is a technique to find performance
problems. It applies in situations where the performance of a program is
(unexpectedly) different for different classes of inputs. The task is to
explain the differences in asymptotic performance among various input classes
in terms of program internals. We propose a data-driven technique based on
the discriminant regression tree (DRT) learning problem, where the goal is to
discriminate among different classes of inputs. We introduce a new algorithm for
DRT learning that first clusters the data into functional clusters, capturing
different asymptotic performance classes, and then invokes off-the-shelf
decision tree learning algorithms to explain these clusters. We focus on linear
functional clusters and adapt classical clustering algorithms (K-means and
spectral) to produce them. For the K-means algorithm, we generalize the notion
of the cluster centroid from a point to a linear function. We adapt spectral
clustering by defining a novel kernel function to capture the notion of linear
similarity between two data points. We evaluate our approach on benchmarks
consisting of Java programs whose performance we wish to debug. We show that
our algorithm significantly outperforms other well-known regression tree
learning algorithms in terms of running time and classification accuracy.
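To make the functional-clustering step concrete, the following is a minimal
sketch of K-means with linear functions as cluster centroids, in the spirit
described above; the alternating fit/reassign loop and all names are our own
illustrative assumptions, not the authors' implementation.

import numpy as np

def functional_kmeans(x, y, k, iters=50, seed=0):
    """K-means where each centroid is a line y = a*x + b fit to its cluster."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(x))
    coeffs = np.zeros((k, 2))  # (slope, intercept) per cluster
    for _ in range(iters):
        # Centroid step: fit a least-squares line to each cluster's points.
        for c in range(k):
            mask = labels == c
            if mask.sum() >= 2:
                coeffs[c] = np.polyfit(x[mask], y[mask], deg=1)
        # Assignment step: move each point to the line with smallest residual.
        resid = np.abs(np.subtract.outer(y, coeffs[:, 1]) - np.outer(x, coeffs[:, 0]))
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, coeffs

Here x would be an input-size measure and y the observed running time; points
lying on different performance lines end up in different clusters, which a
decision tree can then explain in terms of program internals.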
Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification
A number of studies have extracted bottleneck (BN) features from deep neural
networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone
states in order to improve the performance of text-dependent speaker
verification (TD-SV). However, only moderate success has been achieved. A recent
study [1] presented a time contrastive learning (TCL) concept to explore the
non-stationarity of brain signals for classification of brain states. Speech
signals have a similar non-stationarity property, and TCL has the further
advantage of requiring no labeled data. We therefore present a TCL-based
BN feature extraction method. The method uniformly partitions each speech
utterance in a training dataset into a predefined number of multi-frame
segments. Each segment in an utterance corresponds to one class, and class
labels are shared across utterances. DNNs are then trained to discriminate
speech frames among these classes, exploiting the temporal structure of speech. In
addition, we propose a segment-based unsupervised clustering algorithm to
re-assign class labels to the segments. TD-SV experiments were conducted on the
RedDots challenge database. The TCL-DNNs were trained using speech data of
fixed pass-phrases that were excluded from the TD-SV evaluation set, so the
learned features can be considered phrase-independent. We compare the
performance of the proposed TCL BN feature with those of
short-time cepstral features and BN features extracted from DNNs discriminating
speakers, pass-phrases, speaker+pass-phrase, as well as monophones whose labels
and boundaries are generated by three different automatic speech recognition
(ASR) systems. Experimental results show that the proposed TCL-BN outperforms
cepstral features and speaker+pass-phrase discriminant BN features, and its
performance is on par with those of ASR-derived BN features. Moreover, ...
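The uniform-segmentation labeling at the heart of TCL is easy to state in
code; below is an illustrative sketch (function and variable names are our
own, and the segment count is a placeholder).

import numpy as np

def tcl_frame_labels(num_frames, num_segments):
    """Uniformly partition an utterance's frames into multi-frame segments.

    The segment index serves as the TCL class label; because the same
    indices are used for every utterance, labels are shared across
    utterances and no manual annotation is needed.
    """
    edges = np.linspace(0, num_frames, num_segments + 1)
    return np.digitize(np.arange(num_frames), edges[1:-1])

# Example: label the frames of a 700-frame utterance with 10 TCL classes.
labels = tcl_frame_labels(700, 10)  # each labels[i] is in {0, ..., 9}

A DNN trained to classify frames into these classes then yields BN features
from the activations of a narrow hidden layer.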
The 2005 AMI system for the transcription of speech in meetings
In this paper we describe the 2005 AMI system for the transcription of speech
in meetings, used for participation in the 2005 NIST RT evaluations. The
system was designed for participation in the speech-to-text part of the
evaluations, in particular for the transcription of speech recorded with
multiple distant microphones and independent headset microphones. System
performance was tested on both conference-room and lecture-style meetings.
Although input sources are processed using different front-ends, the
recognition process is based on a unified system architecture. The system
operates in multiple passes and makes use of state-of-the-art technologies
such as discriminative training, vocal tract length normalisation,
heteroscedastic linear discriminant analysis, speaker adaptation with maximum
likelihood linear regression, and minimum word error rate decoding. We
describe the system performance on the official development and test sets for
the NIST RT05s evaluations. The system was jointly developed in less than 10
months by a multi-site team and was shown to achieve very competitive
performance.
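As a point of reference for one of the listed techniques, MLLR speaker
adaptation applies an affine transform, estimated by maximum likelihood on
adaptation data, to the Gaussian means of the acoustic model. Below is a toy
sketch of applying such a transform; the ML estimation of A and b is omitted,
and all names are our own.

import numpy as np

def mllr_adapt_means(means, A, b):
    """Apply a shared MLLR affine transform mu' = A @ mu + b to Gaussian means.

    means: (num_gaussians, dim) array; A: (dim, dim); b: (dim,).
    """
    return means @ A.T + b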
Continuous Action Recognition Based on Sequence Alignment
Continuous action recognition is more challenging than isolated recognition
because classification and segmentation must be simultaneously carried out. We
build on the well known dynamic time warping (DTW) framework and devise a novel
visual alignment technique, namely dynamic frame warping (DFW), which performs
isolated recognition based on per-frame representation of videos, and on
aligning a test sequence with a model sequence. Moreover, we propose two
extensions that enable recognition to be performed concomitantly with
segmentation, namely one-pass DFW and two-pass DFW. These two methods have
their roots in the
domain of continuous recognition of speech and, to the best of our knowledge,
their extension to continuous visual action recognition has been overlooked. We
test and illustrate the proposed techniques with a recently released dataset
(RAVEL) and with two public-domain datasets widely used in action recognition
(Hollywood-1 and Hollywood-2). We also compare the performance of the proposed
isolated and continuous recognition algorithms with several recently published
methods.
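DFW itself is specific to this paper, but the DTW recursion it builds on is
standard; below is a minimal sketch of that recursion over per-frame
representations (feature shapes and the local distance are placeholder
assumptions).

import numpy as np

def dtw_cost(test, model):
    """Cumulative dynamic-time-warping cost between two frame sequences.

    test, model: arrays of shape (num_frames, feature_dim).
    """
    n, m = len(test), len(model)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(test[i - 1] - model[j - 1])
            # Standard step pattern: insertion, deletion, or match.
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

Isolated recognition picks the model sequence with the lowest cost; roughly
speaking, the one-pass and two-pass extensions thread this recursion across
concatenated action models so that segmentation falls out of the alignment.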
Joint Intermodal and Intramodal Label Transfers for Extremely Rare or Unseen Classes
In this paper, we present a label transfer model from texts to images for
image classification tasks. The problem of image classification is often much
more challenging than text classification. On one hand, labeled text data is
more widely available than labeled images for classification tasks. On the
other hand, text data tends to have natural semantic interpretability and is
often more directly related to class labels. In contrast, image features are
not directly related to the concepts inherent in class labels. One of
our goals in this paper is to develop a model that reveals the functional
relationships between text and image features so as to directly transfer
intermodal and intramodal labels to annotate the images. This is implemented by
learning a transfer function as a bridge to propagate the labels between two
multimodal spaces. However, the intermodal label transfers could be undermined
by blindly transferring the labels of noisy texts to annotate images. To
mitigate this problem, we present an intramodal label transfer process, which
complements the intermodal label transfer by transferring the image labels
instead when relevant text is absent from the source corpus. In addition, we
generalize the intermodal label transfer to the zero-shot learning scenario,
where only text examples are available to label unseen classes of images,
without any positive image examples. We evaluate our algorithm on an image
classification task and show its effectiveness relative to the other compared
algorithms.
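As a simple illustration of a transfer function acting as a bridge between
the two spaces (our own reading, not the paper's exact model), a
ridge-regularized linear map from text features to image features already
supports this kind of label propagation.

import numpy as np

def fit_transfer(T, V, lam=1.0):
    """Linear map W from text space to image space on paired data.

    Minimizes ||T @ W - V||^2 + lam * ||W||^2 for T: (n, d_text),
    V: (n, d_img); returns W: (d_text, d_img).
    """
    d = T.shape[1]
    return np.linalg.solve(T.T @ T + lam * np.eye(d), T.T @ V)

def transfer_labels(T_src, y_src, V_imgs, W):
    """Give each image the label of the nearest mapped text example."""
    mapped = T_src @ W  # text examples projected into image-feature space
    d2 = ((V_imgs[:, None, :] - mapped[None, :, :]) ** 2).sum(axis=-1)
    return y_src[d2.argmin(axis=1)]

In the zero-shot setting the same mechanics apply: text examples of an unseen
class are mapped into image space and label the images closest to them, even
though no positive image examples of that class were seen during training.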