Multi-Classification by Using Tri-Class SVM
The standard approach to multi-class classification with binary classifiers is a two-phase (decomposition, reconstruction) training scheme. The most popular decomposition procedures are pairwise coupling (one versus one, 1-v-1), which trains a learning machine for each pair of classes, and the one-versus-rest scheme (one versus rest, 1-v-r), which considers each class against all the remaining classes. In this article a 1-v-1 tri-class Support Vector Machine (SVM) is presented. Expanding the architecture of this machine to three categories specifically addresses a drawback of decomposition: the loss of information that occurs in the usual 1-v-1 training procedure. By means of a third class, the proposed machine incorporates the information carried by the remaining training patterns when a multi-class problem is posed as a 1-v-1 decomposition. Three general structures are presented, each of which improves on features of the preceding one. It is demonstrated that the final machine proposed supports ordinal regression as a form of decomposition procedure for multi-classification problems. Examples and experimental results illustrate the performance of the new tri-class SV machine.

Junta de Andalucía ACPAI-2003/014; Ministerio de Ciencia y Tecnología TIC2002-04371-C02-0
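To make the baseline concrete, here is a minimal sketch of the standard 1-v-1 (pairwise coupling) decomposition that the tri-class machine refines. The names (`train_binary`, `OneVsOne`) are illustrative, not from the paper, and a toy nearest-centroid rule stands in for an SVM; the point is the structure: one binary machine per pair of classes, prediction by majority vote, and — as the comment marks — the discarding of all other classes' patterns that the tri-class approach is designed to avoid.

```python
# Illustrative 1-v-1 (pairwise coupling) decomposition with majority voting.
# A toy nearest-centroid rule replaces the SVM for self-containedness.
from itertools import combinations
import numpy as np

def train_binary(X, y):
    """Toy stand-in for a binary SVM: nearest-centroid rule (labels 0/1)."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return lambda x: int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

class OneVsOne:
    def fit(self, X, y):
        self.machines = {}
        for a, b in combinations(sorted(set(y)), 2):
            mask = (y == a) | (y == b)
            # Patterns of every other class are discarded here -- the
            # information loss that the tri-class SVM's third class addresses.
            yp = (y[mask] == b).astype(int)
            self.machines[(a, b)] = train_binary(X[mask], yp)
        return self

    def predict(self, x):
        votes = {}
        for (a, b), machine in self.machines.items():
            winner = b if machine(x) else a
            votes[winner] = votes.get(winner, 0) + 1
        return max(votes, key=votes.get)
```

Reconstruction here is plain voting; pairwise coupling in the probabilistic sense combines pairwise posterior estimates instead, but the decomposition step is the same.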
Masking: A New Perspective of Noisy Supervision
It is important to learn various types of classifiers given training data
with noisy labels. Noisy labels, in the most popular noise model hitherto, are
corrupted from ground-truth labels by an unknown noise transition matrix. Thus,
by estimating this matrix, classifiers can escape from overfitting those noisy
labels. However, such estimation is practically difficult, due to either the
indirect nature of two-step approaches, or insufficient data to support
end-to-end approaches. In this paper, we propose a human-assisted approach
called Masking that conveys human cognition of invalid class transitions and
naturally speculates the structure of the noise transition matrix. To this end,
we derive a structure-aware probabilistic model incorporating a structure
prior, and solve the challenges from structure extraction and structure
alignment. Thanks to Masking, we only estimate unmasked noise transition
probabilities and the burden of estimation is tremendously reduced. We conduct
extensive experiments on CIFAR-10 and CIFAR-100 with three noise structures as
well as the industrial-level Clothing1M with agnostic noise structure, and the
results show that Masking can improve the robustness of classifiers
significantly.

Comment: NIPS 2018 camera-ready version
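A minimal numeric sketch (not the paper's code) of why the noise transition matrix matters: with T[i, j] = P(noisy label = j | true label = i), a classifier's clean-class posterior p maps to a distribution over noisy labels as Tᵀp, and the loss can be computed against the observed noisy labels on that corrected output (the standard "forward correction"). Masking would additionally zero out the entries that human cognition rules out as invalid transitions, so only the unmasked entries need estimating. The function name and the 3-class symmetric-noise T below are illustrative assumptions.

```python
# Forward loss correction with a known noise transition matrix T.
import numpy as np

def forward_corrected_nll(p_clean, noisy_label, T):
    """Negative log-likelihood of an observed noisy label given the
    classifier's clean-class posterior p_clean and transition matrix T."""
    p_noisy = T.T @ p_clean  # distribution over noisy labels
    return -np.log(p_noisy[noisy_label])

# Example: 3 classes with 20% symmetric label noise. A structure prior
# (Masking) would fix the zero/nonzero pattern of T in advance.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
p = np.array([0.9, 0.05, 0.05])  # classifier confident in class 0
```

Under this correction, a confident clean prediction of class 0 still assigns nonzero likelihood to observing noisy labels 1 or 2, so the classifier is not pushed to overfit corrupted labels.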
Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification
There are a number of studies about extraction of bottleneck (BN) features
from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases
and triphone states for improving the performance of text-dependent speaker
verification (TD-SV). However, only moderate success has been achieved. A recent
study [1] presented a time contrastive learning (TCL) concept to explore the
non-stationarity of brain signals for classification of brain states. Speech
signals have similar non-stationarity property, and TCL further has the
advantage of having no need for labeled data. We therefore present a TCL based
BN feature extraction method. The method uniformly partitions each speech
utterance in a training dataset into a predefined number of multi-frame
segments. Each segment in an utterance corresponds to one class, and class
labels are shared across utterances. DNNs are then trained to discriminate all
speech frames among the classes to exploit the temporal structure of speech. In
addition, we propose a segment-based unsupervised clustering algorithm to
re-assign class labels to the segments. TD-SV experiments were conducted on the
RedDots challenge database. The TCL-DNNs were trained using speech data of
fixed pass-phrases that were excluded from the TD-SV evaluation set, so the
learned features can be considered phrase-independent. We compare the
performance of the proposed TCL bottleneck (BN) feature with those of
short-time cepstral features and BN features extracted from DNNs discriminating
speakers, pass-phrases, speaker+pass-phrase, as well as monophones whose labels
and boundaries are generated by three different automatic speech recognition
(ASR) systems. Experimental results show that the proposed TCL-BN outperforms
cepstral features and speaker+pass-phrase discriminant BN features, and its
performance is on par with those of ASR-derived BN features. Moreover, ...

Comment: Copyright (c) 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
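The TCL labeling scheme described above can be sketched in a few lines (illustrative, not the authors' code): each utterance's frame sequence is uniformly split into M multi-frame segments, every frame in segment m receives class label m, and these M labels are shared across utterances, so a DNN trained on them must discriminate temporal position within an utterance rather than speaker or phrase identity. The function name `tcl_labels` is a hypothetical helper.

```python
# Uniform time-contrastive labeling: frame index -> segment class 0..M-1.
import numpy as np

def tcl_labels(num_frames, num_segments):
    """Assign each of num_frames frames a segment class in 0..num_segments-1,
    partitioning the utterance into near-equal contiguous segments."""
    labels = np.empty(num_frames, dtype=int)
    # np.array_split yields near-equal chunks when M does not divide N.
    for m, idx in enumerate(np.array_split(np.arange(num_frames), num_segments)):
        labels[idx] = m
    return labels
```

Because the labels depend only on temporal position, no transcription or speaker annotation is required, which is the unsupervised property the abstract highlights; the proposed segment-based clustering step would then re-assign these initial uniform labels.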