Information Loss in the Human Auditory System
From the eardrum to the auditory cortex, where acoustic stimuli are decoded,
there are several stages of auditory processing and transmission at which
information may be lost. In this paper, we aim to quantify the information
loss in the human auditory system using information-theoretic tools.
To do so, we consider a speech communication model, where words are uttered
and sent through a noisy channel, and then received and processed by a human
listener.
We define a notion of information loss that is related to the human word
recognition rate. To assess the word recognition rate of humans, we conduct a
closed-vocabulary intelligibility test. We derive upper and lower bounds on the
information loss. Simulations reveal that the bounds are tight and we observe
that the information loss in the human auditory system increases as the signal
to noise ratio (SNR) decreases. Our framework also allows us to study whether
humans are optimal in terms of speech perception in a noisy environment.
Towards that end, we derive optimal classifiers and compare the human and
machine performance in terms of information loss and word recognition rate. We
observe a higher information loss and lower word recognition rate for humans
compared to the optimal classifiers. In fact, depending on the SNR, the machine
classifier may outperform humans by as much as 8 dB. This implies that for the
speech-in-stationary-noise setup considered here, the human auditory system is
sub-optimal for recognizing noisy words.
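The abstract relates information loss to the word recognition rate measured in a closed-vocabulary test. One standard way to connect an error rate to conditional entropy is Fano's inequality; the sketch below is a hypothetical illustration of that connection (the function name and the uniform-vocabulary assumption are mine, not the paper's actual definition of information loss).

```python
import math

def fano_info_bounds(error_rate, vocab_size):
    """Fano-style bound for a closed vocabulary of M equally likely words:
    H(W | W_hat) <= h(Pe) + Pe * log2(M - 1),
    so transmitted information I >= log2(M) - that upper bound.
    Returns (loss_upper_bound, info_lower_bound) in bits."""
    pe = error_rate
    # Binary entropy h(Pe); defined as 0 at the endpoints.
    h = 0.0 if pe in (0, 1) else -pe * math.log2(pe) - (1 - pe) * math.log2(1 - pe)
    loss_upper = h + pe * math.log2(vocab_size - 1)
    info_lower = math.log2(vocab_size) - loss_upper
    return loss_upper, info_lower
```

For a 16-word vocabulary recognized perfectly, the loss bound is 0 bits and at least 4 bits are transmitted; as the error rate grows (e.g. at low SNR), the loss bound grows accordingly.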
Explicit Learning Curves for Transduction and Application to Clustering and Compression Algorithms
Inductive learning is based on inferring a general rule from a finite data
set and using it to label new data. In transduction one attempts to solve the
problem of using a labeled training set to label a set of unlabeled points,
which are given to the learner prior to learning. Although transduction seems
at the outset to be an easier task than induction, there have not been many
provably useful algorithms for transduction. Moreover, the precise relation
between induction and transduction has not yet been determined. The main
theoretical developments related to transduction were presented by Vapnik more
than twenty years ago. One of Vapnik's basic results is a rather tight error
bound for transductive classification based on an exact computation of the
hypergeometric tail. While tight, this bound is given implicitly via a
computational routine. Our first contribution is a somewhat looser but explicit
characterization of a slightly extended PAC-Bayesian version of Vapnik's
transductive bound. This characterization is obtained using concentration
inequalities for the tail of sums of random variables obtained by sampling
without replacement. We then derive error bounds for compression schemes such
as (transductive) support vector machines and for transduction algorithms based
on clustering. The main observation used for deriving these new error bounds
and algorithms is that the unlabeled test points, which in the transductive
setting are known in advance, can be used in order to construct useful data
dependent prior distributions over the hypothesis space
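Vapnik's transductive bound mentioned above rests on an exact computation of the hypergeometric tail, which arises when errors are sampled without replacement from a fixed labeled-plus-unlabeled pool. A minimal stdlib sketch of that tail probability (function name and parameterization are mine):

```python
from math import comb

def hypergeom_tail(N, K, n, k):
    """P[X >= k] for X ~ Hypergeometric(N, K, n): drawing n points without
    replacement from a pool of N, of which K are errors. This is the tail
    quantity underlying Vapnik-style transductive error bounds."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

Because the bound is defined through such a routine, it is tight but implicit; the paper's contribution is an explicit (if somewhat looser) characterization via concentration inequalities for sampling without replacement.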
On the consistency of Multithreshold Entropy Linear Classifier
Multithreshold Entropy Linear Classifier (MELC) is a recent classifier idea
which employs information-theoretic concepts in order to create a
multithreshold maximum margin model. In this paper we analyze its consistency
over multithreshold linear models and show that its objective function upper
bounds the number of misclassified points, much as hinge loss does in support
vector machines. For further confirmation we also conduct numerical
experiments on five datasets.
Comment: Presented at Theoretical Foundations of Machine Learning 2015
(http://tfml.gmum.net), final version published in Schedae Informaticae
Journal
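The surrogate-bound property the abstract invokes for comparison is easy to state concretely: the hinge loss dominates the 0-1 loss pointwise, so its sum upper-bounds the number of misclassified points. The sketch below illustrates only this hinge/0-1 relationship for SVMs, not MELC's actual entropy-based objective:

```python
def zero_one(margin):
    """1 if the point is misclassified (margin y*f(x) <= 0), else 0."""
    return 1.0 if margin <= 0 else 0.0

def hinge(margin):
    """Hinge loss max(0, 1 - margin); dominates zero_one pointwise."""
    return max(0.0, 1.0 - margin)

def error_count_bound(margins):
    """Sum of hinge losses, an upper bound on the number of errors."""
    return sum(hinge(m) for m in margins)
```

Consistency analyses of the kind described then ask when minimizing such a surrogate also drives the true misclassification count down.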
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based, and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting,
and representing features such that a source classifier performs well on the
target domain. Inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.
Comment: 20 pages, 5 figures
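The sample-based category described above is commonly instantiated as importance weighting: each source observation is reweighted by the density ratio w(x) = p_T(x)/p_S(x) so that the weighted source risk estimates the target risk. A minimal sketch, assuming the two densities are known (in practice the ratio must itself be estimated; the function names are mine):

```python
def importance_weighted_risk(losses, src_density, tgt_density, xs):
    """Estimate target-domain risk from source samples xs with per-sample
    losses, reweighted by w(x) = p_T(x) / p_S(x)."""
    weights = [tgt_density(x) / src_density(x) for x in xs]
    return sum(w * l for w, l in zip(weights, losses)) / len(xs)
```

Points likely under the target distribution count more; points the target never produces are weighted toward zero.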
PAC Classification based on PAC Estimates of Label Class Distributions
A standard approach in pattern classification is to estimate the
distributions of the label classes, and then to apply the Bayes classifier to
the estimates of the distributions in order to classify unlabeled examples. As
one might expect, the better our estimates of the label class distributions,
the better the resulting classifier will be. In this paper we make this
observation precise by identifying risk bounds of a classifier in terms of the
quality of the estimates of the label class distributions. We show how PAC
learnability relates to estimates of the distributions that have a PAC
guarantee on their distance from the true distribution, and we bound the
increase in negative log likelihood risk in terms of PAC bounds on the
KL-divergence. We give an inefficient but general-purpose smoothing method for
converting an estimated distribution that is good under the metric into a
distribution that is good under the KL-divergence.
Comment: 14 pages
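The plug-in approach described above, and the role of smoothing, can be sketched concretely for discrete distributions. Additive (Laplace) smoothing is one standard way to keep every outcome's estimated probability positive so that the KL-divergence from the true distribution stays finite; the paper's own general-purpose smoothing method may differ, and the function names here are mine:

```python
def laplace_smooth(counts, alpha=1.0):
    """Additive smoothing of empirical counts: every outcome gets positive
    probability, so KL(true || estimate) cannot be infinite."""
    total = sum(counts) + alpha * len(counts)
    return [(c + alpha) / total for c in counts]

def plug_in_bayes(priors, cond_probs, x):
    """Bayes classifier applied to estimated distributions:
    predict argmax_y prior[y] * P_hat(x | y)."""
    return max(range(len(priors)), key=lambda y: priors[y] * cond_probs[y][x])
```

The better the estimated class distributions, the closer this plug-in rule gets to the true Bayes classifier, which is the relationship the risk bounds in the abstract quantify.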