Meta learning of bounds on the Bayes classifier error
Meta learning uses information from base learners (e.g. classifiers or
estimators) as well as information about the learning problem to improve upon
the performance of a single base learner. For example, the Bayes error rate of
a given feature space, if known, can be used to aid in choosing a classifier,
as well as in feature selection and model selection for the base classifiers
and the meta classifier. Recent work in the field of f-divergence functional
estimation has led to the development of simple and rapidly converging
estimators that can be used to estimate various bounds on the Bayes error. We
estimate multiple bounds on the Bayes error using an estimator that applies
meta learning to slowly converging plug-in estimators to obtain the parametric
convergence rate. We compare the estimated bounds empirically on simulated data
and then estimate the tighter bounds on features extracted from an image patch
analysis of sunspot continuum and magnetogram images.
Comment: 6 pages, 3 figures, to appear in proceedings of 2015 IEEE Signal
Processing and SP Education Workshop
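As a rough illustration of the kind of bounds at issue (a closed-form sketch, not the paper's meta-learned f-divergence estimator): for two equal-prior classes, the Bhattacharyya coefficient BC gives the classical two-sided bounds (1 - sqrt(1 - BC^2))/2 <= P_e <= BC/2 on the Bayes error P_e. For two 1-D unit-variance Gaussians separated by mu, both BC and the exact Bayes error are available in closed form, so the bounds can be checked directly:

```python
import math
from statistics import NormalDist

# Sketch with two equal-prior 1-D Gaussians N(0,1) and N(mu,1); here the
# Bhattacharyya coefficient has the closed form BC = exp(-mu^2 / 8).
mu = 2.0
bc = math.exp(-mu**2 / 8)                 # Bhattacharyya coefficient
lower = 0.5 * (1 - math.sqrt(1 - bc**2))  # lower bound on Bayes error
upper = 0.5 * bc                          # upper bound on Bayes error
bayes_err = NormalDist().cdf(-mu / 2)     # exact Bayes error in this model
print(lower, bayes_err, upper)            # lower <= Bayes error <= upper
```

In practice the densities are unknown, which is where the estimators discussed in the abstract come in; the point of the sketch is only the shape of the bounds.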
Structured variable selection in support vector machines
When applying the support vector machine (SVM) to high-dimensional
classification problems, we often impose a sparse structure in the SVM to
eliminate the influences of the irrelevant predictors. The lasso and other
variable selection techniques have been successfully used in the SVM to perform
automatic variable selection. In some problems, there is a natural hierarchical
structure among the variables. Thus, in order to have an interpretable SVM
classifier, it is important to respect the heredity principle when enforcing
the sparsity in the SVM. Many variable selection methods, however, do not
respect the heredity principle. In this paper we enforce both sparsity and the
heredity principle in the SVM by using the so-called structured variable
selection (SVS) framework originally proposed in Yuan, Joseph and Zou (2007).
We minimize the empirical hinge loss under a set of linear inequality
constraints and a lasso-type penalty. The solution always obeys the desired
heredity principle and enjoys sparsity. The new SVM classifier can be
efficiently fitted, because the optimization problem is a linear program.
Another contribution of this work is to present a nonparametric extension of
the SVS framework, and we propose nonparametric heredity SVMs. Simulated and
real data are used to illustrate the merits of the proposed method.
Comment: Published at http://dx.doi.org/10.1214/07-EJS125 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
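A minimal sketch of the linear program underlying a lasso-penalized linear SVM of the kind described above, assuming the standard split w = w+ - w- to make the l1 penalty linear; the heredity (SVS) constraints would enter as additional inequality rows in A_ub and are omitted here, and all data and names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Variables z = [w_plus (d), w_minus (d), b, xi (n)], with w = w_plus - w_minus.
# Objective: (1/n) * sum(xi) + lam * ||w||_1, subject to the margin
# constraints y_i (x_i . w + b) >= 1 - xi_i and xi, w_plus, w_minus >= 0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
n, d = X.shape
lam = 0.1

c = np.concatenate([lam * np.ones(2 * d), [0.0], np.ones(n) / n])
# Margin constraints rewritten in A_ub z <= b_ub form.
A = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A, b_ub=b_ub, bounds=bounds)
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
print(np.mean(np.sign(X @ w + b) == y))  # training accuracy
```

Because the objective and every constraint are linear in z, any LP solver applies, which is the efficiency point the abstract makes.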
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have been
repopularized in recent years, and have proved competitive with
classical parametric approaches. In this paper we shall make an attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues such as compound estimation problems and
exchangeability, which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available, and surveys have already been published on the subject, in
the author's opinion the link with past work had not been fully
clarified.
Comment: Plenary Presentation at IFAC SYSID 2015. Submitted to Annual
Reviews in Control
Comment on "Support Vector Machines with Applications"
Comment on "Support Vector Machines with Applications" [math.ST/0612817]
Comment: Published at http://dx.doi.org/10.1214/088342306000000475 in
Statistical Science (http://www.imstat.org/sts/) by the Institute of
Mathematical Statistics (http://www.imstat.org)