Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs
Many real-world data are sampled functions. As shown by Functional Data
Analysis (FDA) methods, spectra, time series, images, gesture recognition data,
etc. can be processed more efficiently if their functional nature is taken into
account during the data analysis process. This is done by extending standard
data analysis methods so that they can apply to functional inputs. A general
way to achieve this goal is to compute projections of the functional data onto
a finite dimensional sub-space of the functional space. The coordinates of the
data on a basis of this sub-space provide standard vector representations of
the functions. The resulting vectors can be processed by any standard method. In
our previous work, this general approach was used to define projection-based
Multilayer Perceptrons (MLPs) with functional inputs. In this paper, we study
important theoretical properties of the proposed model. We show in
particular that MLPs with functional inputs are universal approximators: they
can approximate to arbitrary accuracy any continuous mapping from a compact
sub-space of a functional space to R. Moreover, we provide a consistency result
showing that any mapping from a functional space to R can be learned from
examples by a projection-based MLP: the generalization mean square error of
the MLP decreases to the smallest achievable mean square error on the data as
the number of examples goes to infinity.
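
As a rough illustration of the pipeline this abstract describes (project the sampled functions onto a finite-dimensional basis, then train a standard MLP on the coordinates), here is a minimal Python sketch. The Fourier basis, the least-squares projection, and scikit-learn's MLPRegressor are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a projection-based MLP with functional inputs.
# Assumptions (not from the paper): a Fourier basis, least-squares
# projection, and scikit-learn's MLPRegressor as the vector model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fourier_basis(t, n_basis):
    """Evaluate the first n_basis Fourier functions on a grid t in [0, 1]."""
    cols = [np.ones_like(t)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    return np.column_stack(cols)[:, :n_basis]

def project(curves, t, n_basis):
    """Least-squares coordinates of each sampled curve on the basis."""
    B = fourier_basis(t, n_basis)                 # (n_points, n_basis)
    coef, *_ = np.linalg.lstsq(B, curves.T, rcond=None)
    return coef.T                                 # (n_curves, n_basis)

# Toy data: noisy sinusoids x_i(t); target y_i is the mean of the
# curve over the grid (roughly its integral on [0, 1]).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
curves = np.array([np.sin(2 * np.pi * a * t) + rng.normal(0, .1, t.size)
                   for a in rng.uniform(1, 3, 200)])
y = curves.mean(axis=1)

X = project(curves, t, n_basis=7)                 # vector representation
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, y)
print("train R^2:", mlp.score(X, y))
```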
Consistency of Derivative Based Functional Classifiers on Sampled Data
In some applications, especially spectrometric ones, curve classifiers achieve better performance if they work on fixed-order derivatives of their inputs. This paper proposes a smoothing-spline-based approach that gives a strong theoretical grounding to this common practice.
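
A minimal sketch of the general idea, not the paper's method: smooth each sampled curve with a spline and classify on an evaluated derivative. The spline degree, smoothing factor, derivative order, and the logistic classifier are all assumptions.

```python
# Sketch: derivative-based curve classification via smoothing splines.
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import LogisticRegression

def derivative_features(curves, t, order=2, s=0.2):
    """Evaluate the order-th derivative of a smoothing-spline fit on t."""
    feats = []
    for x in curves:
        spl = UnivariateSpline(t, x, k=4, s=s)  # degree k must exceed order
        feats.append(spl.derivative(order)(t))
    return np.array(feats)

# Toy two-class data: phase-shifted noisy sinusoids.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 80)
c0 = [np.sin(2 * np.pi * t) + rng.normal(0, .05, t.size) for _ in range(50)]
c1 = [np.sin(2 * np.pi * t + .3) + rng.normal(0, .05, t.size) for _ in range(50)]
X = derivative_features(np.array(c0 + c1), t)
y = np.repeat([0, 1], 50)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```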
Support vector machine for functional data classification
In many applications, input data are sampled functions taking their values in
infinite-dimensional spaces rather than standard vectors. This fact has complex
consequences for data analysis algorithms and motivates adapting them.
Indeed, most traditional data analysis tools for regression,
classification and clustering have been adapted to functional inputs under the
general name of Functional Data Analysis (FDA). In this paper, we investigate
the use of Support Vector Machines (SVMs) for functional data analysis and we
focus on the problem of curve discrimination. SVMs are large-margin classifiers
based on implicit nonlinear mappings of the data into high-dimensional spaces
by means of kernels. We show how to define simple kernels that take the
functional nature of the data into account and lead to consistent
classification. Experiments conducted on real-world data emphasize the benefit
of taking some functional aspects of the problems into account.
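
To make the kernel idea concrete, here is a hedged sketch of one simple functional kernel: a Gaussian RBF on L2 distances approximated by quadrature over the sampling grid. The quadrature weighting and the bandwidth sigma are assumptions; the paper's own kernels may differ.

```python
# Sketch: SVM with a "functional" Gaussian kernel on approximate L2 distances.
import numpy as np
from sklearn.svm import SVC

def l2_gram(A, B, t, sigma=1.0):
    """Gram matrix exp(-||a - b||_{L2}^2 / (2 sigma^2)), L2 norm by quadrature."""
    w = np.gradient(t)  # approximate quadrature weights on the grid
    d2 = np.array([[np.sum(w * (a - b) ** 2) for b in B] for a in A])
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy two-class data: Gaussian bumps centered at different locations.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 60)
X0 = np.array([np.exp(-((t - .4) ** 2) / .01) + rng.normal(0, .1, t.size)
               for _ in range(40)])
X1 = np.array([np.exp(-((t - .6) ** 2) / .01) + rng.normal(0, .1, t.size)
               for _ in range(40)])
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 40)

K = l2_gram(X, X, t, sigma=0.5)
svm = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", svm.score(K, y))
```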
A Consistency Result for Functional SVMs via Spline Interpolation
This Note proposes a new methodology for function classification with Support
Vector Machines (SVMs). Rather than relying on projection onto a truncated
Hilbert basis as in our previous work, we use an implicit spline interpolation
that allows us to compute the SVM on the derivatives of the studied functions.
To that end, we propose a kernel defined directly on the discretizations of the
observed functions. We show that this method is universally consistent.
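
A hedged sketch of the flavor of such a kernel: interpolate each discretized curve with a cubic spline and take an approximate L2 inner product of the first derivatives. The cubic spline, the derivative order, and the quadrature weights are assumptions, not the Note's exact construction.

```python
# Sketch: SVM kernel defined on discretized curves via spline derivatives.
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.svm import SVC

def derivative_kernel(A, B, t):
    """K(a, b) ~ <a', b'>_{L2}, derivatives taken via cubic spline interpolation."""
    w = np.gradient(t)  # approximate quadrature weights
    dA = np.array([CubicSpline(t, a)(t, 1) for a in A])  # first derivatives
    dB = np.array([CubicSpline(t, b)(t, 1) for b in B])
    return (dA * w) @ dB.T

# Toy two-class data: sinusoids of different frequencies.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 60)
X0 = np.array([np.sin(2 * np.pi * t) + rng.normal(0, .1, t.size)
               for _ in range(40)])
X1 = np.array([np.sin(3 * np.pi * t) + rng.normal(0, .1, t.size)
               for _ in range(40)])
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 40)

K = derivative_kernel(X, X, t)
svm = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", svm.score(K, y))
```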
Optimal Bayes Classifiers for Functional Data and Density Ratios
Bayes classifiers for functional data pose a challenge. This is because
probability density functions do not exist for functional data. As a
consequence, the classical Bayes classifier using density quotients needs to be
modified. We propose to use density ratios of projections on a sequence of
eigenfunctions that are common to the groups to be classified. The density
ratios can then be factored into density ratios of individual functional
principal components whence the classification problem is reduced to a sequence
of nonparametric one-dimensional density estimates. This is an extension to
functional data of some of the very earliest nonparametric Bayes classifiers
that were based on simple density ratios in the one-dimensional case. By means
of the factorization of the density quotients the curse of dimensionality that
would otherwise severely affect Bayes classifiers for functional data can be
avoided. We demonstrate that in the case of Gaussian functional data, the
proposed functional Bayes classifier reduces to a functional version of the
classical quadratic discriminant. A study of the asymptotic behavior of the
proposed classifiers in the large sample limit shows that under certain
conditions the misclassification rate converges to zero, a phenomenon that has
been referred to as "perfect classification". The proposed classifiers also
perform favorably in finite sample applications, as we demonstrate in
comparisons with other functional classifiers in simulations and various data
applications, including wine spectral data, functional magnetic resonance
imaging (fMRI) data for attention deficit hyperactivity disorder (ADHD)
patients, and yeast gene expression data.
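
A minimal sketch of the density-ratio construction under strong simplifications: eigenfunctions estimated by ordinary PCA on the pooled sampled curves, one-dimensional kernel density estimates per component, and classification by the summed log density ratios. The component count, the KDE bandwidths, and the implicit assumption of equal priors are all illustrative choices, not the paper's estimator.

```python
# Sketch: functional Bayes classification via per-component density ratios.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

def fit_bayes(X0, X1, n_comp=3):
    """PCA on pooled curves; one 1-D KDE per class and component score."""
    pca = PCA(n_components=n_comp).fit(np.vstack([X0, X1]))
    S0, S1 = pca.transform(X0), pca.transform(X1)
    kdes0 = [gaussian_kde(S0[:, j]) for j in range(n_comp)]
    kdes1 = [gaussian_kde(S1[:, j]) for j in range(n_comp)]
    return pca, kdes0, kdes1

def predict(pca, kdes0, kdes1, X):
    """Sum log density ratios over components; equal priors assumed."""
    S = pca.transform(X)
    logr = sum(np.log(k1(S[:, j]) + 1e-12) - np.log(k0(S[:, j]) + 1e-12)
               for j, (k0, k1) in enumerate(zip(kdes0, kdes1)))
    return (logr > 0).astype(int)

# Toy two-class data: sinusoids with different amplitude distributions.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
X0 = np.array([a * np.sin(2 * np.pi * t) + rng.normal(0, .1, t.size)
               for a in rng.normal(1.0, .2, 60)])
X1 = np.array([a * np.sin(2 * np.pi * t) + rng.normal(0, .1, t.size)
               for a in rng.normal(1.4, .2, 60)])

pca, k0, k1 = fit_bayes(X0, X1)
pred = predict(pca, k0, k1, np.vstack([X0, X1]))
print("train accuracy:", np.mean(pred == np.repeat([0, 1], 60)))
```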