Relaxed 2-D Principal Component Analysis by $L_p$ Norm for Face Recognition
A relaxed two-dimensional principal component analysis (R2DPCA) approach is
proposed for face recognition. Different from the 2DPCA, 2DPCA-$L_1$ and
G2DPCA, the R2DPCA utilizes the label information (if known) of training
samples to calculate a relaxation vector and assigns a weight to each subset
of training data. A new relaxed scatter matrix is defined, and the computed
projection axes are able to increase the accuracy of face recognition. The
optimal $L_p$-norms are selected in a reasonable range. Numerical experiments
on practical face databases indicate that the R2DPCA has high generalization
ability and can achieve a higher recognition rate than state-of-the-art
methods.
Comment: 19 pages, 11 figures
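The abstract builds on the standard 2DPCA image scatter matrix, so a minimal
sketch of that baseline projection step may help fix ideas. This is not the
authors' code: the data, sizes, and variable names are illustrative
assumptions, and the relaxation vector, subset weights, and $L_p$-norm
selection that distinguish R2DPCA are only flagged in comments, since the
abstract does not give their formulas.

```python
# Minimal sketch of the plain 2DPCA projection step (the unrelaxed baseline).
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 32, 32))   # stand-in for 100 face images

# Image scatter matrix: G = (1/N) * sum_i (A_i - mean)^T (A_i - mean).
mean_img = images.mean(axis=0)
centered = images - mean_img
G = np.einsum('nij,nik->jk', centered, centered) / len(images)
# In R2DPCA, each term of this sum would additionally carry a relaxation
# weight derived from the (known) class labels of the training subsets,
# and the objective would use a selected L_p norm rather than L_2.

# Projection axes = leading eigenvectors of the scatter matrix.
eigvals, eigvecs = np.linalg.eigh(G)      # eigenvalues in ascending order
W = eigvecs[:, ::-1][:, :8]               # top-8 projection axes, (32, 8)

features = centered @ W                   # (100, 32, 8) feature matrices
print(features.shape)
```

Recognition then proceeds by comparing these low-dimensional feature
matrices, e.g. with a nearest-neighbour rule.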
On a vector-valued Hopf-Dunford-Schwartz lemma
In this paper, we state as a conjecture a vector-valued Hopf-Dunford-Schwartz
lemma and give a partial answer to it. As an application of this powerful
result, we prove some Fefferman-Stein inequalities in the setting of Dunkl
analysis, where the classical tools of real analysis cannot be applied.
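For context, the classical scalar Hopf-Dunford-Schwartz maximal ergodic lemma
that the conjecture generalizes can be sketched as follows; this is a standard
formulation from ergodic theory with an unspecified constant $C_p$, not a
statement quoted from the paper.

```latex
% Let $(\Omega,\mu)$ be a $\sigma$-finite measure space and let $T$ be a
% Dunford-Schwartz operator, i.e. a linear operator that is simultaneously
% a contraction on $L^1(\mu)$ and on $L^\infty(\mu)$. Then for every
% $1 < p \le \infty$ there is a constant $C_p$ depending only on $p$ with
\[
  \Bigl\| \, \sup_{n \ge 1} \frac{1}{n}
    \Bigl| \sum_{k=0}^{n-1} T^k f \Bigr| \, \Bigr\|_{L^p(\mu)}
  \;\le\; C_p \, \| f \|_{L^p(\mu)},
  \qquad f \in L^p(\mu).
\]
```

The vector-valued version conjectured in the paper would presumably control an
$\ell^q$-valued analogue of this maximal function, in the spirit of the
Fefferman-Stein maximal inequality it is used to prove.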
Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs
Many real-world data are sampled functions. As shown by Functional Data
Analysis (FDA) methods, spectra, time series, images, gesture recognition data,
etc. can be processed more efficiently if their functional nature is taken into
account during the data analysis process. This is done by extending standard
data analysis methods so that they can apply to functional inputs. A general
way to achieve this goal is to compute projections of the functional data onto
a finite-dimensional subspace of the functional space. The coordinates of the
data on a basis of this subspace provide standard vector representations of
the functions, and the resulting vectors can be processed by any standard
method. In our previous work, this general approach was used to define
projection-based Multilayer Perceptrons (MLPs) with functional inputs. In this
paper we study important theoretical properties of the proposed model. We show
in particular that MLPs with functional inputs are universal approximators:
they can approximate to arbitrary accuracy any continuous mapping from a
compact subspace of a functional space to R. Moreover, we provide a
consistency result showing that any mapping from a functional space to R can
be learned from examples by a projection-based MLP: the generalization mean
square error of the MLP decreases to the smallest possible mean square error
on the data as the number of examples goes to infinity.
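The projection step described above is straightforward to make concrete:
expand each sampled function on a fixed finite basis and feed the resulting
coordinates to an ordinary MLP. The Python sketch below does this under
assumed choices (a small Fourier-plus-linear basis, synthetic data, and
scikit-learn's MLPRegressor); the paper does not prescribe these particulars.

```python
# Minimal sketch of a projection-based functional MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                    # common sampling grid

# Synthetic sampled functions x_i(t) and scalar targets y_i.
n = 500
c = rng.normal(size=(n, 3))
X_func = (c[:, [0]] * np.sin(2 * np.pi * t)
          + c[:, [1]] * np.cos(2 * np.pi * t)
          + c[:, [2]] * t)                        # shape (n, len(t))
y = np.sin(c[:, 0]) + c[:, 1] ** 2                # continuous functional of x_i

# Projection: coordinates of each function on a finite basis, by least squares.
basis = np.stack([np.ones_like(t),
                  np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                  np.sin(4 * np.pi * t), np.cos(4 * np.pi * t), t])
coords, *_ = np.linalg.lstsq(basis.T, X_func.T, rcond=None)
coords = coords.T                                 # (n, 6) vector inputs

# Standard MLP on the projected coordinates.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(coords[:400], y[:400])
print("held-out R^2:", mlp.score(coords[400:], y[400:]))
```

The paper's universal approximation and consistency results say that, with a
rich enough basis and enough hidden units and examples, this scheme can
approach the best attainable mean square error.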