Analyse locale de la forme 3D pour la reconnaissance d'expressions faciales
National audience. In this paper we propose a novel approach for identity-independent 3D facial expression recognition. Our approach is based on shape analysis of local patches extracted from 3D facial shape models. A Riemannian framework is applied to compute geodesic distances between corresponding patches belonging to different faces of the BU-3DFE database and conveying different expressions. Quantitative measures of similarity are obtained and then used as inputs to several multi-class classification methods. Using Multiboosting and Support Vector Machine (SVM) classifiers, we achieve average recognition rates for the six basic expressions of 98.81% and 97.75%, respectively.
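The core quantity in this approach is a geodesic distance between corresponding patches under a Riemannian shape metric. A widely used elastic framework of this kind represents a curve by its square-root velocity (SRV) function, where the L2 distance between SRV representations gives the geodesic distance in the pre-shape space of open curves. The sketch below is only a minimal 2D illustration of that idea (names `srv` and `elastic_distance` are ours, and reparametrisation invariance and the paper's actual 3D patch representation are omitted):

```python
import numpy as np

def srv(curve):
    # Square-root velocity representation: q(t) = c'(t) / sqrt(||c'(t)||),
    # assuming the curve is sampled uniformly on [0, 1].
    dt = 1.0 / (len(curve) - 1)
    vel = np.gradient(curve, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(speed, 1e-12))

def elastic_distance(c1, c2):
    # L2 distance between SRV representations: the geodesic distance in the
    # elastic pre-shape space of open curves (no reparametrisation step here).
    t = np.linspace(0.0, 1.0, len(c1))
    q1, q2 = srv(c1), srv(c2)
    return float(np.sqrt(np.trapz(np.sum((q1 - q2) ** 2, axis=1), t)))

t = np.linspace(0.0, 1.0, 200)[:, None]
arc = np.hstack([np.cos(np.pi * t), np.sin(np.pi * t)])   # half circle
line = np.hstack([t, t])                                  # straight segment
print(elastic_distance(arc, arc))    # identical curves -> 0.0
print(elastic_distance(arc, line))   # strictly positive
```

In the paper, distances of this kind between corresponding patches become the similarity features fed to the Multiboosting and SVM classifiers.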
Semantically Informed Multiview Surface Refinement
We present a method to jointly refine the geometry and semantic segmentation
of 3D surface meshes. Our method alternates between updating the shape and the
semantic labels. In the geometry refinement step, the mesh is deformed with
variational energy minimization, such that it simultaneously maximizes
photo-consistency and the compatibility of the semantic segmentations across a
set of calibrated images. Label-specific shape priors account for interactions
between the geometry and the semantic labels in 3D. In the semantic
segmentation step, the labels on the mesh are updated with MRF inference, such
that they are compatible with the semantic segmentations in the input images.
Also, this step includes prior assumptions about the surface shape of different
semantic classes. The priors induce a tight coupling, where semantic
information influences the shape update and vice versa. Specifically, we
introduce priors that favor (i) adaptive smoothing, depending on the class
label; (ii) straightness of class boundaries; and (iii) semantic labels that
are consistent with the surface orientation. The novel mesh-based
reconstruction is evaluated in a series of experiments with real and synthetic
data. We compare against both state-of-the-art voxel-based semantic 3D
reconstruction and purely geometric mesh refinement, and demonstrate that the
proposed scheme yields improved 3D geometry as well as an improved semantic
segmentation.
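The alternating structure of the method (geometry update under label-specific smoothness priors, then a label update that respects the new geometry) can be illustrated on a toy 1D problem. Everything below is a stand-in of our own devising, not the paper's energies: heights `z` play the role of the mesh, class 0 is smoothed weakly and class 1 strongly, and the label pass is a single sweep of a Potts-style cost rather than full MRF inference:

```python
import numpy as np

def refine_geometry(z, labels, smooth_w=(0.1, 0.9), n_iter=50):
    # Label-dependent smoothing: each vertex is pulled toward its neighbours'
    # mean with a weight chosen by its class (np.roll wraps at the ends).
    z = z.copy()
    for _ in range(n_iter):
        neighbor_mean = 0.5 * (np.roll(z, 1) + np.roll(z, -1))
        w = np.take(smooth_w, labels)
        z = (1 - w) * z + w * neighbor_mean
    return z

def refine_labels(z, labels, pairwise=0.5):
    # One sweep of a toy Potts-style update: data term (class 0 prefers z near
    # 1, class 1 prefers z near 0) plus a penalty for disagreeing neighbours.
    labels = labels.copy()
    for i in range(1, len(labels) - 1):
        costs = []
        for l in (0, 1):
            data = abs(z[i] - (1.0 if l == 0 else 0.0))
            potts = pairwise * ((l != labels[i - 1]) + (l != labels[i + 1]))
            costs.append(data + potts)
        labels[i] = int(np.argmin(costs))
    return labels

rng = np.random.default_rng(1)
z = np.concatenate([np.ones(10) + 0.1 * rng.standard_normal(10), np.zeros(10)])
labels = (z < 0.5).astype(int)
for _ in range(3):               # alternate the two steps, as in the paper
    z = refine_geometry(z, labels)
    labels = refine_labels(z, labels)
```

The point of the toy is only the coupling: the labels decide how strongly the geometry is smoothed, and the smoothed geometry in turn feeds the label update.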
Investigating Randomised Sphere Covers in Supervised Learning
In this thesis, we thoroughly investigate a simple Instance Based Learning (IBL) classifier known as the sphere cover. We propose a simple Randomised Sphere Cover Classifier (αRSC) and use several datasets to evaluate its classification performance. In addition, we analyse the generalisation error of the proposed classifier using bias/variance decomposition. A sphere cover classifier may be described within the compression scheme framework, which stipulates data compression as the reason for high generalisation performance. We investigate the compression capacity of αRSC using a sample compression bound. The compression scheme prompted us to search for new compression methods for αRSC; accordingly, we used a Gaussian kernel to investigate further data compression.
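A randomised sphere cover can be sketched in a few lines: spheres are grown around randomly chosen training points until they would touch the nearest opposite-class point, and a query takes the class of the sphere whose surface it is closest to. The sketch below is our simplified reading of that idea, not the thesis's exact αRSC (pruning of small spheres and the Gaussian-kernel variant are omitted):

```python
import numpy as np

def build_cover(X, y, rng):
    # Greedily cover the training set: pick a random uncovered point, grow a
    # sphere until just short of the nearest opposite-class point, and mark
    # the same-class points inside it as covered.
    spheres = []                          # (center, radius, label)
    covered = np.zeros(len(X), dtype=bool)
    while not covered.all():
        i = rng.choice(np.flatnonzero(~covered))
        d = np.linalg.norm(X - X[i], axis=1)
        enemy = d[y != y[i]].min()        # distance to nearest enemy point
        r = max(enemy - 1e-9, 0.0)
        covered |= (d < r) & (y == y[i])
        covered[i] = True
        spheres.append((X[i], r, y[i]))
    return spheres

def predict(spheres, x):
    # Class of the sphere whose boundary is nearest (negative margin = inside).
    margins = [np.linalg.norm(x - c) - r for c, r, _ in spheres]
    return spheres[int(np.argmin(margins))][2]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
spheres = build_cover(X, y, rng)
print(predict(spheres, np.array([0.1, 0.0])))   # class 0 for this toy data
```

Each sphere acts as a compressed summary of the training points it covers, which is what connects the classifier to the sample-compression analysis in the abstract.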
Semantic 3D Reconstruction with Finite Element Bases
We propose a novel framework for the discretisation of multi-label problems
on arbitrary, continuous domains. Our work bridges the gap between general FEM
discretisations, and labeling problems that arise in a variety of computer
vision tasks, including for instance those derived from the generalised Potts
model. Starting from the popular formulation of labeling as a convex relaxation
by functional lifting, we show that FEM discretisation is valid for the most
general case, where the regulariser is anisotropic and non-metric. While our
findings are generic and applicable to different vision problems, we
demonstrate their practical implementation in the context of semantic 3D
reconstruction, where such regularisers have proved particularly beneficial.
The proposed FEM approach leads to a smaller memory footprint as well as faster
computation, and it constitutes a very simple way to enable variable, adaptive
resolution within the same model.
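For intuition about the Potts-type regularisers this framework discretises, it helps to look at the simplest concrete instance: a 1D chain with a metric Potts penalty, where the labeling energy E(l) = Σᵢ D(i, lᵢ) + λ·#{i : lᵢ ≠ lᵢ₊₁} can be minimised exactly by dynamic programming. This is only a reference point, not the paper's FEM discretisation or its anisotropic, non-metric setting:

```python
import numpy as np

def potts_1d(data_cost, lam):
    # Exact minimiser of sum_i D(i, l_i) + lam * #{i : l_i != l_{i+1}}
    # by dynamic programming along the chain; data_cost has shape (n, k).
    n, k = data_cost.shape
    cost = data_cost[0].astype(float)
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        jump = cost.min() + lam          # switch from the cheapest label
        src = int(cost.argmin())
        stay = cost <= jump
        back[i] = np.where(stay, np.arange(k), src)
        cost = data_cost[i] + np.where(stay, cost, jump)
    labels = np.empty(n, dtype=int)
    labels[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):        # backtrack the optimal labeling
        labels[i - 1] = back[i, labels[i]]
    return labels, float(cost.min())

# Noisy piecewise-constant signal, three labels at values 0, 1, 2.
obs = np.array([0.1, -0.1, 0.4, 1.2, 0.9, 1.1, 2.0, 1.8])
data_cost = np.abs(obs[:, None] - np.array([0.0, 1.0, 2.0])[None, :])
labels, energy = potts_1d(data_cost, lam=0.4)
print(labels)   # [0 0 0 1 1 1 2 2]
print(energy)   # 2.0
```

On continuous multi-dimensional domains no such exact solver exists, which is why the paper relaxes the problem by functional lifting and then discretises the relaxation with finite element bases.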