
    Fast Approximation of Distance Between Elastic Curves using Kernels

    Elastic shape analysis on non-linear Riemannian manifolds provides an efficient and elegant way to simultaneously compare and register non-rigid shapes. In this formulation, shapes become points on a high-dimensional shape space. A geodesic between two points corresponds to the optimal deformation needed to register one shape onto the other, and the length of the geodesic provides a proper metric for shape comparison. However, computing geodesics, and therefore the metric, is computationally very expensive, as it involves a search over the space of all possible rotations and reparameterizations. The problem is even more acute in shape retrieval scenarios, where the query shape must be compared to every element of the collection being searched. In this paper, we propose a new procedure for approximating the metric using the framework of kernel functions. We demonstrate that this yields a fast approximation of the metric while preserving its invariance properties.
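
    As a rough illustration of the idea (not the paper's actual kernel construction), the sketch below represents curves by their square-root velocity functions (SRVFs) and replaces the exact elastic distance, which would require optimizing over rotations and reparameterizations, with the distance induced by a Gaussian kernel on the unaligned representations; the function names and the choice of kernel are assumptions made for illustration.

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function of a discretized curve (n_points x dim)."""
    vel = np.gradient(curve, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(speed, 1e-12))

def kernel_distance(curve1, curve2, gamma=1.0):
    """Surrogate distance induced by a Gaussian kernel on unaligned SRVFs:
    d(x, y) = sqrt(k(x, x) + k(y, y) - 2 k(x, y)).
    This skips the costly search over rotations and reparameterizations that
    the exact geodesic computation requires; both curves are assumed to be
    sampled with the same number of points."""
    q1, q2 = srvf(curve1), srvf(curve2)
    k12 = np.exp(-gamma * np.linalg.norm(q1 - q2) ** 2)
    return np.sqrt(max(2.0 - 2.0 * k12, 0.0))
```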

    Extremal Human Curves: a New Human Body Shape and Pose Descriptor

    Automatic estimation of 3D shape similarity from video is an important ingredient of human action analysis, but also a challenging task due to variations in body topology and the high dimensionality of the pose configuration space. We consider the problem of 3D shape similarity in 3D video sequences across different actors and motions. Most current approaches use conventional global features as shape descriptors and define shape similarity using the L2 distance. However, such methods are limited to coarse representations and do not sufficiently reflect pose similarity as perceived by humans. In this paper, we present a novel 3D human pose descriptor called Extremal Human Curves (EHC), extracted from both the spatial and the topological dimensions of the body surface. To compare two shapes, we use an elastic metric in shape space between their descriptors, based on static features, and then perform temporal convolutions, thereby capturing the pose information encoded in multiple adjacent frames. We quantitatively analyze the effectiveness of our descriptors for both 3D shape similarity in video and content-based pose retrieval on static shapes, and show that each can contribute, sometimes substantially, to more reliable human shape and pose analysis. Experimental results are promising and show the robustness and accuracy of the proposed approach when its recognition performance is compared against several state-of-the-art methods.
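
    A minimal sketch of the per-frame comparison step, assuming each body is already represented as an ordered list of extremal curves (NumPy arrays of 3D points): it uses a simplified translation-invariant L2 distance between arc-length-resampled curves in place of the full elastic shape-space metric used by the EHC descriptor, and all function names are hypothetical.

```python
import numpy as np

def resample_curve(curve, n=100):
    """Arc-length resampling so curves with different point counts can be compared."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(u, t, curve[:, d]) for d in range(curve.shape[1])])

def curve_distance(c1, c2, n=100):
    """Simplified per-curve distance: L2 between centered, resampled curves.
    The actual EHC descriptor uses an elastic (shape-space) metric instead."""
    a, b = resample_curve(c1, n), resample_curve(c2, n)
    a -= a.mean(axis=0)   # translation invariance
    b -= b.mean(axis=0)
    return np.linalg.norm(a - b) / np.sqrt(n)

def ehc_similarity(curves_a, curves_b):
    """Compare two bodies given as ordered lists of extremal curves by
    averaging the per-curve distances between corresponding curves."""
    assert len(curves_a) == len(curves_b)
    return float(np.mean([curve_distance(x, y) for x, y in zip(curves_a, curves_b)]))
```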

    3D Face Recognition under Expressions, Occlusions, and Pose Variations


    Elastic Shape Models for Face Analysis Using Curvilinear Coordinates

    This paper studies the problem of analyzing variability in the shapes of facial surfaces using a Riemannian framework, a fundamental approach that allows for joint matching, comparison, and deformation of faces under a chosen metric. The starting point is to impose a curvilinear coordinate system, named the Darcyan coordinate system, on facial surfaces; it is based on the level curves of the surface distance function measured from the tip of the nose. Each facial surface is then represented as an indexed collection of these level curves. The task of finding optimal deformations, or geodesic paths, between facial surfaces reduces to that of finding geodesics between level curves, which is accomplished using the theory of elastic shape analysis of 3D curves. The elastic framework allows for nonlinear matching between curves and between points across curves. The resulting geodesics provide optimal elastic deformations between faces and an elastic metric for comparing facial shapes. We demonstrate these ideas using examples from the FSU face database.
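
    The sketch below illustrates the curve-extraction step under a simplifying assumption: it uses Euclidean distance from the nose tip in place of the surface (geodesic) distance that actually defines the Darcyan coordinates, and orders each extracted band of points angularly to form a level curve. Function names, tolerances, and the ordering heuristic are illustrative, not the paper's implementation.

```python
import numpy as np

def level_curves(points, nose_tip, radii, tol=1.0):
    """Represent a facial point cloud as an indexed collection of level curves
    of a distance function measured from the nose tip.

    points: (N, 3) array of surface points; nose_tip: (3,) array;
    radii: iterable of distance levels; tol: half-width of the band kept
    around each level. Euclidean distance stands in for geodesic distance."""
    d = np.linalg.norm(points - nose_tip, axis=1)
    curves = []
    for r in radii:
        band = points[np.abs(d - r) < tol]          # points near the level set
        if len(band) == 0:
            curves.append(band)
            continue
        # order the band angularly around the nose tip (projection on the xy-plane)
        rel = band - nose_tip
        angles = np.arctan2(rel[:, 1], rel[:, 0])
        curves.append(band[np.argsort(angles)])
    return curves
```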

    Deformation Based 3D Facial Expression Representation

    We propose a deformation-based representation for analyzing expressions from 3D faces. A point cloud of a 3D face is decomposed into an ordered deformable set of curves that start from a fixed point. Subsequently, a mapping function is defined that identifies the set of curves with an element of a high-dimensional matrix Lie group, specifically a direct product of SE(3). Representing 3D faces as elements of a high-dimensional Lie group has two main advantages. First, using the group structure, facial expressions can be decoupled from the neutral face. Second, the underlying non-linear facial expression manifold can be captured with the Lie group and mapped to a linear space, the Lie algebra of the group. This opens up the possibility of classifying facial expressions with linear models without compromising the underlying manifold. Alternatively, linear combinations of linearised facial expressions can be mapped back from the Lie algebra to the Lie group. The approach is tested on the BU-3DFE and Bosphorus datasets. The results show that the proposed approach performs comparably on the BU-3DFE dataset without using features or extensive landmark points.
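
    A small sketch of the two advantages, assuming each face is already encoded as a list of 4x4 homogeneous matrices in a direct product of SE(3) (the construction of that encoding from the curves is omitted): the relative group element decouples the expression from the neutral face, and the matrix logarithm/exponential move between the group and its Lie algebra. Function names are hypothetical, and SciPy's generic matrix logarithm stands in for a closed-form SE(3) log.

```python
import numpy as np
from scipy.linalg import logm, expm

def decouple_expression(face_se3, neutral_se3):
    """Decouple an expression from the neutral face using the group structure:
    for each SE(3) factor g_i of the expressive face and h_i of the neutral
    face, the relative element g_i h_i^{-1} encodes the deformation alone."""
    return [g @ np.linalg.inv(h) for g, h in zip(face_se3, neutral_se3)]

def to_lie_algebra(expr_se3):
    """Map each SE(3) element to se(3) via the matrix logarithm and flatten,
    giving a vector in a linear space suitable for linear classifiers."""
    return np.concatenate([np.real(logm(g)).ravel() for g in expr_se3])

def from_lie_algebra(vec, n_factors):
    """Inverse map: reshape and exponentiate back to the product Lie group,
    e.g. after forming linear combinations of linearised expressions."""
    mats = vec.reshape(n_factors, 4, 4)
    return [expm(m) for m in mats]
```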

    3D facial expression recognition using SIFT descriptors of automatically detected keypoints

    Methods to recognize human facial expressions have been proposed that mainly focus on 2D still images and videos. In this paper, the problem of person-independent facial expression recognition is addressed using the 3D geometry information extracted from the 3D shape of the face. To this end, a completely automatic approach is proposed that relies on identifying a set of facial keypoints, computing SIFT feature descriptors of depth images of the face around sample points defined starting from the facial keypoints, and selecting the subset of features with maximum relevance. By training a Support Vector Machine (SVM) for each facial expression to be recognized and combining them into a multi-class classifier, an average recognition rate of 78.43% on the BU-3DFE database has been obtained. A comparison with competing approaches using a common experimental setting on the BU-3DFE database shows that our solution achieves state-of-the-art results. The same 3D face representation framework and testing database have also been used to perform 3D facial expression retrieval (i.e., retrieving 3D scans showing the same facial expression as a target subject), with results proving the viability of the proposed solution.
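
    A condensed sketch of such a pipeline using standard libraries (OpenCV SIFT on normalized depth images, feature selection, and one SVM per expression combined one-vs-rest). The keypoint detection itself is assumed given, and mutual-information selection is used only as a stand-in for the paper's maximum-relevance feature selection.

```python
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline

def sift_at_keypoints(depth_image, keypoints_xy, patch_size=16):
    """SIFT descriptors of a depth image at externally provided keypoints.
    depth_image: float depth map; keypoints_xy: (K, 2) array of (x, y)."""
    img8 = cv2.normalize(depth_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in keypoints_xy]
    _, desc = sift.compute(img8, kps)
    return desc.ravel()            # one fixed-length feature vector per face

# One SVM per facial expression, combined into a multi-class classifier,
# preceded by a simple relevance-based feature selection step.
classifier = make_pipeline(
    SelectKBest(mutual_info_classif, k=200),
    OneVsRestClassifier(SVC(kernel="rbf", C=10.0)),
)
# classifier.fit(X_train, y_train); classifier.predict(X_test)
```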

    3D FACE RECOGNITION USING LOCAL FEATURE BASED METHODS

    Face recognition has attracted more attention from researchers than other biometrics due to its non-intrusive and user-friendly nature. Although several methods for 2D face recognition have been proposed, challenges remain for 2D faces, including illumination, pose variation, and facial expression. In the last few decades, the 3D face research area has become more attractive, since shape and geometry information can be used to handle the challenges faced by 2D methods. Existing algorithms for face recognition fall into three categories: holistic feature-based, local feature-based, and hybrid methods. According to the literature, local features show better performance than holistic feature-based methods under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition are studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based, and local surface-based methods. Inspired by keypoint-based methods, which are effective at handling partial occlusion, a structural context descriptor on pyramidal shape maps and texture images is proposed in a multimodal scheme. Score-level fusion is used to combine keypoint matching scores from the texture and shape modalities. The survey shows that local surface-based methods are efficient at handling facial expression. Accordingly, a local derivative pattern is introduced in this work to extract distinct features from depth maps. In addition, the local derivative pattern is applied to surface normals. Most 3D face recognition algorithms focus on the depth information to detect and extract features. Compared to depth maps, the surface normal at each point determines the facial surface orientation, which provides an efficient facial surface representation from which to extract distinct features for the recognition task. An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Expression- and occlusion-robust analysis using the information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed that combines a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE, and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
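
    As a small illustration of the surface-normal representation this work builds on (the local derivative pattern, ELM auto-encoder, and SRC/ELM fusion are not reproduced here), the sketch below estimates per-pixel unit normals directly from a depth map via finite differences; the function name is hypothetical.

```python
import numpy as np

def depth_to_normals(depth):
    """Per-pixel surface normals from a depth map.
    depth: (H, W) array of depth values; returns (H, W, 3) unit normals.
    The normal at each pixel is proportional to (-dz/dx, -dz/dy, 1), which
    captures the facial surface orientation used for feature extraction."""
    dzdy, dzdx = np.gradient(depth.astype(np.float64))
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(dzdx)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals
```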