7 research outputs found

    Skeleton-based canonical forms for non-rigid 3D shape retrieval

    The retrieval of non-rigid 3D shapes is an important task. A common technique is to simplify the problem to a rigid shape retrieval task by producing a bending-invariant canonical form for each shape in the dataset to be searched. Such techniques commonly attempt to "unbend" a shape by applying multidimensional scaling to the distances between points on the mesh, but this leads to unwanted local shape distortions. We instead perform the unbending on the skeleton of the mesh, and use this to drive the deformation of the mesh itself. This leads to a computational speed-up and less distortion of the local details of the shape. We compare our method against other canonical forms: our experiments show that it achieves state-of-the-art retrieval accuracy on a recent canonical forms benchmark, and only a small drop in retrieval accuracy from the state of the art on a second recent benchmark, while being significantly faster.
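
    The sketch below illustrates the standard "unbending" baseline the abstract refers to: classical multidimensional scaling applied to a matrix of pairwise (e.g. geodesic) distances, embedding the sample points into 3D as a bending-invariant canonical form. It is a minimal illustration of that baseline under the assumption that the distance matrix D is precomputed; it is not the authors' skeleton-driven method.

```python
# Classical MDS "unbending": embed an n x n distance matrix into low dimensions.
# D is assumed to hold pairwise (e.g. geodesic) distances between mesh samples.
import numpy as np

def classical_mds(D: np.ndarray, dim: int = 3) -> np.ndarray:
    """Embed an n x n distance matrix D into `dim` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]            # keep the `dim` largest
    w_top = np.clip(w[idx], 0.0, None)         # guard against tiny negatives
    return V[:, idx] * np.sqrt(w_top)          # n x dim canonical coordinates

# Toy example: three points sampled along a line.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X_canonical = classical_mds(D, dim=2)
```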

    Novel Correspondence-based Approach for Consistent Human Skeleton Extraction

    This paper presents a novel base-points-driven shape correspondence (BSC) approach to extract skeletons of articulated objects from 3D mesh shapes. Skeleton extraction based on the BSC approach is more accurate than traditional direct skeleton extraction methods. Because 3D shapes provide rich geometric information, BSC offers consistent information between the source shape and the target shapes. In this paper, we first extract the skeleton from a template shape, such as the source shape, automatically. Then, the skeletons of the target shapes in different poses are generated based on the correspondence relationship with the source shape. The accuracy of the proposed method is demonstrated by a comprehensive performance evaluation on multiple benchmark datasets. The results of the proposed approach can be applied to various applications such as skeleton-driven animation, shape segmentation and human motion analysis.
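
    As a rough illustration of driving skeleton generation through a shape correspondence, the sketch below transfers source skeleton joints to a target pose using a per-vertex correspondence map. The inverse-distance anchoring of joints to nearby vertices and the `correspondence` array are illustrative assumptions, not the BSC formulation of the paper.

```python
# Transfer a skeleton from a source mesh to a corresponding target mesh by
# anchoring each joint to its nearest source vertices and carrying those anchors
# through a per-vertex correspondence map.
import numpy as np

def transfer_skeleton(src_joints, src_vertices, tgt_vertices, correspondence, k=5):
    """Map source joints onto the target pose via their k nearest source vertices.

    src_joints     : (j, 3) source skeleton joint positions
    src_vertices   : (n, 3) source mesh vertices
    tgt_vertices   : (m, 3) target mesh vertices
    correspondence : (n,) index array, source vertex i <-> target vertex correspondence[i]
    """
    src_joints = np.asarray(src_joints, dtype=float)
    tgt_joints = np.empty_like(src_joints)
    for j, joint in enumerate(src_joints):
        d = np.linalg.norm(src_vertices - joint, axis=1)
        nearest = np.argsort(d)[:k]              # k nearest source vertices
        w = 1.0 / (d[nearest] + 1e-8)            # inverse-distance weights
        w /= w.sum()
        tgt_joints[j] = (w[:, None] * tgt_vertices[correspondence[nearest]]).sum(axis=0)
    return tgt_joints
```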

    Computational Modeling of Facial Response for Detecting Differential Traits in Autism Spectrum Disorders

    This dissertation proposes novel computational modeling and computer vision methods for the analysis and discovery of differential traits in subjects with Autism Spectrum Disorders (ASD) using video and three-dimensional (3D) images of the face and facial expressions. ASD is a neurodevelopmental disorder that impairs an individual's nonverbal communication skills. This work studies ASD from the pathophysiology of facial expressions, which may manifest atypical responses in the face. State-of-the-art psychophysical studies mostly employ naïve human raters to visually score atypical facial responses of individuals with ASD, which may be subjective, tedious, and error prone. A few quantitative studies use intrusive sensors on the face of the subjects with ASD, which, in turn, may inhibit or bias the natural facial responses of these subjects. This dissertation proposes non-intrusive computer vision methods to alleviate these limitations in the investigation of differential traits from the spontaneous facial responses of individuals with ASD. Two IRB-approved psychophysical studies are performed involving two groups of age-matched subjects: one for subjects diagnosed with ASD and the other for subjects who are typically developing (TD). The facial responses of the subjects are computed from their facial images using the proposed computational models and then statistically analyzed to infer the differential traits of the group with ASD. A novel computational model is proposed to represent the large volume of 3D facial data in a small pose-invariant Frenet frame-based feature space. The inherent pose-invariant property of the proposed features alleviates the need for an expensive 3D face registration in the pre-processing step. The proposed modeling framework is not only computationally efficient but also offers competitive performance in 3D face and facial expression recognition tasks when compared with state-of-the-art methods. This computational model is applied in the first experiment to quantify subtle facial muscle response from the geometry of 3D facial data. Results show a statistically significant asymmetry in the activation of a specific pair of facial muscles (p < 0.05) for the group with ASD, which suggests the presence of a psychophysical trait (also known as an 'oddity') in the facial expressions. For the first time in the ASD literature, the facial action coding system (FACS) is employed to classify the spontaneous facial responses based on facial action units (FAUs). Statistical analyses reveal a significantly (p < 0.01) higher prevalence of the smile expression (FAU 12) for the ASD group when compared with the TD group. The high prevalence of smiling co-occurred with significantly averted gaze (p < 0.05) in the group with ASD, which is indicative of impaired reciprocal communication. The metric associated with incongruent facial and visual responses suggests a behavioral biomarker for ASD. The second experiment shows a higher prevalence of mouth frown (FAU 15) and significantly lower correlations between the activation of several FAU pairs (p < 0.05) in the group with ASD when compared with the TD group. The proposed computational modeling offers promising biomarkers, which may aid in the early detection of subtle ASD-related traits, and thus enable effective intervention strategies in the future.
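
    A small sketch of why Frenet-frame quantities lend themselves to pose-invariant features: the curvature and torsion of a sampled 3D curve (for example a facial profile curve) are unchanged by rigid rotations and translations, so descriptors built from them require no prior 3D registration. The finite-difference discretisation below is a generic textbook construction, not the dissertation's feature model.

```python
# Discrete curvature and torsion of a sampled 3D curve; both are invariant under
# rigid motion, which is the property pose-invariant Frenet features rely on.
import numpy as np

def curvature_torsion(curve: np.ndarray):
    """Curvature and torsion of an (n, 3) sampled curve via finite differences."""
    d1 = np.gradient(curve, axis=0)          # first derivative (tangent direction)
    d2 = np.gradient(d1, axis=0)             # second derivative
    d3 = np.gradient(d2, axis=0)             # third derivative
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    kappa = np.linalg.norm(cross, axis=1) / np.maximum(speed ** 3, 1e-12)
    tau = np.einsum('ij,ij->i', cross, d3) / np.maximum(
        np.einsum('ij,ij->i', cross, cross), 1e-12)
    return kappa, tau

# A helix as a stand-in curve: rotating or translating it leaves kappa and tau unchanged.
t = np.linspace(0, 2 * np.pi, 200)
curve = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
kappa, tau = curvature_torsion(curve)
```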

    Isometric deformation invariant 3D shape recognition

    Intra-shape deformations complicate 3D shape recognition and therefore need proper modeling. To this end, an isometric deformation model is used in this paper. The proposed method does not need explicit point correspondences for the comparison of 3D shapes. The geodesic distance matrix is used as an isometry-invariant shape representation. Two approaches are described to arrive at a sampling-order-invariant shape descriptor: the histogram of geodesic distance matrix values and the set of largest singular values of the geodesic distance matrix. Shape comparison is performed by comparing the shape descriptors using the χ²-distance as the dissimilarity measure. For object recognition, the results obtained demonstrate that the singular value approach outperforms the histogram-based approach, as well as the state-of-the-art multidimensional scaling technique, the ICP baseline algorithm and other isometric deformation modeling methods found in the literature. Using the TOSCA database, a rank-1 recognition rate of 100% is obtained for the identification scenario, while the verification experiments are characterized by a 1.58% equal error rate. External validation demonstrates that the singular value approach outperforms all other participants in the non-rigid object retrieval contests of SHREC 2010 and SHREC 2011. For 3D face recognition, the rank-1 recognition rate is 61.9% and the equal error rate is 11.8% on the BU-3DFE database. This decreased performance is attributed to the fact that the isometric deformation assumption only holds to a limited extent for facial expressions, which is also demonstrated in this paper.
    Smeets D., Hermans J., Vandermeulen D., Suetens P., "Isometric deformation invariant 3D shape recognition", Pattern Recognition, vol. 45, no. 7, pp. 2817-2831, July 2012.
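
    The singular value descriptor described above can be sketched as follows: approximate the geodesic distance matrix with shortest paths on the mesh edge graph, keep the largest singular values as a sampling-order-invariant descriptor, and compare descriptors with the χ²-distance. This is a simplified illustration under those assumptions (graph shortest paths as the geodesic approximation, normalised singular values), not the authors' implementation.

```python
# Isometry-invariant shape descriptor: singular values of an approximate geodesic
# distance matrix, compared with the chi-squared dissimilarity.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_descriptor(vertices, edges, k=30):
    """Largest k singular values of the edge-graph geodesic distance matrix.

    vertices : (n, 3) mesh vertex positions
    edges    : (e, 2) integer array of mesh edges
    """
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)   # edge lengths
    n = len(vertices)
    graph = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    D = dijkstra(graph, directed=False)                     # all-pairs graph geodesics
    s = np.linalg.svd(D, compute_uv=False)[:k]              # largest singular values
    return s / s.sum()                                      # normalise for comparison

def chi2_distance(a, b, eps=1e-12):
    """Chi-squared dissimilarity between two descriptors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))
```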

    Proceedings of the XXXIV Jornadas de Automática

    Postprint (published version)