285 research outputs found

    Multiscale 3D Shape Analysis using Spherical Wavelets

    ©2005 Springer. The original publication is available at www.springerlink.com: http://dx.doi.org/10.1007/11566489_57 (DOI: 10.1007/11566489_57).
    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
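    The abstract contrasts its multiscale approach with a plain PCA shape prior. Below is a minimal sketch of that PCA baseline (not the spherical-wavelet method itself), assuming each shape is stored as a flattened vector of corresponded vertex coordinates; the function names and stand-in data are illustrative.

```python
# Minimal sketch of the PCA shape-prior baseline discussed above (not the
# spherical-wavelet method). Assumes each training shape is a flattened vector
# of corresponded vertex coordinates; all names and data are illustrative.
import numpy as np

def fit_pca_prior(train_shapes, n_modes):
    """train_shapes: (n_samples, 3*n_vertices) array of corresponded shapes."""
    mean = train_shapes.mean(axis=0)
    centered = train_shapes - mean
    # SVD of the centered data gives the principal modes of variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]           # mean shape and top modes

def approximate(shape, mean, modes):
    """Project a test shape onto the learned modes and reconstruct it."""
    coeffs = modes @ (shape - mean)
    return mean + modes.T @ coeffs

# Usage with random stand-in data:
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 300))      # 20 shapes, 100 vertices each
mean, modes = fit_pca_prior(train, n_modes=5)
test = rng.normal(size=300)
recon = approximate(test, mean, modes)
print(np.linalg.norm(test - recon))     # approximation error
```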

    Ensemble of Hankel Matrices for Face Emotion Recognition

    In this paper, a face emotion is considered the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by a set of multi-scale appearance features, each of which may be correlated with one or more concurrent signals. Extracting these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics governing each appearance-feature time series to discriminate among different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines a nearest-neighbour rule and a majority-vote scheme. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
    Comment: Paper to appear in Proc. of ICIAP 2015. arXiv admin note: text overlap with arXiv:1506.0500
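    As an illustration of the representation the abstract describes, the sketch below builds one Hankel matrix per appearance-feature time series and classifies with a per-feature nearest-neighbour rule combined by majority vote. The distance between Hankel matrices, the matrix size, and the absence of any feature extraction are simplifying assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch of the Hankel-matrix idea: build a Hankel matrix per
# feature time series, compare sequences feature-by-feature with a nearest-
# neighbour rule, and combine decisions by majority vote. The distance below
# is a simple stand-in, not the dissimilarity used in the paper. Series are
# assumed to have equal length.
import numpy as np
from collections import Counter

def hankel(series, rows):
    """Stack overlapping windows of a 1-D time series into a Hankel matrix."""
    cols = len(series) - rows + 1
    return np.stack([series[i:i + cols] for i in range(rows)])

def hankel_distance(a, b, rows=4):
    """Distance between two equal-length series via normalised Hankel matrices."""
    ha, hb = hankel(a, rows), hankel(b, rows)
    ha, hb = ha / np.linalg.norm(ha), hb / np.linalg.norm(hb)
    return np.linalg.norm(ha - hb)

def classify(query, gallery):
    """query: list of per-feature series; gallery: list of (series_list, label)."""
    votes = []
    for f in range(len(query)):                       # one vote per feature
        dists = [hankel_distance(query[f], g[0][f]) for g in gallery]
        votes.append(gallery[int(np.argmin(dists))][1])
    return Counter(votes).most_common(1)[0][0]        # majority vote
```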

    Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences

    We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting. Our approach relies on geometric features (edges and landmarks) and, inspired by the iterated closest point algorithm, is based on computing hard correspondences between model vertices and edge pixels. We demonstrate that this is superior to previous work that uses soft correspondences to form an edge-derived cost surface that is minimised by nonlinear optimisation.
    Comment: To appear in ACCV 2016 Workshop on Facial Informatics
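    The sketch below illustrates a single hard-correspondence step of the ICP-inspired idea: each projected model contour vertex is matched to its nearest image edge pixel. A pre-projected 2-D contour and a trivial translation update are simplifying assumptions; the actual method jointly optimises pose, shape and camera parameters.

```python
# Sketch of one hard-correspondence step in ICP-style edge fitting: each
# projected model edge vertex is matched to its nearest image edge pixel.
# The translation-only update is a simplifying assumption for illustration.
import numpy as np
from scipy.spatial import cKDTree

def hard_correspondences(projected_vertices, edge_pixels):
    """projected_vertices: (N, 2); edge_pixels: (M, 2). Returns matched pixels."""
    tree = cKDTree(edge_pixels)
    dists, idx = tree.query(projected_vertices)
    return edge_pixels[idx], dists

def estimate_translation(projected_vertices, matched_pixels):
    """One cheap update: 2-D translation aligning vertices to their matches."""
    return (matched_pixels - projected_vertices).mean(axis=0)

# Example with synthetic data:
rng = np.random.default_rng(1)
edges = rng.uniform(0, 100, size=(500, 2))       # detected edge pixels
verts = edges[:50] + np.array([0.5, -0.5])       # shifted "model contour"
matched, _ = hard_correspondences(verts, edges)
print(estimate_translation(verts, matched))      # pulls contour back, roughly [-0.5, 0.5]
```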

    Statistical Model of Shape Moments with Active Contour Evolution for Shape Detection and Segmentation

    This paper describes a novel method for shape representation and robust image segmentation. The proposed method combines two well-known methodologies, namely statistical shape models and active contours implemented in a level set framework. Shape detection is achieved by maximizing a posterior function that consists of a prior shape probability model and an image likelihood function conditioned on shapes. The statistical shape model is built through a learning process based on nonparametric probability estimation in a PCA-reduced feature space formed by the Legendre moments of training silhouette images. A greedy strategy is applied to optimize the proposed cost function by iteratively evolving an implicit active contour in the image space and subsequently performing constrained optimization of the evolved shape in the reduced shape feature space. Experimental results presented in the paper demonstrate that the proposed method, contrary to many other active contour segmentation methods, is highly resilient to the severe random and structural noise that may be present in the data.
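    The sketch below illustrates only the shape-feature side of the method as described: Legendre moments of binary silhouettes followed by PCA over the resulting moment vectors. The active-contour evolution and the MAP optimisation are not shown, and the moment order, image size and stand-in data are assumptions.

```python
# Hedged sketch of the shape-feature space: Legendre moments of a binary
# silhouette on [-1, 1]^2, then PCA over the moment vectors of a training set.
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moments(silhouette, order):
    """Legendre moments up to the given order for a binary image in [-1, 1]^2."""
    h, w = silhouette.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    dx, dy = 2.0 / w, 2.0 / h
    px = [Legendre.basis(m)(x) for m in range(order + 1)]   # P_m(x) samples
    py = [Legendre.basis(n)(y) for n in range(order + 1)]   # P_n(y) samples
    moments = np.empty((order + 1, order + 1))
    for m in range(order + 1):
        for n in range(order + 1):
            norm = (2 * m + 1) * (2 * n + 1) / 4.0
            moments[m, n] = norm * dx * dy * (py[n] @ silhouette @ px[m])
    return moments.ravel()

# PCA over moment vectors of training silhouettes (stand-in data):
rng = np.random.default_rng(2)
train = [(rng.uniform(size=(64, 64)) > 0.5).astype(float) for _ in range(10)]
feats = np.stack([legendre_moments(s, order=6) for s in train])
mean = feats.mean(axis=0)
_, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
reduced = (feats - mean) @ vt[:3].T        # low-dimensional shape feature space
print(reduced.shape)
```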

    A new computational solution to compute the uptake index from 99mTc-MDP bone scintigraphy images

    The appearance of bone metastases in patients with breast or prostate cancer makes the skeleton the site most affected by metastatic cancer. It is estimated that these two cancers lead to bone metastases in 80% of cases, and such metastases are considered the main cause of death. 99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy is the most commonly used radionuclide imaging technique for the detection and prognosis of bone carcinoma. This work aimed to develop a new computational solution to extract, from 99mTc-MDP bone scintigraphy images, quantitative measurements of the affected regions relative to the non-pathological regions. The uptake indexes computed from a new imaging exam are then compared with the indexes computed from a previous exam of the same patient. Using active shape models, the regions of the skeleton most prone to be affected by bone carcinoma are segmented, while the metastases are segmented using a region-growing algorithm. The uptake index is then calculated from the relation between the maximum-intensity pixel of the metastatic region and the maximum-intensity pixel of the skeletal region where the metastasis is located. We evaluated the developed solution using scintigraphic images of 15 patients (7 females and 8 males) with bone carcinoma, each imaged at two distinct times. The bone scans were obtained approximately 3 h after the injection of 740 MBq of 99mTc-MDP. The obtained indexes were compared against the evaluations in the patients' clinical reports and agree with the clinical evaluations of the 30 exams analyzed. In 2 cases, however, the clinical evaluation was unclear as to the progression or regression of the disease; the computed indexes suggest progression of the disease in one case and regression in the other. Based on these results, we conclude that the computed indexes support a quantitative analysis of the response to the prescribed therapy. The developed solution is therefore promising as a tool to help technicians at the time of clinical evaluation.
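    The sketch below illustrates the uptake-index computation, interpreting the "relation" between maximum-intensity pixels as a ratio; that interpretation, the binary-mask inputs standing in for the ASM and region-growing segmentations, and all names are assumptions for illustration only.

```python
# Minimal sketch of an uptake index: ratio of the brightest pixel in the
# segmented metastasis to the brightest pixel in the enclosing skeletal region.
# The segmentations are assumed to be given as binary masks.
import numpy as np

def uptake_index(image, metastasis_mask, skeleton_region_mask):
    """Ratio of maximum uptake inside the lesion to maximum uptake in its region."""
    lesion_max = image[metastasis_mask].max()
    region_max = image[skeleton_region_mask].max()
    return float(lesion_max) / float(region_max)

def compare_exams(index_current, index_previous):
    """Compare indexes from two exams of the same patient."""
    return "progression" if index_current > index_previous else "regression"

# Stand-in example:
rng = np.random.default_rng(3)
img = rng.uniform(0, 255, size=(128, 128))
lesion = np.zeros(img.shape, dtype=bool); lesion[60:70, 60:70] = True
region = np.zeros(img.shape, dtype=bool); region[40:90, 40:90] = True
print(uptake_index(img, lesion, region))
```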

    Recognising facial expressions in video sequences

    We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time. It is robust to strong illumination changes and factors out changes in appearance caused by illumination from changes due to face deformation. We adopt a model-based approach for facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with facial expressions is represented by a set of samples that model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach to combine the information provided by the incoming image sequence with the prior information stored in the expression manifold in order to compute a posterior probability associated with each facial expression. In the experiments conducted we show that this system is able to work in an unconstrained environment with strong changes in illumination and face location. It achieves an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
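    The sketch below illustrates one way to turn nearest-neighbour distances on a stored expression manifold into a posterior over expressions for a single frame; the exponential weighting, the temperature parameter and the random stand-in data are assumptions, and the temporal integration over the image sequence is not reproduced.

```python
# Hedged sketch: nearest-neighbour distances from the current deformation
# vector to the stored samples of each expression class, converted into
# class probabilities. The weighting is an illustrative assumption.
import numpy as np

def expression_posterior(deformation, manifold_samples, temperature=1.0):
    """manifold_samples: dict mapping expression label -> (n_i, d) sample array."""
    labels = list(manifold_samples)
    # Distance to the closest stored sample of each expression class.
    d = np.array([np.linalg.norm(manifold_samples[c] - deformation, axis=1).min()
                  for c in labels])
    scores = np.exp(-d / temperature)          # smaller distance -> higher score
    probs = scores / scores.sum()
    return dict(zip(labels, probs))

# Stand-in example with random deformation vectors:
rng = np.random.default_rng(4)
manifold = {e: rng.normal(size=(30, 8)) for e in ("joy", "anger", "surprise")}
query = rng.normal(size=8)
print(expression_posterior(query, manifold))
```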