
    What’s in a Smile? Initial results of multilevel principal components analysis of facial shape and image texture

    Multilevel principal components analysis (mPCA) has previously been shown to provide a simple and straightforward method of forming point distribution models that can be used in (active) shape models. Here we extend the mPCA approach to model image texture as well as shape. As a test case, we consider a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two facial expressions (smiling and neutral) per subject. Shape (in terms of landmark points) and image texture are considered separately in this initial analysis. Three-level models are constructed that contain levels for biological sex, “within-subject” variation (i.e., facial expression), and “between-subject” variation (i.e., all other sources of variation). By considering eigenvalues, we find that the order of importance as sources of variation for facial shape is: facial expression (47.5%), between-subject variation (45.1%), and then biological sex (7.4%). By contrast, the order for image texture is: between-subject variation (55.5%), facial expression (37.1%), and then biological sex (7.4%). The major modes for the facial-expression level of the mPCA models clearly reflect the increase in mouth size and the increased prominence of the cheeks during smiling, for both shape and texture. Even subtle effects, such as changes to eye and nose shape during smiling, are seen clearly. The major mode for the biological-sex level of the mPCA models similarly relates clearly to differences between male and female faces. Model fits yield “scores” for each principal component that show strong clustering, for both shape and texture, by biological sex and facial expression at the appropriate levels of the model. We conclude that mPCA correctly decomposes sources of variation due to biological sex and facial expression (etc.) and that it provides a reliable method of forming models of both shape and image texture.
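    The level-wise decomposition at the heart of mPCA can be sketched compactly. Below is a minimal, hypothetical Python illustration for two levels (between-subject and within-subject); it is not the authors' implementation, and a biological-sex level would be handled analogously by first grouping on sex. All function and variable names are assumptions.

```python
import numpy as np

def mpca_two_levels(X, subject_ids):
    """Minimal sketch of multilevel PCA: split variation into a
    between-subject level and a within-subject (e.g., expression) level,
    then eigen-decompose each level's covariance separately."""
    grand_mean = X.mean(axis=0)
    _, idx = np.unique(subject_ids, return_inverse=True)
    # Subject means (about the grand mean) carry between-subject variation.
    subj_means = np.stack([X[idx == k].mean(axis=0)
                           for k in range(idx.max() + 1)])
    cov_between = np.cov(subj_means - grand_mean, rowvar=False)
    # Residuals about each subject's mean carry within-subject variation.
    resid = X - subj_means[idx]
    cov_within = np.cov(resid, rowvar=False)
    # Sort each level's eigenpairs in decreasing order of eigenvalue.
    evals_b, evecs_b = np.linalg.eigh(cov_between)
    evals_w, evecs_w = np.linalg.eigh(cov_within)
    return (evals_b[::-1], evecs_b[:, ::-1]), (evals_w[::-1], evecs_w[:, ::-1])
```

    Comparing eigenvalue sums across the levels yields variance shares of the kind quoted above (e.g., 47.5% expression vs. 45.1% between-subject for shape).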

    Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment

    Facial action unit (AU) detection and face alignment are two highly correlated tasks, since facial landmarks provide precise AU locations that facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works treat face alignment as preprocessing and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are first learned, and high-level face alignment features are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module that refines the attention map of each AU adaptively. Finally, the assembled local features are integrated with face alignment features and global features for AU detection. Experiments on the BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms state-of-the-art methods for AU detection. Comment: This paper has been accepted by ECCV 2018.
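    As a rough, hypothetical illustration of the adaptive attention idea (a sketch only, not the authors' network; all layer sizes and names are assumptions), a landmark-derived initial attention map for an AU can be refined by a small convolutional head and used to gate the shared features:

```python
import torch
import torch.nn as nn

class AUAttentionRefine(nn.Module):
    """Sketch: refine a landmark-derived attention map for one AU and
    use it to weight shared features before AU classification."""
    def __init__(self, in_ch):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_ch + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, feats, init_att):
        # feats: (B, C, H, W) shared features; init_att: (B, 1, H, W)
        att = self.refine(torch.cat([feats, init_att], dim=1))
        return feats * att  # attended local features for this AU
```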

    WĀHINE MĀORI (Māori women): Keeping safe in unsafe relationships


    A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment

    Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments remain a great challenge. In this paper, we present a real-time-capable 3D face modelling framework for 2D in-the-wild images that is applicable to robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, showing improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
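    One building block of such landmark-only fitting is recovering head pose from 2D-3D landmark correspondences. The sketch below uses OpenCV's PnP solver under an assumed pinhole camera; it illustrates only the pose-estimation step, not the full 3D Morphable Model fit, and the focal-length guess is a common heuristic rather than anything from the paper.

```python
import numpy as np
import cv2

def estimate_head_pose(landmarks_2d, model_points_3d, image_size):
    """Recover the rotation/translation mapping mean-shape 3D landmarks
    onto the detected 2D landmarks (illustrative sketch)."""
    h, w = image_size
    focal = w  # crude focal-length assumption
    camera = np.array([[focal, 0, w / 2],
                       [0, focal, h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64),
                                  landmarks_2d.astype(np.float64),
                                  camera, None)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix used to re-pose the face
    return R, tvec
```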

    Endoscopic navigation in the absence of CT imaging

    Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference image to provide structural context to the clinician. In this paper, we present a system for navigation during clinical endoscopic exploration in the absence of computed tomography (CT) scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm together with dense reconstructions from video, we show that we are able to achieve submillimeter registrations on in-vivo clinical data and to assign confidence to these registrations using confidence criteria established on simulated data. Comment: 8 pages, 3 figures, MICCAI 2018.
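    As a hedged sketch of how a residual-based confidence could look (the paper's actual criteria are established from simulated data and may differ; names here are hypothetical), one can measure nearest-neighbour residuals between the video reconstruction and the registered shape model:

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_confidence(reconstructed_pts, registered_model_pts, tau_mm=1.0):
    """Score a registration by the fraction of reconstruction points
    whose nearest model point lies within tau_mm millimetres."""
    tree = cKDTree(registered_model_pts)
    dists, _ = tree.query(reconstructed_pts)
    return float(np.mean(dists < tau_mm)), float(dists.mean())
```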

    Semantic Context Forests for Learning-Based Knee Cartilage Segmentation in 3D MR Images

    The automatic segmentation of human knee cartilage from 3D MR images is a useful yet challenging task due to the thin sheet structure of the cartilage, with diffuse boundaries and inhomogeneous intensities. In this paper, we present an iterative multi-class learning method to segment the femoral, tibial and patellar cartilage simultaneously, which effectively exploits the spatial contextual constraints between bone and cartilage, and also between the different cartilages. First, based on the fact that cartilage grows only in certain areas of the corresponding bone surface, we extract distance features not only to the bone surface but also, more informatively, to densely registered anatomical landmarks on that surface. Second, we introduce a set of iterative discriminative classifiers in which, at each iteration, probability comparison features are constructed from the class confidence maps produced by previously learned classifiers. These features automatically embed semantic context information between the different cartilages of interest. Validated on a total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the proposed approach demonstrates high robustness and accuracy of segmentation in comparison with existing state-of-the-art MR cartilage segmentation methods. Comment: MICCAI 2013: Workshop on Medical Computer Vision.
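    A hypothetical sketch of the probability-comparison idea (details such as the offset-pair sampling are assumptions, not the paper's exact features): at each iteration, the previous classifier's class confidence maps are probed at offset pairs around a voxel and compared, so the next classifier sees the semantic context of neighbouring cartilage classes.

```python
import numpy as np

def probability_comparison_features(conf_maps, voxel_idx, offset_pairs):
    """conf_maps: (n_classes, D, H, W) confidence maps from the previous
    iteration; voxel_idx: 3-vector; offset_pairs: list of (off_a, off_b)
    3-vectors assumed to stay in bounds. Returns comparison features."""
    feats = []
    for c in range(conf_maps.shape[0]):
        for off_a, off_b in offset_pairs:
            p_a = conf_maps[c][tuple(voxel_idx + off_a)]
            p_b = conf_maps[c][tuple(voxel_idx + off_b)]
            feats.append(p_a - p_b)  # contrast between offset locations
    return np.asarray(feats)
```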

    Estimation of Cell Cycle States of Human Melanoma Cells with Quantitative Phase Imaging and Deep Learning

    Visualization and classification of cell cycle stages in live cells requires the introduction of transiently or stably expressed fluorescent markers. This is not feasible for all cell types and can be time-consuming to implement. Labelling of living cells also has the potential to perturb normal cellular function. Here we describe a computational strategy to estimate core cell cycle stages without markers by taking advantage of features extracted from information-rich ptychographic time-lapse movies. We show that a deep-learning approach can estimate the cell cycle trajectories of individual human melanoma cells from short 3-frame (~23-minute) snapshots, and can identify cell cycle arrest induced by chemotherapeutic agents targeting melanoma driver mutations.
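    A minimal sketch of the kind of classifier this implies (assumed architecture and names; the authors' model and training details are not reproduced here): stack the three phase-image frames as input channels and predict a cell cycle stage.

```python
import torch
import torch.nn as nn

class CellCycleNet(nn.Module):
    """Toy classifier over a 3-frame ptychographic snapshot."""
    def __init__(self, n_stages=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, n_stages)

    def forward(self, x):  # x: (B, 3, H, W), one channel per time point
        return self.head(self.features(x).flatten(1))
```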

    Towards Pose-Invariant 2D Face Classification for Surveillance

    A key problem for "face in the crowd" recognition from existing surveillance cameras in public spaces (such as mass transit centres) is the issue of pose mismatches between probe and gallery faces. In addition to accuracy, scalability is also important, necessarily limiting the complexity of face classification algorithms. In this paper we evaluate recent approaches to the recognition of faces at relatively large pose angles from a gallery of frontal images and propose novel adaptations as well as modifications. Specifically, we compare and contrast the accuracy, robustness and speed of an Active Appearance Model (AAM) based method (where realistic frontal faces are synthesized from non-frontal probe faces) against bag-of-features methods (local-feature approaches based on block Discrete Cosine Transforms and Gaussian Mixture Models). We present a novel approach in which the AAM-based technique is sped up by directly obtaining pose-robust features, allowing the omission of the computationally expensive and artefact-producing image synthesis step. Additionally, we adapt a histogram-based bag-of-features technique to face classification and contrast its properties with a previously proposed direct bag-of-features method. We also show that the two bag-of-features approaches can be considerably sped up, without loss of classification accuracy, via an approximation of the exponential function. Experiments on the FERET and PIE databases suggest that the bag-of-features techniques generally attain better performance, with significantly lower computational loads. The histogram-based bag-of-features technique achieves an average recognition accuracy of 89% for pose angles of around 25 degrees.
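    The exponential-function approximation mentioned above can be illustrated with a classic trick (a sketch only; the paper's exact scheme is not specified here): replace exp(x) by (1 + x/n)^n for a power-of-two n and evaluate it by repeated squaring, avoiding a transcendental call per GMM likelihood evaluation.

```python
import numpy as np

def fast_exp(x, n=256):
    """Approximate exp(x) as (1 + x/n)**n via repeated squaring;
    n must be a power of two. Accuracy improves with larger n."""
    y = 1.0 + x / n
    for _ in range(int(np.log2(n))):  # 8 squarings for n = 256
        y = y * y
    return y

# Example: fast_exp(-3.0) ≈ 0.0489 vs np.exp(-3.0) ≈ 0.0498
```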