Towards automatic landmarking on 2.5D face range images
In this paper, we propose an algorithm to
automatically landmark points on 2.5-dimensional (2.5D) face
images, applying the Scale-Invariant Feature Transform
(SIFT) to a new automatic landmarking method.
Automatic landmarking has a number of advantages over manual
landmarking: it is more accurate and less time-consuming,
especially if the dataset is large. We developed an
interactive Graphical User Interface (GUI) tool to ease the
visualization of the extracted face features, which are scale-
and transformation-invariant. The threshold values are then
analyzed and generalized to best detect and extract important
keypoints and/or regions of facial features. The results of the
automatically extracted keypoint features are shown in this paper.
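The detection stage that SIFT builds on finds keypoints as extrema of a Difference-of-Gaussians (DoG) scale space. The following is a minimal, simplified sketch of that stage only (not the paper's full pipeline, which adds orientation assignment and descriptors), assuming NumPy and SciPy; the `sigmas` and `threshold` values are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.2), threshold=0.03):
    """Detect blob-like keypoints as local extrema of a Difference-of-
    Gaussians (DoG) scale space -- the detection stage SIFT builds on."""
    image = np.asarray(image, dtype=float)
    blurred = [gaussian_filter(image, s) for s in sigmas]
    # DoG layers: adjacent scales subtracted, stacked into (scales-1, H, W)
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    # keypoints are 3x3x3 scale-space extrema with sufficient contrast
    is_max = maximum_filter(dogs, size=3) == dogs
    is_min = minimum_filter(dogs, size=3) == dogs
    strong = np.abs(dogs) > threshold
    scale_idx, rows, cols = np.nonzero((is_max | is_min) & strong)
    return list(zip(rows, cols, scale_idx))
```

On a real 2.5D range image the contrast threshold would need tuning, which is exactly the threshold analysis the abstract describes.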
Automatic landmark annotation and dense correspondence registration for 3D human facial images
Dense surface registration of three-dimensional (3D) human facial images
holds great potential for studies of human trait diversity, disease genetics,
and forensics. Non-rigid registration is particularly useful for establishing
dense anatomical correspondences between faces. Here we describe a novel
non-rigid registration method for fully automatic 3D facial image mapping. This
method comprises two steps: first, seventeen facial landmarks are automatically
annotated, mainly via PCA-based feature recognition following 3D-to-2D data
transformation. Second, an efficient thin-plate spline (TPS) protocol is used
to establish the dense anatomical correspondence between facial images, under
the guidance of the predefined landmarks. We demonstrate that this method is
robust and highly accurate, even for different ethnicities. The average face is
calculated for individuals of Han Chinese and Uyghur origins. Fully
automatic and computationally efficient, this method enables high-throughput
analysis of human facial feature variation.
Comment: 33 pages, 6 figures, 1 table
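The thin-plate spline (TPS) step can be sketched in a few lines of NumPy. This is a generic 2D TPS fit under the standard r² log r radial kernel, not the authors' specific protocol, and the landmark arrays are hypothetical:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping landmark array src (n, 2)
    onto dst (n, 2); returns a function warping arbitrary (m, 2) points."""
    src = np.asarray(src, dtype=float)
    n = len(src)

    def kernel(pts):
        d = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.nan_to_num(d * d * np.log(d))   # U(r) = r^2 log r

    P = np.hstack([np.ones((n, 1)), src])             # affine part [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = kernel(src)
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.vstack([np.asarray(dst, dtype=float), np.zeros((3, 2))])
    params = np.linalg.solve(L, Y)                    # bending + affine weights
    w, a = params[:n], params[n:]

    def warp(pts):
        pts = np.asarray(pts, dtype=float)
        return kernel(pts) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

    return warp
```

With seventeen annotated landmarks as control points, the returned `warp` interpolates them exactly and deforms all remaining surface points smoothly, which is what makes TPS suitable for guiding dense correspondence.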
Fast head profile estimation using curvature, derivatives and deep learning methods
Fast estimation of head profile and posture has applications across many disciplines; for example, it can be used in sleep apnoea screening and orthodontic examination, or could support a suitable physiotherapy regime. Consequently, this thesis focuses on the investigation of methods to estimate head profile and posture efficiently and accurately, and results in the development and evaluation of datasets, features and deep learning models that can achieve this. Accordingly, this thesis initially investigated properties of contour curves that could act as effective features to train machine learning models. Features based on curvature and the first and second Gaussian derivatives were evaluated. These outperformed established features used in the literature to train a long short-term memory recurrent neural network and produced a significant speedup in execution time where pre-filtering of a sampled dataset was required. Following on from this, a new dataset of head profile contours was generated and annotated with anthropometric cranio-facial landmarks, and a novel method of automatically improving the accuracy of the landmark positions was developed using ideas based on the curvature of a plane curve. The features identified here were extracted from the new head profile contour dataset and used to train long short-term memory recurrent neural networks.
The best network, using Gaussian derivative features, achieved an accuracy of 91% and a macro F1 score of 91%, an improvement of 51% and 71% respectively when compared with the unprocessed contour feature. When using Gaussian derivative features, the network was able to regress landmarks accurately, with mean absolute errors ranging from 0 to 5.3 pixels and standard deviations ranging from 0 to 6.9 pixels. End-to-end machine learning approaches, where a deep neural network learns the best features to use from the raw input data, were also investigated. Such an approach, using a one-dimensional temporal convolutional network, was able to match previous classifiers in terms of accuracy and macro F1 score, and showed comparable regression abilities. However, this was at the expense of increased training times and increased inference times: this network was an order of magnitude slower when classifying and regressing contours.
3D Shape Descriptor-Based Facial Landmark Detection: A Machine Learning Approach
Facial landmark detection on 3D human faces has had numerous applications in the literature
such as establishing point-to-point correspondence between 3D face models which is itself a
key step for a wide range of applications like 3D face detection and authentication, matching,
reconstruction, and retrieval, to name a few.
Two groups of approaches, namely knowledge-driven and data-driven approaches, have been
employed for facial landmarking in the literature. Knowledge-driven techniques are the
traditional approaches that have been widely used to locate landmarks on human faces. In
these approaches, a user with sufficient knowledge and experience usually defines features to
be extracted as the landmarks. Data-driven techniques, on the other hand, take advantage
of machine learning algorithms to detect prominent features on 3D face models. Besides
the key advantages, each category of these techniques has limitations that prevent it from
generating the most reliable results.
In this work we propose to combine the strengths of the two approaches to detect facial
landmarks in a more efficient and precise way. The suggested approach consists of two phases.
First, some salient features of the faces are extracted using expert systems. Afterwards,
these points are used as the initial control points in the well-known Thin Plate Spline (TPS)
technique to deform the input face towards a reference face model. Second, by exploring and
utilizing multiple machine learning algorithms, another group of landmarks is extracted.
The data-driven landmark detection step is performed in a supervised manner providing an
information-rich set of training data in which a set of local descriptors are computed and used
to train the algorithm. We then use the detected landmarks to establish point-to-point
correspondence between the 3D human faces, mainly using an improved version of the Iterative
Closest Point (ICP) algorithm. Furthermore, we propose to use the detected landmarks for
3D face matching applications.
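Point-to-point ICP alternates nearest-neighbour matching with a closed-form rigid update (Kabsch/Procrustes via SVD). A minimal generic sketch, not the thesis's improved variant, assuming NumPy and point sets stored as (n, 3) arrays:

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t with R @ A_i + t ~ B_i
    for paired point sets (Kabsch / Procrustes via SVD)."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate brute-force nearest-neighbour
    matching against dst with the closed-form rigid update above."""
    cur = src.copy()
    for _ in range(iters):
        nn = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(cur, dst[nn])
        cur = cur @ R.T + t
    R, t = best_rigid(src, cur)       # composed overall transform
    return R, t, cur
```

In practice the detected landmarks provide the coarse initial alignment, after which an ICP refinement like this converges reliably; production variants replace the O(n²) matching with a k-d tree.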
Morphology of the Face as a Postmortem Personal Identifier
The human face carries some of the most individualizing features suitable for personal identification. Facial morphology is used for face matching of the living. Extensive research has been conducted to develop matching algorithms that mimic the human ability to recognize and match faces. That ability, however, is not errorless, and this serves as the main argument precluding visual facial matching from use as an identification tool.
The human face keeps its individuality after death. Compared to the faces of the living, the faces of the deceased are rarely used or researched for face matching. Different factors influence the appearance of the face of the deceased compared to that of the living, namely the early postmortem changes and the decomposition process. On the other hand, the literature review showed the use of visual recognition in multiple cases of identity assessment after natural disasters.
The presented dissertation thesis is composed of several projects focused on the possibility of personal identification of decedents based solely on the morphology of their face. The dissertation explains the need for such identification and explores the error rates of visual recognition of the deceased, the progress of facial changes during early decomposition, and the possibility of utilizing soft biometric traits, specifically facial moles.
Lastly, the dissertation presents the use of the shape index (s) as a quality indicator for three different 3D scanners, aimed at identifying the most suitable method for obtaining postmortem facial 3D images.
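The shape index collapses the two principal curvatures of a surface point into a single value in [−1, 1] describing local shape. One common convention is s = (2/π)·arctan((κ₁ + κ₂)/(κ₁ − κ₂)) with κ₁ ≥ κ₂, so that +1 is a cap, 0 a symmetric saddle, and −1 a cup; sign and orientation conventions vary between papers, so this is a sketch of one choice, not necessarily the thesis's:

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures, using the convention
    s = (2/pi) * arctan((k1 + k2) / (k1 - k2)) with k1 >= k2:
    s = +1 is a cap, 0 a symmetric saddle, -1 a cup."""
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    hi, lo = np.maximum(k1, k2), np.minimum(k1, k2)
    den = hi - lo
    with np.errstate(divide="ignore", invalid="ignore"):
        s = (2.0 / np.pi) * np.arctan((hi + lo) / den)
    # umbilic points (k1 == k2): pure cap (+1), cup (-1), or plane (0)
    return np.where(den == 0, np.sign(hi), s)
```

Because the index depends only on curvature ratios, it is insensitive to scale, which is what makes it usable as a scanner-quality indicator across devices.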
Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models
Currently there is no complete face recognition system that is invariant to all facial expressions.
Although humans find it easy to identify and recognise faces regardless of changes in illumination,
pose and expression, producing a computer system with a similar capability has proved to
be particularly difficult. Three-dimensional face models are geometric in nature and therefore
have the advantage of being invariant to head pose and lighting. However, they are still susceptible
to facial expressions. This can be seen in the decrease in the recognition results using
principal component analysis when expressions are added to a data set.
In order to achieve expression-invariant face recognition systems, we have employed a tensor
algebra framework to represent 3D face data with facial expressions in a parsimonious
space. Face variation factors are organised in particular subject and facial expression modes.
We manipulate this using single value decomposition on sub-tensors representing one variation
mode. This framework possesses the ability to deal with the shortcomings of PCA in less constrained
environments and still preserves the integrity of the 3D data. The results show improved
recognition rates for faces and facial expressions, even recognising high intensity expressions
that are not in the training datasets.
We have determined, experimentally, a set of anatomical landmarks that describe facial
expressions most effectively. We found that the best placement of landmarks to distinguish different
facial expressions is in areas around the prominent features, such as the cheeks and eyebrows.
Recognition results using landmark-based face recognition could be improved with better placement.
We looked into the possibility of achieving expression-invariant face recognition by reconstructing
and manipulating realistic facial expressions. We proposed a tensor-based statistical
discriminant analysis method to reconstruct facial expressions and in particular to neutralise
facial expressions. The results of the synthesised facial expressions are visually more realistic
than facial expressions generated using conventional active shape modelling (ASM). We
then used reconstructed neutral faces in the sub-tensor framework for recognition purposes.
The recognition results showed slight improvement. Besides biometric recognition, this novel
tensor-based synthesis approach could be used in computer games and real-time animation
applications.
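The per-mode singular value decompositions underlying such multilinear tensor models can be sketched as a higher-order SVD (HOSVD): unfold the data tensor along each mode (e.g. subject, expression, vertex), take the left singular vectors of each unfolding, and contract them against the tensor to obtain a core. A generic NumPy sketch, not the thesis's exact framework:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: one orthonormal factor matrix per mode (left
    singular vectors of each unfolding) plus the core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(T.ndim)]
    core = T
    for m, U in enumerate(factors):   # contract U^T along each mode
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core back by every factor to recover the tensor."""
    out = core
    for m, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, m, 0), axes=1), 0, m)
    return out
```

Truncating the factor matrices of chosen modes yields the parsimonious subject/expression subspaces the abstract refers to; fixing one mode's coefficients while varying another is what enables expression neutralisation and synthesis.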
Facial Texture Super-Resolution by Fitting 3D Face Models
This book proposes to solve the low-resolution (LR) facial analysis problem with 3D face super-resolution (FSR). A complete processing chain is presented towards effective 3D FSR in the real world. To deal with the extreme challenges of incorporating 3D modeling under the ill-posed LR condition, a novel workflow coupling automatic localization of 2D facial feature points and 3D shape reconstruction is developed, leading to a robust pipeline for pose-invariant hallucination of the 3D facial texture.