Dense 3D Face Correspondence
We present an algorithm that automatically establishes dense correspondences
between a large number of 3D faces. Starting from automatically detected sparse
correspondences on the outer boundary of 3D faces, the algorithm triangulates
existing correspondences and expands them iteratively by matching points of
distinctive surface curvature along the triangle edges. After exhausting
keypoint matches, further correspondences are established by generating evenly
distributed points within triangles by evolving level set geodesic curves from
the centroids of large triangles. A deformable model (K3DM) is constructed from
the dense corresponded faces and an algorithm is proposed for morphing the K3DM
to fit unseen faces. This algorithm iterates between rigid alignment of an
unseen face followed by regularized morphing of the deformable model. We have
extensively evaluated the proposed algorithms on synthetic data and real 3D
faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using
quantitative and qualitative benchmarks. Our algorithm achieved dense
correspondences with a mean localisation error of 1.28mm on synthetic faces and
detected anthropometric landmarks on unseen real faces from the FRGCv2
database with 3mm precision. Furthermore, our deformable model fitting
algorithm achieved 98.5% face recognition accuracy on the FRGCv2 and 98.6% on
Bosphorus database. Our dense model is also able to generalize to unseen
datasets.
Comment: 24 Pages, 12 Figures, 6 Tables and 3 Algorithms
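The edge-wise keypoint matching that drives the iterative expansion can be sketched as follows. This is a minimal illustration, not the paper's actual criterion: the saliency measure (absolute deviation from the edge's mean curvature) and both thresholds are assumptions.

```python
import numpy as np

def most_distinctive(curvatures, min_saliency=0.5):
    """Index of the most distinctive curvature sample along an edge,
    or None if nothing is salient enough. Saliency here is absolute
    deviation from the edge's mean curvature (an assumed stand-in)."""
    c = np.asarray(curvatures, dtype=float)
    dev = np.abs(c - c.mean())
    i = int(np.argmax(dev))
    return i if dev[i] >= min_saliency else None

def match_edge(curv_a, curv_b, tol=2):
    """Pair the most distinctive samples on corresponding edges of two
    faces, accepting the match only if their positions along the edge
    agree within `tol` samples (tolerance is an assumption)."""
    ia, ib = most_distinctive(curv_a), most_distinctive(curv_b)
    if ia is None or ib is None or abs(ia - ib) > tol:
        return None
    return (ia, ib)

# Two corresponding edges with a curvature ridge near sample 4
edge_a = [0.1, 0.2, 0.1, 0.3, 2.1, 0.2, 0.1]
edge_b = [0.2, 0.1, 0.2, 0.2, 1.9, 0.3, 0.1]
print(match_edge(edge_a, edge_b))  # (4, 4)
```

Accepted pairs would become new correspondences, refining the triangulation for the next iteration.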
Fully Automatic Expression-Invariant Face Correspondence
We consider the problem of computing accurate point-to-point correspondences
among a set of human face scans with varying expressions. Our fully automatic
approach does not require any manually placed markers on the scan. Instead, the
approach learns the locations of a set of landmarks present in a database and
uses this knowledge to automatically predict the locations of these landmarks
on a newly available scan. The predicted landmarks are then used to compute
point-to-point correspondences between a template model and the newly available
scan. To accurately fit the expression of the template to the expression of the
scan, we use as template a blendshape model. Our algorithm was tested on a
database of human faces of different ethnic groups with strongly varying
expressions. Experimental results show that the obtained point-to-point
correspondence is both highly accurate and consistent for most of the tested 3D
face models.
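The landmark-driven rigid alignment that precedes the blendshape fitting is classically done with a least-squares Kabsch/Procrustes fit. A minimal sketch, assuming the landmark correspondences between template and scan are already predicted:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping landmark set `src`
    onto `dst` (Kabsch algorithm). Both are (N, 3) arrays of
    corresponding points, e.g. landmarks on template and scan."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation about the z-axis plus translation
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
pts = np.random.default_rng(0).normal(size=(5, 3))
R, t = rigid_align(pts, pts @ Rz.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R, Rz), np.allclose(t, [1, 2, 3]))  # True True
```

The non-rigid expression fit with the blendshape template would then start from this pose.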
Robust signatures for 3D face registration and recognition
Biometric authentication through face recognition has been an active area of
research for the last few decades, motivated by its application-driven demand. The popularity
of face recognition, compared to other biometric methods, is largely due to its
minimum requirement of subject co-operation, relative ease of data capture and similarity
to the natural way humans distinguish each other.
3D face recognition has recently received particular interest since three-dimensional
face scans eliminate or reduce important limitations of 2D face images, such as illumination
changes and pose variations. In fact, three-dimensional face scans are usually captured
by scanners through the use of a constant structured-light source, making them invariant
to environmental changes in illumination. Moreover, a single 3D scan also captures the
entire face structure and allows for accurate pose normalisation.
However, one of the biggest challenges that still remain in three-dimensional face
scans is the sensitivity to large local deformations due to, for example, facial expressions.
Due to the nature of the data, deformations bring about large changes in the 3D geometry
of the scan. In addition to this, 3D scans are also characterised by noise and artefacts such
as spikes and holes, which are uncommon in 2D images and require a pre-processing
stage that is specific to the scanner used to capture the data.
The aim of this thesis is to devise a face signature that is compact in size and
overcomes the above mentioned limitations. We investigate the use of facial regions and
landmarks towards a robust and compact face signature, and we study, implement and
validate a region-based and a landmark-based face signature. Combinations of regions and
landmarks are evaluated for their robustness to pose and expressions, while the matching
scheme is evaluated for its robustness to noise and data artefacts.
Effective 3D Geometric Matching for Data Restoration and Its Forensic Application
3D geometric matching is the technique of detecting similar patterns among multiple objects. It is an important and fundamental problem that facilitates many tasks in computer graphics and vision, including shape comparison and retrieval, data fusion, scene understanding, object recognition, and data restoration. For example, 3D scans of an object from different angles are matched and stitched together to form the complete geometry. In medical image analysis, the motion of deforming organs is modeled and predicted by matching a series of CT images. The problem is challenging and remains unsolved, especially when the similar patterns are 1) small and lack geometric saliency, or 2) incomplete due to occlusion during scanning and damage to the data. We study reliable matching algorithms that can tackle these difficulties, together with their application to data restoration. Data restoration is the problem of restoring a fragmented or damaged model to its original complete state; it is a new area with direct applications in scientific fields such as Forensics and Archeology. In this dissertation, we study novel, effective geometric matching algorithms, including curve matching, surface matching, pairwise matching, multi-piece matching and template matching. We demonstrate their application in an integrated digital pipeline of skull reassembly, skull completion, and facial reconstruction, developed to support the state-of-the-art forensic skull/facial reconstruction pipeline used in law enforcement
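Pairwise rigid matching of this kind is usually built on the classic ICP baseline rather than anything specific to this dissertation; a minimal point-to-point ICP sketch with brute-force nearest neighbours:

```python
import numpy as np

def best_rigid(src, dst):
    """Kabsch: least-squares rotation/translation from src to dst
    (corresponding (N, 3) point sets)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate nearest-neighbour
    assignment (brute force, O(N*M)) with a rigid update."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]       # closest dst point for each src
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
dst = rng.normal(size=(40, 3))
src = dst + 0.01                          # small known offset
aligned = icp(src, dst)
print(np.abs(aligned - dst).max() < 1e-6)  # True
```

Plain ICP fails exactly in the hard cases the dissertation targets (low saliency, partial overlap), which is what motivates the more robust curve, multi-piece and template matching above.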
Facial scan change detection
We present a method for quantifying and localising changes in two facial scans of the same person taken at two different time instants. The method is based on rigid registration and semantic feature extraction, followed by discrepancy computation. The proposed method combines the Landmark Transform (LT) method, which is applied on semantic feature points, and the Iterative Closest Point (ICP) algorithm, which is performed on semantic regions. Finally, the discrepancy between the two scans is computed using the Symmetric Hausdorff distance. Experimental results with both synthetic and real data show the effectiveness of the proposed method which has also been validated by an experienced clinical scientist. Moreover, the method is being used as support in clinical studies on a 3D object database with more than 1000 facial scans.
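The final discrepancy step can be sketched directly. This assumes the two scans are already rigidly registered (as the LT and ICP stages would ensure) and computes the symmetric Hausdorff distance between the two point sets:

```python
import numpy as np

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 3) and
    b (M, 3): the larger of the two directed Hausdorff distances,
    each the largest nearest-neighbour distance in one direction."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),   # directed a -> b
               d.min(axis=0).max())   # directed b -> a

# Toy scans: b has one point 3 units away from anything in a
a = np.array([[0., 0., 0.], [1., 0., 0.]])
b = np.array([[0., 0., 0.], [1., 0., 0.], [1., 3., 0.]])
print(symmetric_hausdorff(a, b))  # 3.0
```

A per-point version of the same nearest-neighbour distances would localise the change, not just quantify it.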
Shape classification: towards a mathematical description of the face
Recent advances in biostereometric techniques have led to the quick and easy
acquisition of 3D data for facial and other biological surfaces. This has led facial
surgeons to express dissatisfaction with landmark-based methods for analysing the
shape of the face which use only a small part of the data available, and to seek a method
for analysing the face which maximizes the use of this extensive data set. Scientists
working in the field of computer vision have developed a variety of methods for the
analysis and description of 2D and 3D shape. These methods are reviewed and an
approach, based on differential geometry, is selected for the description of facial shape.
For each data point, the Gaussian and mean curvatures of the surface are calculated.
The performance of three algorithms for computing these curvatures is evaluated for
mathematically generated standard 3D objects and for 3D data obtained from an optical
surface scanner. Using the signs of these curvatures, the face is classified into eight
'fundamental surface types' - each of which has an intuitive perceptual meaning. The
robustness of the resulting surface type description to errors in the data is determined
together with its repeatability.
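The sign-based classification into eight fundamental surface types follows the familiar HK scheme; a minimal sketch, in which the zero thresholds are assumptions that would in practice be tuned to the scanner's noise:

```python
import numpy as np

# HK classification into eight fundamental surface types from the signs
# of Gaussian (K) and mean (H) curvature.
TYPES = {
    (+1, -1): "peak",          (+1, +1): "pit",
    ( 0, -1): "ridge",         ( 0, +1): "valley",
    ( 0,  0): "flat",          (-1, -1): "saddle ridge",
    (-1, +1): "saddle valley", (-1,  0): "minimal surface",
}

def surface_type(K, H, eps_k=1e-3, eps_h=1e-3):
    """Classify one surface point by the signs of K and H; eps_k and
    eps_h are assumed noise thresholds for treating a value as zero."""
    sk = 0 if abs(K) < eps_k else (1 if K > 0 else -1)
    sh = 0 if abs(H) < eps_h else (1 if H > 0 else -1)
    if sk > 0 and sh == 0:   # K > 0 forces H != 0; treat H ~ 0 as noise
        sh = 1
    return TYPES[(sk, sh)]

print(surface_type(K=0.02, H=-0.1))  # peak (e.g. the nose tip)
print(surface_type(K=-0.02, H=0.1))  # saddle valley
```

Applying this to every data point yields the surface-type map that the comparison methods below operate on.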
Three methods for comparing two surface type descriptions are presented and illustrated
for average male and average female faces. Thus a quantitative description of facial
change, or differences between individuals' faces, is achieved. The possible application
of artificial intelligence techniques to automate this comparison is discussed. The
sensitivity of the description to global and local changes to the data, made by
mathematical functions, is investigated.
Examples are given of the application of this method for describing facial changes
made by facial reconstructive surgery and implications for defining a basis for facial
aesthetics using shape are discussed. It is also applied to investigate the role played by
the shape of the surface in facial recognition.
Evaluation System for Craniosynostosis Surgeries with Computer Simulation and Statistical Modelling
Craniosynostosis is a pathology in infants in which one or more cranial sutures close prematurely, leading to an abnormal skull shape. It is classified according to the specific suture that has closed, each type having a characteristic skull shape. Surgery is the common treatment to correct the deformed skull shape and to reduce excessive intracranial pressure. Since every case is unique, craniofacial teams have difficulty selecting an optimal solution for a specific patient from multiple options. In addition, no appropriate quantitative measure currently exists to help craniofacial teams evaluate their surgeries.
We aimed to develop a head model of a craniosynostosis patient that allows neurosurgeons to perform any potential surgery on it and simulate the postoperative head development. Neurosurgeons could thus foresee the surgical results and select the optimal option. In this thesis, we have developed a normal head model and built mathematical models for its possible dynamic behaviours. We also modified this model by closing one or two sutures to simulate common types of craniosynostosis. The abnormal simulation results showed a qualitative match with real cases, while the normal simulation indicated a higher growth rate of the cranial index than clinical data. We believe this discrepancy is caused by the rigidity of our skull plates, which will be replaced with deformable objects in future work.
To help neurosurgeons better evaluate a surgery, we aim to develop an algorithm that quantifies the level of deformity of a skull. We designed a workflow in which curvature plays the key role. A training data set was carefully selected to search for an optimal system for characterizing different shapes, and a test data set was used to validate the algorithm and assess that system's performance. With a stable evaluation system, we can evaluate a surgery by comparing the patient's preoperative and postoperative skulls: a surgery can be considered effective if the postoperative skull has shifted from the preoperative shape toward the normal shape.
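One standard quantitative skull-shape measure touched on above is the cranial index (head width over head length, times 100). A minimal sketch of computing it from a skull point cloud; the axis convention (x = left-right width, y = front-back length, cloud already aligned) is an assumption, not something specified in the thesis:

```python
import numpy as np

def cranial_index(points):
    """Cranial index = 100 * (max head width / max head length) from an
    aligned skull point cloud (N, 3); x is assumed left-right, y
    front-back."""
    pts = np.asarray(points, float)
    width = pts[:, 0].max() - pts[:, 0].min()
    length = pts[:, 1].max() - pts[:, 1].min()
    return 100.0 * width / length

# Ellipse-like toy skull outline: 150 mm wide, 190 mm long
u = np.linspace(0, 2 * np.pi, 200)
skull = np.stack([75 * np.cos(u), 95 * np.sin(u), np.zeros_like(u)], axis=1)
print(round(cranial_index(skull), 1))  # close to 150/190*100, i.e. ~78.9
```

Comparing such an index (or a richer curvature-based score, as in the workflow above) before and after surgery gives the kind of quantitative evaluation the thesis calls for.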
- …