3-D Face Analysis and Identification Based on Statistical Shape Modelling
This paper presents an effective method of statistical shape representation for automatic face analysis and identification in 3-D. The method combines statistical shape modelling techniques with a non-rigid deformation matching scheme. The work makes three key contributions. The first is a new 3-D shape registration method using hierarchical landmark detection and a multilevel B-spline warping technique, which allows an accurate dense correspondence search for statistical model construction. The second is a shape representation approach based on Laplacian Eigenmaps, which provides a nonlinear submanifold that captures the underlying structure of the facial data. The third is a hybrid method for matching the statistical model to a test dataset, which controls the level of model deformation at different matching stages and so increases the chance of successful matching. The proposed method is tested on the public BU-3DFE database. Results indicate that it achieves very high verification rates in a series of tests, suggesting real-world practicality.
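The Laplacian Eigenmap representation referred to above can be sketched on a toy point set. The following is a minimal illustration of the embedding, assuming a symmetrised k-nearest-neighbour graph with binary weights; it is not the paper's construction:

```python
import numpy as np

def laplacian_eigenmap(X, n_neighbors=6, n_components=2):
    """Toy Laplacian-eigenmap embedding of a point set X (n x d).
    A minimal sketch of the nonlinear shape representation, not the
    authors' construction."""
    n = len(X)
    # Pairwise squared distances and a symmetrised k-NN adjacency
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:
            W[i, j] = W[j, i] = 1.0
    deg = W.sum(1)
    # Symmetrically normalised graph Laplacian
    Dm12 = np.diag(1.0 / np.sqrt(deg))
    L_sym = np.eye(n) - Dm12 @ W @ Dm12
    vals, vecs = np.linalg.eigh(L_sym)
    # Drop the trivial constant eigenvector, keep the next n_components
    return Dm12 @ vecs[:, 1:n_components + 1]

# Embed 3-D points sampled along a curve into 2-D coordinates
t = np.linspace(0, 3, 40)
X = np.column_stack([np.cos(t), np.sin(t), t])
Y = laplacian_eigenmap(X)
```

Each input point is mapped to the corresponding entries of the smallest non-trivial Laplacian eigenvectors, giving low-dimensional coordinates that respect the neighbourhood graph.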
Dense 3D Face Correspondence
We present an algorithm that automatically establishes dense correspondences
between a large number of 3D faces. Starting from automatically detected sparse
correspondences on the outer boundary of 3D faces, the algorithm triangulates
existing correspondences and expands them iteratively by matching points of
distinctive surface curvature along the triangle edges. After exhausting
keypoint matches, further correspondences are established by generating evenly
distributed points within triangles by evolving level set geodesic curves from
the centroids of large triangles. A deformable model (K3DM) is constructed from
the densely corresponded faces, and an algorithm is proposed for morphing the K3DM
to fit unseen faces. This algorithm alternates between rigid alignment of the
unseen face and regularized morphing of the deformable model. We have
extensively evaluated the proposed algorithms on synthetic data and real 3D
faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using
quantitative and qualitative benchmarks. Our algorithm achieved dense
correspondences with a mean localisation error of 1.28mm on synthetic faces and
detected anthropometric landmarks on unseen real faces from the FRGCv2
database with 3mm precision. Furthermore, our deformable model fitting
algorithm achieved 98.5% face recognition accuracy on the FRGCv2 and 98.6% on
the Bosphorus database. Our dense model is also able to generalize to unseen
datasets.

Comment: 24 pages, 12 figures, 6 tables and 3 algorithms.
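The model-fitting loop described above alternates rigid alignment with regularised morphing. The rigid step, once point correspondences are in hand, can be sketched with the standard Kabsch/Procrustes solution; this is a generic sketch, not the authors' K3DM implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst for
    corresponding 3-D points (Kabsch algorithm); a generic stand-in for
    the rigid-alignment step of the fitting loop."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation/translation from noiseless correspondences
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

In the full loop, the recovered transform would be applied to the unseen face before the next regularised-morphing step re-estimates the model coefficients.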
Algorithms to automatically quantify the geometric similarity of anatomical surfaces
We describe new approaches for distances between pairs of 2-dimensional
surfaces (embedded in 3-dimensional space) that use local structures and global
information contained in inter-structure geometric relationships. We present
algorithms to automatically determine these distances as well as geometric
correspondences. This is motivated by the aspiration of students of natural
science to understand the continuity of form that unites the diversity of life.
At present, scientists using physical traits to study evolutionary
relationships among living and extinct animals analyze data extracted from
carefully defined anatomical correspondence points (landmarks). Identifying and
recording these landmarks is time consuming and can be done accurately only by
trained morphologists. This renders these studies inaccessible to
non-morphologists, and causes phenomics to lag behind genomics in elucidating
evolutionary patterns. Unlike other algorithms presented for morphological
correspondences, our approach does not require any preliminary marking of
special features or landmarks by the user. It also differs from other seminal
work in computational geometry in that our algorithms are polynomial in nature
and thus faster, making pairwise comparisons feasible for significantly larger
numbers of digitized surfaces. We illustrate our approach using three datasets
representing teeth and different bones of primates and humans, and show that it
leads to highly accurate results.

Comment: Changes with respect to v1, v2: an erratum was added, correcting the
references for one of the three datasets. Note that the datasets and code for
this paper can be obtained from the Data Conservancy (see Download column).
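For contrast with the distances developed in this paper, a much cruder landmark-free baseline is the symmetric Chamfer dissimilarity between sampled surface points after normalising out position and scale. The sketch below is named and structured only for illustration:

```python
import numpy as np

def chamfer_dissimilarity(A, B):
    """Symmetric Chamfer distance between two point samples after
    normalising out position and scale. A deliberately crude,
    landmark-free baseline -- far simpler than the distances
    developed in the paper."""
    def normalise(X):
        X = np.asarray(X, float)
        X = X - X.mean(0)                           # remove position
        return X / np.sqrt((X ** 2).sum(1).mean())  # remove scale
    A, B = normalise(A), normalise(B)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    # Average nearest-neighbour distance in both directions
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Unlike the paper's approach, this baseline ignores intrinsic geometry and yields no correspondences; it merely shows what a correspondence-free dissimilarity looks like.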
Fully Automatic Expression-Invariant Face Correspondence
We consider the problem of computing accurate point-to-point correspondences
among a set of human face scans with varying expressions. Our fully automatic
approach does not require any manually placed markers on the scan. Instead, the
approach learns the locations of a set of landmarks present in a database and
uses this knowledge to automatically predict the locations of these landmarks
on a newly available scan. The predicted landmarks are then used to compute
point-to-point correspondences between a template model and the newly available
scan. To accurately fit the expression of the template to the expression of the
scan, we use as template a blendshape model. Our algorithm was tested on a
database of human faces of different ethnic groups with strongly varying
expressions. Experimental results show that the obtained point-to-point
correspondence is both highly accurate and consistent for most of the tested 3D
face models.
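Fitting the blendshape template's expression to a scan can be sketched, once dense correspondence is available, as a regularised least-squares problem over blendshape weights. This is a minimal illustration; the paper's solver and constraints may differ:

```python
import numpy as np

def fit_blendshape_weights(neutral, blendshapes, scan, lam=1e-6):
    """Ridge least-squares weights w so that
    neutral + sum_k w[k] * blendshapes[k] approximates the scan's
    vertices; a minimal sketch of expression fitting with a
    blendshape template."""
    B = np.stack([b.ravel() for b in blendshapes], axis=1)  # (3n, K)
    r = (scan - neutral).ravel()
    w = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ r)
    return w

# Recover known mixing weights on synthetic vertex data
rng = np.random.default_rng(0)
neutral = rng.normal(size=(60, 3))
shapes = [rng.normal(size=(60, 3)) for _ in range(2)]
scan = neutral + 0.7 * shapes[0] + 0.2 * shapes[1]
w = fit_blendshape_weights(neutral, shapes, scan)
```

The small ridge term keeps the solve stable when blendshape displacements are nearly collinear.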
Learning based automatic face annotation for arbitrary poses and expressions from frontal images only
Statistical approaches for building non-rigid deformable models, such as the active appearance model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning-based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images, which can then be used to build AAMs for fitting and tracking. The approach first learns the correspondences between landmarks in a frontal image and a set of training images containing faces in arbitrary poses. Using this learner, virtual images of unseen faces at any pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used to automatically annotate unseen images, including images of different facial expressions, at any pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases.
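In the simplest case, the landmark-correspondence learner could be a linear regressor from frontal landmark configurations to their locations at a target pose. The sketch below uses that simplification; the function names and the purely linear model are illustrative assumptions, not the paper's learner:

```python
import numpy as np

def learn_landmark_mapping(frontal_sets, posed_sets):
    """Least-squares linear map (with bias) from frontal landmark
    configurations to their locations at a target pose; a hypothetical
    stand-in for the correspondence learner."""
    X = np.stack([f.ravel() for f in frontal_sets])
    Y = np.stack([p.ravel() for p in posed_sets])
    Xh = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
    return W

def predict_landmarks(W, frontal):
    fh = np.append(frontal.ravel(), 1.0)
    return (fh @ W).reshape(-1, 2)

# Synthetic data: posed landmarks are a fixed linear function of the
# frontal ones, which the learner should recover
rng = np.random.default_rng(0)
n_lm = 5
M = rng.normal(size=(2 * n_lm, 2 * n_lm))
c = rng.normal(size=2 * n_lm)
frontal = [rng.normal(size=(n_lm, 2)) for _ in range(30)]
posed = [(f.ravel() @ M + c).reshape(n_lm, 2) for f in frontal]
W = learn_landmark_mapping(frontal, posed)
test_f = rng.normal(size=(n_lm, 2))
pred = predict_landmarks(W, test_f)
```

Predicted landmarks at the new pose would then drive texture warping from the frontal image to synthesise the virtual training views.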
Anatomical landmark based registration of contrast enhanced T1-weighted MR images
In many problems involving multiple image analysis, an image registration step is required. One such problem appears in brain tumor imaging, where baseline and follow-up image volumes from a tumor patient are often to be compared. The registration needed for change detection in brain tumor growth analysis is usually rigid or affine. Contrast enhanced T1-weighted MR images are widely used in clinical practice for monitoring brain tumors: in this modality, the contours of active tumor cells and the borders and margins of the whole tumor are visually enhanced. In this study, a new technique to register serial contrast enhanced T1-weighted MR images is presented. The proposed fully automated method is based on five anatomical landmarks: the eyeballs, the nose, the confluence of the sagittal sinus, and the apex of the superior sagittal sinus. After extraction of the anatomical landmarks from the fixed and moving volumes, an affine transformation is estimated by minimizing the sum of squared distances between the landmark coordinates. The result is refined with a surface registration based on head masks confined to the surface of the scalp, as well as to a plane constructed from three of the extracted features. The overall registration is not intensity based and depends only on these invariant structures. Validation studies were performed using both synthetically transformed MRI data and real MRI scans that included several markers over the head of the patient. In addition, comparison studies against manual landmarks marked by a radiologist, as well as against results obtained from a typical mutual-information-based method, were carried out to demonstrate the effectiveness of the proposed method.
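The affine estimation step described here, minimising the sum of squared distances between corresponding landmark coordinates, has a closed-form least-squares solution. A sketch, where `fit_affine` is a hypothetical helper name:

```python
import numpy as np

def fit_affine(fixed, moving):
    """Estimate a 3-D affine transform (A, t) mapping moving landmarks
    onto fixed landmarks by minimising the sum of squared distances;
    a sketch of the estimation step, not the paper's code."""
    fixed = np.asarray(fixed, float)
    moving = np.asarray(moving, float)
    # Homogeneous design matrix: each row is [x, y, z, 1]
    M = np.hstack([moving, np.ones((len(moving), 1))])
    # Solve M @ P ~= fixed in the least-squares sense; P is 4x3
    P, *_ = np.linalg.lstsq(M, fixed, rcond=None)
    A, t = P[:3].T, P[3]
    return A, t

# Sanity check: recover a known affine transform from five landmarks
rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))
A_true = np.array([[1.1, 0.1, 0.0],
                   [0.0, 0.9, 0.2],
                   [0.1, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
fixed = pts @ A_true.T + t_true
A, t = fit_affine(fixed, pts)
```

With five landmarks and twelve affine parameters the least-squares system is over-determined, so small localisation errors in individual landmarks are averaged out.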
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms are developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
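The EM segmentation described above extends the standard mixture-model EM with an explicit partial-volume correction. The baseline it builds on can be sketched for a 1-D, two-class intensity mixture; the correction itself is omitted here:

```python
import numpy as np

def em_two_class(x, n_iter=100):
    """Plain EM for a two-component 1-D Gaussian mixture of voxel
    intensities. The dissertation's method adds an explicit
    partial-volume correction, which this toy version omits."""
    mu = np.percentile(x, [25.0, 75.0])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: class responsibilities for every voxel
        p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return mu, var, pi

# Two well-separated intensity classes (grey/white stand-ins)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500),
                    rng.normal(5.0, 1.0, 500)])
mu, var, pi = em_two_class(x)
```

Voxels that mix both tissues sit between the two means, which is exactly where a plain mixture model mislabels them and why the explicit partial-volume handling matters.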
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning the cortical surface is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Peaks detection and alignment for mass spectrometry data
The goal of this paper is to review existing methods for protein mass spectrometry data analysis and to present a new methodology for automatic extraction of significant peaks (biomarkers). For the pre-processing step required for data from MALDI-TOF or SELDI-TOF spectra, we use a purely nonparametric approach that combines a stationary (translation-invariant) wavelet transform for noise removal with penalized spline quantile regression for baseline correction. We further present a multi-scale spectra alignment technique based on the identification of statistically significant peaks from a set of spectra. This method allows one to find common peaks in a set of spectra that can subsequently be mapped to individual proteins. These may serve as useful biomarkers in medical applications, or as individual features for further multidimensional statistical analysis. MALDI-TOF spectra obtained from serum samples are used throughout the paper to illustrate the methodology.
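After denoising and baseline correction, peak extraction reduces to finding significant local maxima in each spectrum. The deliberately naive picker below, thresholded local maxima with a minimum separation, stands in for the paper's statistical-significance test:

```python
import numpy as np

def pick_peaks(y, min_height, min_dist=3):
    """Naive local-maximum peak picker for a denoised,
    baseline-corrected spectrum; a toy stand-in for the
    significance-based detection in the paper."""
    peaks = []
    for i in range(1, len(y) - 1):
        if y[i] > min_height and y[i] >= y[i - 1] and y[i] > y[i + 1]:
            # Enforce a minimum index separation between reported peaks
            if not peaks or i - peaks[-1] >= min_dist:
                peaks.append(i)
    return peaks

# Synthetic spectrum with two Gaussian peaks at indices 20 and 60
idx = np.arange(100)
y = np.exp(-(idx - 20) ** 2 / 18.0) + 0.8 * np.exp(-(idx - 60) ** 2 / 18.0)
```

Aligning such peak lists across spectra, as the paper proposes, would then match nearby indices between samples before mapping common peaks to proteins.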