
    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, and (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, which is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
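
    The SSR shape histogram described above lends itself to a brief illustration: fit an implicit RBF to the point cloud, then record, for each of a set of concentric spheres around a candidate vertex, how much of the sphere falls inside the surface. The sketch below is a hedged approximation rather than the authors' implementation; the kernel choice, the normal-offset constraints, and the function names (fit_implicit_rbf, ssr_histogram) are assumptions.

        # Sketch: implicit RBF surface model + spherically-sampled descriptor.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def fit_implicit_rbf(points, normals, eps=1e-3):
            """Fit f(x) ~ 0 on the surface, +/-eps at points offset along the normals."""
            centres = np.vstack([points, points + eps * normals, points - eps * normals])
            values = np.hstack([np.zeros(len(points)),
                                np.full(len(points), eps),
                                np.full(len(points), -eps)])
            return RBFInterpolator(centres, values, kernel="thin_plate_spline")

        def ssr_histogram(f, vertex, radii, n_samples=256, seed=0):
            """For each radius, the fraction of sphere samples lying inside the surface (f < 0)."""
            rng = np.random.default_rng(seed)
            dirs = rng.normal(size=(n_samples, 3))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            return np.array([(f(vertex + r * dirs) < 0.0).mean() for r in radii])

    Because the convex nose tip gives a distinctive inside/outside profile across radii, such a histogram can help distinguish nose vertices from other surface points.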

    Rotationally invariant 3D shape contexts using asymmetry patterns

    This paper presents an approach to resolving the azimuth ambiguity of 3D Shape Contexts (3DSC) based on asymmetry patterns. We show that it is possible to provide rotational invariance to 3DSC at the expense of a marginal increase in computational load, outperforming previous algorithms dealing with the azimuth ambiguity. We build on a recently presented measure of approximate rotational symmetry in 2D, defined as the overlapping area between a shape and rotated versions of itself, to extract asymmetry patterns from a 3DSC in a variety of ways, depending on the spatial relationships that need to be highlighted or disabled. Thus, we define Asymmetry Patterns Shape Contexts (APSC) from a subset of the possible spatial relations present in the spherical grid of 3DSC; hence they can be thought of as a family of descriptors that depend on the subset selected. This provides great flexibility in deriving different descriptors. We show that choosing the appropriate spatial patterns can considerably reduce the errors obtained with 3DSC when targeting specific types of points.
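
    A sense of how rotated-copy comparisons remove the azimuth ambiguity can be given with a short sketch. Assuming the 3DSC is binned as an (azimuth, elevation, radius) array, comparing it against cyclic shifts of itself along the azimuth axis yields a vector that is unchanged when the unknown starting azimuth bin changes. This is only an illustrative stand-in for the APSC construction; the binning layout and the L1 comparison are assumptions.

        # Sketch: an azimuth-shift dissimilarity pattern for a 3D shape context.
        import numpy as np

        def azimuth_asymmetry_pattern(sc):
            """sc: histogram with shape (n_azimuth, n_elevation, n_radius).
            Returns one L1 dissimilarity per cyclic azimuth shift; the resulting
            vector is invariant to any cyclic relabelling of the azimuth bins."""
            n_az = sc.shape[0]
            return np.array([np.abs(sc - np.roll(sc, k, axis=0)).sum()
                             for k in range(1, n_az)])

    Because every term compares the histogram with a rotated copy of itself, no reference azimuth needs to be estimated, which is the essence of the asymmetry-pattern idea.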

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. The method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following a 3D-to-2D data transformation; second, an efficient thin-plate spline (TPS) protocol is used to establish dense anatomical correspondence between facial images, under the guidance of the predefined landmarks. We demonstrate that this method is robust and highly accurate, even for different ethnicities. The average face is calculated for individuals of Han Chinese and Uyghur origin. Being fully automatic and computationally efficient, the method enables high-throughput analysis of human facial feature variation.
    Comment: 33 pages, 6 figures, 1 table
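
    The second step, TPS-guided dense correspondence, can be illustrated in a few lines: warp every vertex of a reference face through the thin-plate spline that carries the reference landmarks onto the target landmarks. This is a hedged sketch using scipy's RBFInterpolator as a stand-in for the paper's TPS protocol; the function name tps_warp and the use of a reference mesh are assumptions.

        # Sketch: thin-plate-spline warp of a reference mesh guided by matched landmarks.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def tps_warp(reference_vertices, reference_landmarks, target_landmarks):
            """Map each reference vertex through the TPS that sends the 17 reference
            landmarks onto the corresponding target landmarks."""
            tps = RBFInterpolator(reference_landmarks, target_landmarks,
                                  kernel="thin_plate_spline")
            return tps(reference_vertices)

    If the same reference mesh is warped onto every subject in this way, homologous vertices end up in correspondence across faces, which is what makes operations such as computing an average face possible.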

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not done effectively by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed; this causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial-volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates, and the performance of the method is investigated in a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
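
    The EM tissue classification mentioned above can be sketched compactly: Gaussian intensity models per tissue class, weighted by probabilistic atlas priors, with alternating posterior and parameter updates. The dissertation's explicit correction for mislabelled partial-volume voxels is not reproduced here; the initialisation and all names below are illustrative assumptions, not the thesis implementation.

        # Sketch: atlas-weighted EM tissue classification for MR intensities.
        import numpy as np

        def em_segment(intensities, priors, n_iter=20):
            """intensities: (n_voxels,); priors: (n_voxels, n_classes) atlas probabilities."""
            n, k = priors.shape
            mu = np.linspace(intensities.min(), intensities.max(), k)   # crude initialisation
            var = np.full(k, intensities.var())
            for _ in range(n_iter):
                # E-step: class posteriors from Gaussian likelihoods times the atlas prior
                lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
                post = lik * priors
                post /= post.sum(axis=1, keepdims=True) + 1e-12
                # M-step: re-estimate per-class means and variances from the posteriors
                w = post.sum(axis=0) + 1e-12
                mu = (post * intensities[:, None]).sum(axis=0) / w
                var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w
            return post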

    Sectional Curvature in terms of the Cometric, with Applications to the Riemannian Manifolds of Landmarks

    This paper deals with the computation of sectional curvature for the manifolds of N landmarks (or feature points) in D dimensions, endowed with the Riemannian metric induced by the group action of diffeomorphisms. The inverse of the metric tensor for these manifolds (i.e. the cometric), when written in coordinates, is such that each of its elements depends on at most 2D of the ND coordinates. This makes the matrices of partial derivatives of the cometric very sparse, which suggests solving the highly non-trivial problem of developing a formula that expresses sectional curvature in terms of the cometric and its first and second partial derivatives (we call this Mario's formula). We apply this formula to the manifolds of landmarks; in particular, we fully explore the case of geodesics on which only two points have non-zero momenta and compute the sectional curvatures of 2-planes spanned by the tangents to such geodesics. The latter example gives insight into the geometry of the full manifolds of landmarks.
    Comment: 30 pages, revised version, typos corrected
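
    The sparsity property stated above reflects the block structure of the landmark cometric: for a scalar reproducing kernel K, the block coupling landmarks i and j is K(x_i, x_j) times the D x D identity, so each entry involves only the coordinates of those two landmarks. The snippet below illustrates this structure under an assumed Gaussian kernel; the kernel choice and its width are not taken from the paper.

        # Sketch: block-structured cometric for N landmarks in D dimensions.
        import numpy as np

        def cometric(q, sigma=1.0):
            """q: (N, D) landmark positions. Returns the (N*D, N*D) inverse metric."""
            sq_dists = ((q[:, None, :] - q[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
            K = np.exp(-sq_dists / (2.0 * sigma ** 2))                  # scalar kernel matrix
            return np.kron(K, np.eye(q.shape[1]))                       # K(x_i, x_j) * I_D blocks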

    Anatomical landmark based registration of contrast enhanced T1-weighted MR images

    In many problems involving multiple-image analysis, an image registration step is required. One such problem appears in brain tumor imaging, where baseline and follow-up image volumes from a tumor patient often need to be compared. The registration required for change detection in brain tumor growth analysis is usually rigid or affine. Contrast-enhanced T1-weighted MR images are widely used in clinical practice for monitoring brain tumors; in this modality, the contours of active tumor cells and the borders and margins of the whole tumor are visually enhanced. In this study, a new technique to register serial contrast-enhanced T1-weighted MR images is presented. The proposed fully automated method is based on five anatomical landmarks: the eyeballs, the nose, the confluence of the sagittal sinus, and the apex of the superior sagittal sinus. After extraction of the anatomical landmarks from the fixed and moving volumes, an affine transformation is estimated by minimizing the sum of squared distances between the landmark coordinates. The final result is refined with a surface registration based on head masks confined to the surface of the scalp, as well as to a plane constructed from three of the extracted features. The overall registration is not intensity-based and depends only on invariant structures. Validation studies were performed using both synthetically transformed MRI data and real MRI scans that included several markers placed on the patient's head. In addition, comparison studies against manual landmarks marked by a radiologist, as well as against the results obtained from a typical mutual-information-based method, were carried out to demonstrate the effectiveness of the proposed method.
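
    The landmark alignment step lends itself to a short illustration: with matched fixed and moving landmark coordinates, the affine transform minimizing the sum of squared distances is an ordinary least-squares problem. The sketch below is illustrative only; the surface-based refinement is omitted, and the function names fit_affine and apply_affine are assumptions.

        # Sketch: least-squares affine fit from matched 3D landmark pairs.
        import numpy as np

        def fit_affine(moving, fixed):
            """Solve fixed ~= moving @ A.T + t in the least-squares sense.
            moving, fixed: (n_landmarks, 3) arrays of corresponding points."""
            M = np.hstack([moving, np.ones((len(moving), 1))])   # homogeneous coordinates
            X, *_ = np.linalg.lstsq(M, fixed, rcond=None)        # (4, 3) solution
            return X[:3].T, X[3]                                 # A (3x3), t (3,)

        def apply_affine(points, A, t):
            return points @ A.T + t

    Five landmark pairs give fifteen scalar equations for the twelve affine parameters, so the system is already overdetermined before the surface-based refinement is applied.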