The low-level guidance of an experimental autonomous vehicle
This thesis describes the data processing and the control that constitute a method of guidance for an autonomous guided vehicle (AGV) operating in a predefined and structured environment such as a warehouse or factory. A simple battery-driven vehicle has been constructed which houses an MC68000-based microcomputer and a number of electronic interface cards. In order to provide a user interface, and to integrate the various aspects of the proposed guidance method, a modular software package has been developed. This, along with the research vehicle, has been used to support an experimental approach to the research. The vehicle's guidance method requires a series of concatenated curved and straight imaginary lines to be passed to the vehicle as a representation of a planned path within its environment. Global position specifications for each line and the associated AGV direction and demand speed for each line constitute commands which are queued and executed in sequence. In order to execute commands, the AGV is equipped with low-level sensors (ultrasonic transducers and optical shaft encoders) which allow it to estimate and correct its global position continually. In addition to a queue of commands, the AGV also has pre-programmed knowledge of the positions of a number of correction boards within its environment. These are simply wooden boards approximately 25cm high and between 2 and 5 metres long, with small protrusions ("notches") 4cm deep and 10cm long at regular (1m) intervals along their length. When the AGV passes such a correction board, it can measure its perpendicular distance and orientation relative to that board using two sets of its ultrasonic sensors, one set at the rear of the vehicle near the drive wheels and one set at the front. Data collected as the vehicle moves parallel to a correction board are digitally filtered, and a least squares line fitting procedure is then applied. As well as improving the reliability and accuracy of orientation and distance measurements relative to the board, this provides the basis for an algorithm to detect and measure the position of the protrusions on the correction board. Since measurements can be made in three planar, local coordinates (x, the distance travelled parallel to a correction board; y, the perpendicular distance relative to the board; and θ, the clockwise planar orientation relative to the board), the global position estimate can be corrected. When position corrections are made, they appear as step disturbances to the control system. The control system has been designed to allow the vehicle to move back onto its imaginary line after a position correction in a critically damped fashion and, in the steady state, to track both linear and curved command segments with minimum error.
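As a rough illustration of the line-fitting step, the sketch below (Python/NumPy, with hypothetical function and variable names) fits a straight line to filtered ultrasonic range samples and reads off the vehicle's orientation and perpendicular distance relative to a correction board; the thesis's actual filtering, notch-detection and control details are not reproduced here.

```python
import numpy as np

def fit_board_line(x, y):
    """Least-squares fit of a line y = a*x + b to filtered ultrasonic range
    samples, where x is the distance travelled parallel to the board (from
    the shaft encoders) and y is the measured perpendicular range to it.

    Returns an orientation estimate (radians, relative to the board, sign
    convention illustrative), the perpendicular distance at the latest
    sample, and the residuals (large residuals can flag board "notches").
    Hypothetical helper, not the thesis's exact procedure.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    theta = np.arctan(a)               # orientation error w.r.t. the board
    distance = a * x[-1] + b           # perpendicular distance at current x
    residuals = y - (a * x + b)
    return theta, distance, residuals

# Synthetic example: vehicle 0.50 m from the board, misaligned by 2 degrees.
xs = np.linspace(0.0, 2.0, 40)
ys = 0.5 + np.tan(np.radians(2.0)) * xs + np.random.normal(0, 0.002, xs.size)
theta, dist, _ = fit_board_line(xs, ys)
print(np.degrees(theta), dist)
```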
Modelling of Orthogonal Craniofacial Profiles
We present a fully-automatic image processing pipeline to build a set of 2D morphable models of three craniofacial profiles from orthogonal viewpoints (side, front and top), using a set of 3D head surface images. Subjects in this dataset wear a close-fitting latex cap to reveal the overall skull shape. Texture-based 3D pose normalization and facial landmarking are applied to extract the profiles from the raw 3D scans. Fully-automatic profile annotation, subdivision and registration methods are used to establish dense correspondence among sagittal profiles. The collection of sagittal profiles in dense correspondence is scaled and aligned using Generalised Procrustes Analysis (GPA), before applying principal component analysis to generate a morphable model. Additionally, we propose a new alternative alignment called the Ellipse Centre Nasion (ECN) method. Our model is used in a case study of craniosynostosis intervention outcome evaluation, and the evaluation reveals that the proposed model achieves state-of-the-art results. We make publicly available both the morphable models and the profile dataset used to construct them.
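The GPA-plus-PCA modelling step can be illustrated with a minimal sketch. The code below (Python/NumPy, hypothetical function names, simplified alignment without the ECN variant) assumes the profiles are already in dense correspondence, aligns them with a basic Generalised Procrustes loop, and builds a linear morphable model by PCA.

```python
import numpy as np

def gpa(profiles, iters=10):
    """Simplified Generalised Procrustes Analysis on 2D profiles of shape
    (N, K, 2): N subjects, K corresponding points each. Removes translation,
    scale and rotation. A sketch, not the paper's exact pipeline."""
    X = np.asarray(profiles, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)                   # centre each profile
    X = X / np.linalg.norm(X, axis=(1, 2), keepdims=True)   # unit scale
    mean = X[0]
    for _ in range(iters):
        for i in range(len(X)):
            U, _, Vt = np.linalg.svd(X[i].T @ mean)          # optimal rotation
            X[i] = X[i] @ (U @ Vt)
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return X

def build_morphable_model(aligned, var_kept=0.98):
    """PCA on the aligned, flattened profiles -> (mu, components, stddevs)."""
    D = aligned.reshape(len(aligned), -1)
    mu = D.mean(axis=0)
    _, S, Vt = np.linalg.svd(D - mu, full_matrices=False)
    var = S**2 / (len(D) - 1)
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mu, Vt[:k], np.sqrt(var[:k])

# A new profile instance is mu + components.T @ coeffs, reshaped back to (K, 2).
```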
A Data-augmented 3D Morphable Model of the Ear
Morphable models are useful shape priors for biometric recognition tasks. Here we present an iterative process of refinement for a 3D Morphable Model (3DMM) of the human ear that employs data augmentation. The process employs the following stages: 1) landmark-based 3DMM fitting; 2) 3D template deformation to overcome noisy over-fitting; 3) 3D mesh editing to improve the fit to manual 2D landmarks. These stages are wrapped in an iterative procedure that is able to bootstrap a weak, approximate model into a significantly better one. Evaluations using several performance metrics verify the improvement of our model using the proposed algorithm. We use this new 3DMM model-bootstrapping algorithm to generate a refined 3D morphable model of the human ear, and we make this new model and our augmented training dataset public.
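Stage 1, landmark-based 3DMM fitting, can be sketched as a regularised linear solve for the shape coefficients. The snippet below is illustrative only: the model structures (`mean`, `components`, `stddevs`) and the simple Tikhonov prior are assumptions, not the paper's exact fitting objective.

```python
import numpy as np

def fit_3dmm_to_landmarks(mean, components, stddevs, lm_idx, lm_targets, reg=1.0):
    """Solve for shape coefficients c so that the model's landmark vertices
    match the target 3D landmarks, with a Tikhonov prior on c.
    `mean` is (3V,), `components` is (M, 3V), `stddevs` is (M,),
    `lm_idx` are landmark vertex indices, `lm_targets` is (L, 3).
    Hypothetical sketch; rigid alignment of targets to the model frame is
    assumed to have been done already."""
    # Rows of the linear system: 3 coordinates per landmark vertex.
    rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in lm_idx])
    A = (components[:, rows] * stddevs[:, None]).T          # (3L, M)
    b = lm_targets.reshape(-1) - mean[rows]                  # (3L,)
    # Regularised normal equations: (A^T A + reg * I) c = A^T b
    M = components.shape[0]
    c = np.linalg.solve(A.T @ A + reg * np.eye(M), A.T @ b)
    shape = mean + (c * stddevs) @ components                # full reconstructed mesh
    return c, shape
```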
Symmetric Shape Morphing for 3D Face and Head Modelling
We propose a shape template morphing approach suitable for any class of shapes that exhibits approximate reflective symmetry over some plane; the human face and full head are examples. A shape morphing algorithm that constrains all morphs to be symmetric is a form of deformation regularization. This mitigates undesirable effects seen in standard morphing algorithms that are not symmetry-aware, such as tangential sliding. Our method builds on the Coherent Point Drift (CPD) algorithm and is called Symmetry-aware CPD (SA-CPD). Global symmetric deformations are obtained by removing the asymmetric shear from CPD's global affine transformations. Symmetrised local deformations are then used to improve the symmetric template fit. These symmetric deformations are followed by a Laplace-Beltrami regularized projection, which allows the shape template to fit any asymmetries in the raw shape data. The pipeline facilitates the construction of statistical models that are readily factored into symmetrical and asymmetrical components. Evaluations demonstrate that SA-CPD mitigates the tangential sliding problem in CPD and outperforms other competing shape morphing methods, in some cases substantially. 3D morphable models are constructed from over 1200 full head scans, and we evaluate the constructed models in terms of age and gender classification. The best performance, in the context of SVM classification, is achieved using the proposed SA-CPD deformation algorithm.
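The idea of discarding the asymmetric part of a global affine transform can be illustrated as follows. The sketch below symmetrises an affine map x -> Bx + t with respect to a chosen reflection plane by averaging it with its reflected conjugate, which forces the map to commute with the reflection; this conveys the intent of the shear-removal step but is not SA-CPD's exact decomposition.

```python
import numpy as np

def symmetrise_affine(B, t, normal=np.array([1.0, 0.0, 0.0])):
    """Make a global affine transform x -> B x + t symmetric with respect to
    reflection in the plane through the origin with the given unit normal
    (default: the sagittal plane x = 0). The transform commutes with the
    reflection R iff B = R B R and R t = t; averaging with the reflected
    conjugate enforces exactly this. A sketch of the idea only."""
    n = normal / np.linalg.norm(normal)
    R = np.eye(3) - 2.0 * np.outer(n, n)        # Householder reflection matrix
    B_sym = 0.5 * (B + R @ B @ R)               # keep only the symmetric part
    t_sym = 0.5 * (t + R @ t)                   # keep only the in-plane translation
    return B_sym, t_sym
```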
Statistical Modeling of Craniofacial Shape and Texture
We present a fully-automatic statistical 3D shape modeling approach and apply it to a large dataset of 3D images, the Headspace dataset, thus generating the first public shape-and-texture 3D Morphable Model (3DMM) of the full human head. Our approach is the first to employ a template that adapts to the dataset subject before dense morphing. This is fully automatic and is achieved using 2D facial landmarking, projection to 3D shape, and mesh editing. In dense template morphing, we improve on the well-known Coherent Point Drift algorithm by incorporating iterative data-sampling and alignment. Our evaluations demonstrate that our method has better correspondence accuracy and modeling ability than competing algorithms. We propose a texture map refinement scheme to build high-quality texture maps and a texture model. We present several applications, including the first clinical use of craniofacial 3DMMs in the assessment of different types of surgical intervention applied to a craniosynostosis patient group.
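One possible reading of the iterative data-sampling and alignment step is sketched below: each iteration subsamples the raw scan and rigidly re-aligns it to the template via closest points (an ICP-style step) before the non-rigid CPD morph would be run. The function names and the specific procedure are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_and_align(scan, template, n_samples=5000, iters=5, seed=None):
    """Iteratively subsample the raw scan (N, 3) and rigidly align it to the
    template (V, 3) using closest-point correspondences and a Procrustes /
    Kabsch solve. The aligned scan would then be passed to non-rigid CPD.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(template)
    scan = np.asarray(scan, dtype=float).copy()
    for _ in range(iters):
        idx = rng.choice(len(scan), size=min(n_samples, len(scan)), replace=False)
        sample = scan[idx]
        _, nn = tree.query(sample)                  # closest template points
        target = template[nn]
        # Rigid fit: rotation R and translation t minimising ||R s + t - target||
        mu_s, mu_t = sample.mean(0), target.mean(0)
        U, _, Vt = np.linalg.svd((sample - mu_s).T @ (target - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        scan = scan @ R.T + t                       # apply to the full scan
    return scan
```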
A 3D Morphable Model of Craniofacial Shape and Texture Variation
We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping. We propose a dense correspondence system, combining a hierarchical parts-based template morphing framework in the shape channel and a refining optical flow in the texture channel. The texture map is generated using raw texture images from five views. We employ a pixel-embedding method to maintain the texture map at the same high resolution as the raw texture images, rather than using per-vertex color maps. The high quality texture map is then used for statistical texture modelling. The Headspace dataset used for training includes demographic information about each subject, allowing for the construction of both global 3DMMs and models tailored for specific gender and age groups. We build both global craniofacial 3DMMs and demographic sub-population 3DMMs from more than 1200 distinct identities. To our knowledge, we present the first public 3DMM of the full human head in both shape and texture: the Liverpool-York Head Model. Furthermore, we analyse the 3DMMs in terms of a range of performance metrics. Our evaluations reveal that the training pipeline constructs state-of-the-art models.
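The pixel-embedding idea, keeping the texture map at raw-image resolution rather than storing per-vertex colours, can be sketched per triangle: each texel inside a triangle's UV footprint is mapped through barycentric coordinates to a position in a raw camera image and sampled there. The helpers below are hypothetical and use nearest-neighbour sampling for brevity; they are not the paper's exact scheme.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point(s) p w.r.t. triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.stack([1.0 - v - w, v, w], axis=-1)

def fill_triangle_texels(texture, tri_uv_px, tri_img_px, raw_image):
    """Fill the texels of `texture` covered by one triangle, given its corners
    in texture pixel coordinates `tri_uv_px` (3, 2) and in raw-image pixel
    coordinates `tri_img_px` (3, 2), by mapping each covered texel through its
    barycentric coordinates and sampling the raw image at full resolution."""
    tri_uv_px = np.asarray(tri_uv_px, dtype=float)
    tri_img_px = np.asarray(tri_img_px, dtype=float)
    a, b, c = tri_uv_px
    lo = np.floor(tri_uv_px.min(axis=0)).astype(int)
    hi = np.ceil(tri_uv_px.max(axis=0)).astype(int)
    ys, xs = np.mgrid[lo[1]:hi[1] + 1, lo[0]:hi[0] + 1]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(float)
    bary = barycentric(pts, a, b, c)
    inside = np.all(bary >= -1e-6, axis=-1)          # texels inside the triangle
    img_pts = bary[inside] @ tri_img_px              # positions in the raw image
    samples = raw_image[img_pts[:, 1].astype(int), img_pts[:, 0].astype(int)]
    texture[pts[inside, 1].astype(int), pts[inside, 0].astype(int)] = samples
    return texture
```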
Towards a complete 3D morphable model of the human head
Three-dimensional Morphable Models (3DMMs) are powerful statistical tools for representing the 3D shapes and textures of an object class. Here we present the most complete 3DMM of the human head to date, including face, cranium, ears, eyes, teeth and tongue. To achieve this, we propose two methods for combining existing 3DMMs of different overlapping head parts: (i) using a regressor to complete missing parts of one model from the other; (ii) using the Gaussian Process framework to blend covariance matrices from multiple models. Thus we build a new combined face-and-head shape model that blends the variability and facial detail of an existing face model (the LSFM) with the full head modelling capability of an existing head model (the LYHM). We then construct and fuse a highly-detailed ear model to extend the variation of the ear shape. Eye and eye region models are incorporated into the head model, along with basic models of the teeth, tongue and inner mouth cavity. The new model achieves state-of-the-art performance. We use our model to reconstruct full head representations from single, unconstrained images, allowing us to parameterize craniofacial shape and texture, along with ear shape, eye gaze and eye color.
Comment: 18 pages, 18 figures; submitted to Transactions on Pattern Analysis and Machine Intelligence (TPAMI) on the 9th of October as an extension of the original oral CVPR paper: arXiv:1903.0378
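Method (i), completing one model's missing parts from the other with a regressor, can be sketched as a simple ridge regression from face-model coefficients to head-model coefficients, trained on subjects fitted by both models. The snippet below is illustrative; the paper's actual regressor and the Gaussian Process covariance blending are not reproduced.

```python
import numpy as np

def fit_part_regressor(face_coeffs, head_coeffs, reg=1e-3):
    """Learn a ridge regressor W mapping a face-model coefficient vector
    (plus bias) to a head-model coefficient vector, using N subjects for
    whom both fits are available: face_coeffs is (N, F), head_coeffs (N, H).
    Hypothetical sketch of the regressor-based completion idea."""
    X = np.hstack([face_coeffs, np.ones((len(face_coeffs), 1))])   # (N, F+1)
    Y = head_coeffs                                                # (N, H)
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return W                                                       # (F+1, H)

def complete_head(face_coeff, W):
    """Predict head-model coefficients (and hence the missing cranial region)
    from a single fitted face-model coefficient vector."""
    return np.append(face_coeff, 1.0) @ W
```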