
    Visual ageing of human faces in three dimensions using morphable models and projection to latent structures

    We present an approach to synthesising the effects of ageing on human face images using three-dimensional modelling. We extract a set of three-dimensional face models from a set of two-dimensional face images by fitting a Morphable Model. We propose a method for ageing these face models that uses Partial Least Squares to extract from the dataset those factors most related to ageing. These ageing-related factors are used to train an individually weighted linear model. We show that this is an effective means of producing an aged face image and compare this method with two other linear methods for ageing face models. This is demonstrated both quantitatively and through perceptual evaluation using human raters.
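    The following is a small illustrative sketch, not the authors' code: it shows how Partial Least Squares (here scikit-learn's PLSRegression, with entirely synthetic stand-ins for the Morphable Model coefficients and ages) can extract a parameter direction correlated with age and use it to shift a face model. The step size, data and helper name are invented for the example.

        # Hypothetical sketch (not the paper's method): extract ageing-related
        # factors from face-model parameters with PLS, then move a face along
        # the dominant ageing direction.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_faces, n_params = 200, 60            # synthetic stand-in for Morphable Model coefficients
        X = rng.normal(size=(n_faces, n_params))
        age = 20 + 50 * rng.random(n_faces)    # ages in years
        X[:, 0] += 0.05 * age                  # plant an ageing-correlated factor

        pls = PLSRegression(n_components=3)
        pls.fit(X, age)

        # Columns of x_weights_ span the parameter directions most related to age.
        ageing_direction = pls.x_weights_[:, 0]
        ageing_direction /= np.linalg.norm(ageing_direction)

        def age_face(params, years, step=0.05):
            """Shift model parameters along the ageing direction (step is a guess)."""
            return params + step * years * ageing_direction

        aged = age_face(X[0], years=20)
        print(pls.predict(aged.reshape(1, -1)))   # predicted age should increase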

    Three-dimensional morphanalysis of the face.

    The aim of the work reported in this thesis was to determine the extent to which orthogonal two-dimensional morphanalytic (universally relatable) craniofacial imaging methods can be extended into the realm of computer-based three-dimensional imaging. New methods are presented for capturing universally relatable laser-video surface data, for inter-relating facial surface scans and for constructing probabilistic facial averages. Universally relatable surface scans are captured using the fixed relations principle combined with a new laser-video scanner calibration method. Inter-subject comparison of facial surface scans is achieved using interactive feature labelling and warping methods. These methods have been extended to groups of subjects to allow the construction of three-dimensional probabilistic facial averages. The potential of universally relatable facial surface data for applications such as growth studies and patient assessment is demonstrated. In addition, new methods for scattered data interpolation and for controlling overlap in image warping, and a fast, high-resolution method for simulating craniofacial surgery, are described. The results demonstrate that it is not only possible to extend universally relatable imaging into three dimensions, but that the extension also enhances the established methods, providing a wide range of new applications.
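    As a generic illustration of the scattered-data-interpolation step mentioned above (not the thesis's own method), the sketch below resamples scattered surface samples onto a regular grid with SciPy's RBFInterpolator; the sample points and the "surface" are synthetic.

        # Illustrative only: interpolate a scattered height field, of the kind
        # produced by laser-video scanning, onto a regular grid so that scans
        # can be compared point-for-point.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)
        pts = rng.uniform(-1, 1, size=(500, 2))           # scattered (x, y) sample positions
        z = np.exp(-(pts[:, 0] ** 2 + pts[:, 1] ** 2))    # synthetic surface heights

        interp = RBFInterpolator(pts, z, kernel="thin_plate_spline", smoothing=1e-3)

        gx, gy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        surface = interp(grid).reshape(64, 64)
        print(surface.shape)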

    Lexical Borrowing in the Middle English period: A multi-domain analysis of semantic outcomes

    The Middle English period is well known as one of widespread lexical borrowing from French and Latin, and scholarly accounts traditionally assume that this influx of loanwords caused many native terms to shift in sense or to drop out of use entirely. This study analyses an extensive dataset, tracking patterns in lexical retention, replacement and semantic change, and comparing long-term outcomes for both native and non-native words. Our results challenge the conventional view of competition between existing terms and foreign incomers. They show that there were far fewer instances of relexification, and far more of synonymy, during the Middle English period than might have been expected. When retention rates for words first attested between 1100 and 1500 are compared, it is loanwords, not native terms, that are more likely to become obsolete at any point up to the nineteenth century. Furthermore, proportions of outcomes involving narrowing and broadening (often considered common outcomes following the arrival of a co-hyponym in a semantic space) were low in the Middle English period, regardless of language of origin.
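    Purely as an illustration of the kind of retention-rate comparison described above, the toy sketch below groups words by language of origin and compares obsolescence rates; every lemma, column name and value is invented and not drawn from the study's dataset.

        # Toy sketch of the comparison, with invented data.
        import pandas as pd

        words = pd.DataFrame({
            "lemma":            ["holt", "forest", "kingdom", "regne", "stead", "place"],
            "origin":           ["native", "loan", "native", "loan", "native", "loan"],
            "first_attest":     [1100, 1250, 1100, 1300, 1150, 1200],
            "obsolete_by_1900": [True, False, False, True, True, False],
        })

        # Restrict to words first attested 1100-1500, then compare obsolescence
        # rates by language of origin.
        me_words = words[(words["first_attest"] >= 1100) & (words["first_attest"] <= 1500)]
        print(me_words.groupby("origin")["obsolete_by_1900"].mean())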

    Enhancing active vision system categorization capability through uniform local binary patterns

    Previous research on Neuro-Evolution-controlled Active Vision Systems has shown their potential to solve various shape categorization and discrimination problems. However, little investigation has been carried out into using this kind of evolved system to solve more complex vision problems. This is partly due to variability in lighting conditions, reflection, shadowing and so on, which may be inherent to these kinds of problems. It may also be because building an evolved system for such problems is too computationally expensive. We present an Active Vision System, controlled by a neural network trained with a Genetic Algorithm, that can autonomously scan an image pre-processed with Uniform Local Binary Patterns [8]. We demonstrate the ability of this system to categorize more complex images taken from the camera of a humanoid (iCub) robot. Preliminary results show that the proposed Uniform Local Binary Pattern [8] method performs better than the grayscale-averaging method of [1] on the categorization tasks. This approach provides a framework for further research into using this kind of system on more complex image problems.
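    A minimal sketch of the pre-processing step only, assuming scikit-image's local_binary_pattern with the "uniform" mapping; the frame is random data standing in for an iCub camera image, and nothing here reproduces the authors' evolved controller.

        # Uniform LBP pre-processing of a grayscale frame before it is scanned
        # by a learned controller.
        import numpy as np
        from skimage.feature import local_binary_pattern

        P, R = 8, 1                                  # 8 neighbours on a radius-1 circle
        frame = np.random.rand(120, 160)             # stand-in for a grayscale camera frame
        lbp = local_binary_pattern(frame, P, R, method="uniform")

        # With method="uniform" each pixel is mapped to one of P + 2 pattern
        # labels, giving a compact, lighting-robust representation.
        print(lbp.min(), lbp.max())                  # labels lie in [0, P + 1]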

    FFD: Fast Feature Detector

    Scale-invariance, good localization and robustness to noise and distortions are the main properties that a local feature detector should possess. Most existing local feature detectors find excessive numbers of unstable feature points, which increases the number of keypoints to be matched and the computational time of the matching step. In this paper, we show that robust and accurate keypoints exist in a specific scale-space domain. To this end, we first formulate the superimposition problem as a mathematical model and then derive a closed-form solution for multiscale analysis. The model is formulated via difference-of-Gaussian (DoG) kernels in the continuous scale-space domain, and it is proved that setting the scale-space pyramid's blurring ratio and smoothness to 2 and 0.627, respectively, facilitates the detection of reliable keypoints. To make the model applicable to discrete images, we discretize it using the undecimated wavelet transform and the cubic spline function. Theoretically, the complexity of our method is less than 5% of that of the popular baseline Scale Invariant Feature Transform (SIFT). Extensive experimental results show the superiority of the proposed feature detector over existing representative hand-crafted and learning-based techniques in accuracy and computational time. The code and supplementary materials can be found at https://github.com/mogvision/FFD.
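    For orientation only, the sketch below detects keypoints as local extrema of a single difference-of-Gaussian response using the blurring ratio 2 and smoothness 0.627 quoted above; it is a simplification, not the FFD detector itself, which uses an undecimated wavelet and cubic-spline discretisation (see the linked repository).

        # Simplified DoG keypoint detection for illustration.
        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        def dog_keypoints(img, sigma=0.627, ratio=2.0, thresh=0.02):
            """Return local maxima of one difference-of-Gaussian response.

            sigma and ratio follow the values quoted in the abstract; the
            threshold and single-scale setup are arbitrary simplifications.
            """
            g1 = gaussian_filter(img, sigma)
            g2 = gaussian_filter(img, sigma * ratio)
            dog = g1 - g2
            peaks = (dog == maximum_filter(dog, size=3)) & (np.abs(dog) > thresh)
            return np.argwhere(peaks)

        img = np.random.rand(128, 128).astype(np.float32)
        print(len(dog_keypoints(img)))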

    A Wide Area Multiview Static Crowd Estimation System Using UAV and 3D Training Simulator

    Crowd size estimation is a challenging problem, especially when the crowd is spread over a significant geographical area. It has applications in monitoring rallies and demonstrations and in calculating assistance requirements in humanitarian disasters, so building a crowd surveillance system that scales to large crowds is an important goal. UAV-based techniques are an appealing choice for crowd estimation over a large region, but they present a variety of interesting challenges, such as integrating per-frame estimates across a video without counting individuals twice. Large quantities of annotated training data are required to design, train and test such a system. In this paper, we first review several crowd estimation techniques, existing crowd simulators and datasets available for crowd analysis. We then describe a simulation system that provides such data, avoiding the need for tedious and error-prone manual annotation, and evaluate synthetic video from the simulator using various existing single-frame crowd estimation techniques. Our findings show that the simulated data can be used to train and test crowd estimation methods, providing a suitable platform for developing such techniques. We also propose an automated UAV-based 3D crowd estimation system that can be used for approximately static or slow-moving crowds, such as public events, political rallies, and natural or man-made disasters. We evaluate the results by applying our new framework to a variety of scenarios with varying crowd sizes. The proposed system gives promising results under widely accepted metrics, including MAE, RMSE, Precision, Recall, and F1 score.
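    As a reminder of how the reported metrics are computed, the sketch below evaluates MAE and RMSE on per-frame counts and Precision/Recall/F1 on matched detections; all numbers are placeholders, not results from the paper.

        # Placeholder evaluation of crowd-estimation metrics.
        import numpy as np

        true_counts = np.array([120, 340, 95, 410])   # hypothetical ground-truth counts
        pred_counts = np.array([110, 360, 100, 395])  # hypothetical predicted counts

        mae = np.mean(np.abs(pred_counts - true_counts))
        rmse = np.sqrt(np.mean((pred_counts - true_counts) ** 2))

        # Detection-style metrics from matched/unmatched head detections.
        tp, fp, fn = 930, 70, 65                      # hypothetical match counts
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)

        print(f"MAE={mae:.1f} RMSE={rmse:.1f} P={precision:.3f} R={recall:.3f} F1={f1:.3f}")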