Statistical/Geometric Techniques for Object Representation and Recognition
Object modeling and recognition are key areas of research in computer vision and graphics, with a wide range of applications. Though research in these areas is not new, it has traditionally focused on analyzing problems under controlled environments. The challenges posed by real-life applications demand more general and robust solutions. The wide variety of objects with large intra-class variability makes the task very challenging. The difficulty of modeling and matching objects also varies with the input modality. In addition, the easy availability of sensors and storage has resulted in a tremendous increase in the amount of data to be processed, which requires efficient algorithms suitable for large databases. In this dissertation, we address some of the challenges involved in modeling and matching objects in realistic scenarios.
Object matching in images requires accounting for large variability in appearance due to changes in illumination and viewpoint. Any real-world object is characterized by its underlying shape and albedo, which, unlike the image intensity, are insensitive to changes in illumination conditions. We propose a stochastic filtering framework for estimating object albedo from a single intensity image by formulating albedo estimation as an image estimation problem. We also show how this albedo estimate can be used for illumination-insensitive object matching and for more accurate shape recovery from a single image using the standard shape-from-shading formulation. We start with the simpler problem in which the pose of the object is known and only the illumination varies. We then extend the proposed approach to handle unknown pose in addition to illumination variations. We also use the estimated albedo maps for another important application: recognizing faces across age progression.
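The role albedo plays here can be seen in the simplest setting: under a Lambertian model with known surface normals and light direction, per-pixel albedo factors out of the intensity directly. The sketch below illustrates only this idealized case; the dissertation's stochastic filtering framework estimates albedo without assuming the normals or illumination are known.

```python
import numpy as np

def lambertian_albedo(intensity, normals, light, eps=1e-6):
    """Recover per-pixel albedo rho from I = rho * max(n . s, 0),
    assuming known unit surface normals and light direction.
    This is a simplified illustration, not the paper's estimator."""
    shading = np.clip(normals @ light, eps, None)  # n . s per pixel
    return intensity / shading

# Toy example: pixels on a flat patch facing the light.
normals = np.tile([0.0, 0.0, 1.0], (4, 1))  # all normals point at +z
light = np.array([0.0, 0.0, 1.0])           # frontal illumination
intensity = np.array([0.2, 0.5, 0.5, 0.9])
albedo = lambertian_albedo(intensity, normals, light)
```

With frontal lighting and frontal normals, shading is 1 everywhere, so the recovered albedo equals the observed intensity; varying the light direction changes the intensity but leaves the albedo estimate unchanged, which is what makes it useful for illumination-insensitive matching.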
Many approaches to modeling and recognizing objects from images assume that the underlying objects have diffuse texture. But most real-world objects exhibit a combination of diffuse and specular reflectance. We propose an approach for separating the diffuse and specular reflectance components of a given color image, so that algorithms designed for objects with diffuse texture become applicable to a much wider range of real-world objects.
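As a concrete illustration of diffuse/specular separation under the dichromatic reflection model, one common heuristic assumes a white illuminant, so the specular component contributes equally to all three channels and subtracting the per-pixel minimum channel yields a "specular-free" image. This is a well-known simplification for intuition only, not necessarily the dissertation's method.

```python
import numpy as np

def remove_specular_min_channel(rgb):
    """Crude diffuse/specular split for a white illuminant: the
    specular term is assumed achromatic (equal in R, G, B), so the
    per-pixel minimum channel bounds it. A heuristic sketch only."""
    spec = rgb.min(axis=-1, keepdims=True)  # achromatic (specular) part
    diffuse = rgb - spec                    # remaining chromatic part
    return diffuse, spec

pixel = np.array([[[0.8, 0.5, 0.3]]])      # a single RGB pixel
diffuse, spec = remove_specular_min_channel(pixel)
```

For the example pixel, the estimated specular part is 0.3 and the diffuse residual is (0.5, 0.2, 0.0); real separation methods must additionally handle colored illuminants and saturated highlights.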
Representing and matching the 2D and 3D geometry of objects is also an integral part of object matching, with applications in gesture recognition, activity classification, trademark and logo recognition, etc. The challenge in matching 2D/3D shapes lies in accounting for rigid and non-rigid deformations, large intra-class variability, noise, and outliers. In addition, since shapes are usually represented as collections of landmark points, a shape matching algorithm also has to deal with missing or unknown correspondences across these data points. We propose an efficient shape indexing approach in which the feature vectors representing a shape are mapped into a hash table. For a query shape, we show how similar shapes in the database can be retrieved efficiently without establishing correspondences, making the algorithm extremely fast and scalable. We also propose an approach for matching and registration of 3D point cloud data across unknown or missing correspondences using an implicit surface representation. Finally, we discuss possible future directions of this research.
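The hash-table indexing idea can be sketched in a few lines: quantize each shape's feature vectors into cells, index shapes by the cells they occupy, and answer a query by voting, with no point-to-point correspondence ever computed. The features and cell size below are illustrative placeholders; the dissertation's actual shape features differ.

```python
import numpy as np
from collections import defaultdict

def build_index(shapes, cell=0.25):
    """Hash each shape's feature vectors into quantized cells.
    'shapes' maps shape_id -> (n_features, d) array of feature
    vectors (hypothetical features, for illustration)."""
    table = defaultdict(set)
    for sid, feats in shapes.items():
        for f in feats:
            key = tuple(np.floor(f / cell).astype(int))
            table[key].add(sid)
    return table

def query(table, feats, cell=0.25):
    """Vote for database shapes whose features fall in the same
    cells as the query's; no correspondences are established."""
    votes = defaultdict(int)
    for f in feats:
        key = tuple(np.floor(f / cell).astype(int))
        for sid in table[key]:
            votes[sid] += 1
    return max(votes, key=votes.get) if votes else None

db = {"square": np.array([[0.1, 0.1], [0.9, 0.9]]),
      "line":   np.array([[0.1, 0.1], [0.1, 0.9]])}
table = build_index(db)
best = query(table, np.array([[0.12, 0.11], [0.88, 0.92]]))  # -> "square"
```

Because lookup cost depends only on the query's feature count, not the database size, this style of indexing scales to large shape databases, which is the property the abstract emphasizes.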
Age Progression and Regression with Spatial Attention Modules
Age progression and regression refer to aesthetically rendering a given
face image to present the effects of face aging and rejuvenation, respectively.
Although numerous studies have been conducted on this topic, there are two
major problems: 1) multiple models are usually trained to simulate different
age mappings, and 2) the photo-realism of generated face images is heavily
influenced by the variation of training images in terms of pose, illumination,
and background. To address these issues, in this paper, we propose a framework
based on conditional Generative Adversarial Networks (cGANs) to achieve age
progression and regression simultaneously. Particularly, since face aging and
rejuvenation are largely different in terms of image translation patterns, we
model these two processes using two separate generators, each dedicated to one
age changing process. In addition, we exploit spatial attention mechanisms to
limit image modifications to regions closely related to age changes, so that
images with high visual fidelity can be synthesized for in-the-wild cases.
Experiments on multiple datasets demonstrate the ability of our model in
synthesizing lifelike face images at desired ages with personalized features
well preserved, while keeping age-irrelevant regions unchanged.
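The core of the spatial attention mechanism described above is a learned mask that blends the generator's output with the input, so only age-relevant regions are modified. The schematic below shows just this compositing step with NumPy arrays standing in for network tensors; it is a sketch of the mechanism, not the paper's network.

```python
import numpy as np

def attention_composite(input_img, generated, mask):
    """Blend generator output with the input via a spatial attention
    mask in [0, 1]: where the mask is high, pixels come from the
    generator (aged/rejuvenated); elsewhere the input is kept, so
    age-irrelevant regions pass through unchanged."""
    return mask * generated + (1.0 - mask) * input_img

inp = np.full((2, 2), 0.2)                  # toy 2x2 "input face"
gen = np.full((2, 2), 0.9)                  # toy generator output
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])               # only one pixel "ages"
out = attention_composite(inp, gen, mask)
```

In the toy example, only the pixel under the mask takes the generator's value, while the rest retains the input, which is exactly how the mask limits modifications and preserves background and identity cues.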
Automatic landmark annotation and dense correspondence registration for 3D human facial images
Dense surface registration of three-dimensional (3D) human facial images
holds great potential for studies of human trait diversity, disease genetics,
and forensics. Non-rigid registration is particularly useful for establishing
dense anatomical correspondences between faces. Here we describe a novel
non-rigid registration method for fully automatic 3D facial image mapping. This
method comprises two steps: first, seventeen facial landmarks are automatically
annotated, mainly via PCA-based feature recognition following 3D-to-2D data
transformation. Second, an efficient thin-plate spline (TPS) protocol is used
to establish the dense anatomical correspondence between facial images, under
the guidance of the predefined landmarks. We demonstrate that this method is
robust and highly accurate, even for different ethnicities. The average face is
calculated for individuals of Han Chinese and Uyghur origin. Fully
automatic and computationally efficient, this method enables high-throughput
analysis of human facial feature variation.
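The second step, landmark-guided thin-plate spline (TPS) warping, can be sketched with SciPy's radial basis function interpolator, which supports the thin-plate-spline kernel. The landmarks below are hypothetical 2D points for brevity; the paper registers 3D facial surfaces under seventeen automatically annotated landmarks.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding landmark pairs (source -> target).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 0.9], [1.1, 0.9]])

# A TPS interpolant fitted to the landmark pairs smoothly warps any
# other surface point, establishing dense correspondence under the
# guidance of the predefined landmarks.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
warped = tps(np.array([[0.5, 0.5]]))
```

Since the landmark pairs here happen to define an affine map (x scaled by 1.1, y by 0.9), the TPS reproduces it exactly and the midpoint maps to (0.55, 0.45); with non-affine landmark displacements the spline bends smoothly between them, which is what makes TPS suitable for non-rigid facial registration.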
Characterization and Classification of Faces across Age Progression
Facial aging, a dimension recently added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. How do humans perceive age? What constitutes an age-invariant signature for faces? How do we model facial growth across different ages? How do facial aging effects impact recognition performance? This thesis provides a thorough overview of the problem of facial aging and addresses the aforementioned questions.
We propose a craniofacial growth model that characterizes growth-related shape variations observed in human faces during the formative years (0-18 years). The craniofacial growth model draws inspiration from the 'revised' cardioidal strain transformation model proposed in psychophysics and, further, incorporates age-based anthropometric evidence collected on facial growth during the formative years. Identifying a set of fiducial features on faces, we characterize facial growth by means of growth parameters estimated at the fiducial features. We illustrate how the growth-related transformations observed on facial proportions can be studied by means of linear and non-linear equations in the facial growth parameters, which subsequently help in computing the growth parameters. The proposed growth model implicitly accounts for factors such as gender, ethnicity, and the individual's age group. Predicting one's appearance across ages and performing face verification across ages are some of the intended applications of the model.
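The cardioidal strain model referenced above has a simple closed form: in polar coordinates about the head center, a point at angle theta from the crown and radius r maps to r' = r(1 + k(1 - cos theta)), so regions lower on the face grow more than the cranium. The sketch below applies this basic transform to 2D points; the growth parameter k is illustrative, whereas the thesis estimates feature-specific parameters from anthropometric data.

```python
import numpy as np

def cardioidal_strain(points, k):
    """Apply the cardioidal strain growth transform to 2D points
    (x, y) given relative to the head centre, with theta = 0 at the
    crown: r' = r * (1 + k * (1 - cos(theta))). Lower facial regions
    (larger theta) therefore grow more, mimicking childhood
    craniofacial growth. k is an illustrative growth parameter."""
    theta = np.arctan2(points[:, 0], points[:, 1])  # angle from +y axis
    r = np.linalg.norm(points, axis=1)
    r_new = r * (1.0 + k * (1.0 - np.cos(theta)))
    return np.stack([r_new * np.sin(theta), r_new * np.cos(theta)], axis=1)

top = np.array([[0.0, 1.0]])    # crown: theta = 0, no growth
chin = np.array([[0.0, -1.0]])  # chin: theta = pi, maximal growth
grown_top = cardioidal_strain(top, k=0.1)
grown_chin = cardioidal_strain(chin, k=0.1)
```

With k = 0.1 the crown stays fixed while the chin radius stretches by a factor of 1.2, reproducing the characteristic downward elongation of the face during growth that the model is designed to capture.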
Next, we propose a two-fold approach to modeling facial aging in adults. First, we develop a shape transformation model, formulated as a physically based parametric muscle model, that captures the subtle deformations facial features undergo with age. The model implicitly accounts for the physical properties and geometric orientations of the individual facial muscles. Second, we develop an image-gradient-based texture transformation function that characterizes facial wrinkles and other skin artifacts often observed at different ages. Facial growth statistics (in terms of both shape and texture) play a crucial role in developing these transformation models. From a database comprising pairs of age-separated face images of many individuals, we extract age-based facial measurements across key fiducial features and, further, study textural variations across ages. We present experimental results that illustrate the applications of the proposed facial aging model in tasks such as face verification and facial appearance prediction across ages.
How sensitive are face verification systems to facial aging effects? How does age progression affect the similarity between a pair of face images of an individual? We develop a Bayesian age-difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. Further, we study the similarity of faces across age progression. Since age-separated face images invariably differ in illumination and pose, we propose pre-processing methods for minimizing such variations. We present experimental results using a database comprising pairs of face images retrieved from the passports of 465 individuals. For faces separated by as many as 9 years, the verification system attains an equal error rate of 8.5%.
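The equal error rate quoted above is the operating point where the false accept rate equals the false reject rate. A minimal way to locate it from genuine and impostor similarity scores is to scan candidate thresholds, as sketched below with made-up scores (the 8.5% figure comes from the passport-image experiments, not from this toy data).

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Scan thresholds over similarity scores (higher = more likely
    the same person) and return the point where the false accept
    rate (FAR) and false reject rate (FRR) are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostor pairs accepted
        frr = np.mean(genuine < t)     # genuine pairs rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

genuine = np.array([0.9, 0.8, 0.55, 0.6])   # same-person pair scores
impostor = np.array([0.4, 0.3, 0.7, 0.2])   # different-person scores
eer = equal_error_rate(genuine, impostor)   # 0.25 for this toy data
```

On the toy scores the two error rates cross at 25%; production evaluations interpolate the ROC curve rather than scanning raw scores, but the quantity being reported is the same.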