Improving Face Recognition Performance Using a Hierarchical Bayesian Model

Abstract

Over the past two decades, face recognition research has moved to the forefront, driven by growing demand in security and commercial applications. Many facial feature extraction techniques have been developed for recognition, some of which have also been successfully deployed. Principal Component Analysis (PCA), popularly known as Eigenfaces, has been used successfully and is a de facto standard. Linear generative models such as PCA and Independent Component Analysis (ICA) find a set of basis images and represent faces as linear combinations of these basis images. These models make certain assumptions about the data that limit the type of structure they can capture. This thesis builds on the hierarchical Bayesian model developed by Yan Karklin of Carnegie Mellon University. His research focused on natural signals such as natural images and speech, for which he showed that latent variables exhibit residual dependencies and non-stationary statistics. He built his model atop ICA, and this hierarchical model could capture more abstract and invariant properties of the data. We apply the same hierarchical model to facial images to extract features that yield improved recognition performance over existing baseline approaches. We use Kernelized Fisher Discriminant Analysis (KFLD) as our baseline, as it is superior to PCA in that it produces well-separated classes even under variations in facial expression and lighting. We conducted extensive experiments on the GreyFERET database and tested performance on test sets with varying facial expressions. The results demonstrate the expected improvement in performance.
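As a rough illustration of the linear-generative view described above, the following is a minimal eigenfaces (PCA) sketch on synthetic data. The array sizes, the number of components `k`, and the random "faces" are illustrative assumptions, not details from the thesis or the GreyFERET experiments:

```python
import numpy as np

# Hypothetical toy data standing in for flattened grayscale face images
# (the thesis uses GreyFERET); each row is one 8x8 "image" of 64 pixels.
rng = np.random.default_rng(0)
faces = rng.normal(size=(40, 64))

# Eigenfaces/PCA: center the data, then take the top principal
# directions as basis images.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD of the centered data; rows of Vt are the eigenfaces (basis images).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10  # illustrative choice of subspace dimension
eigenfaces = Vt[:k]                        # (k, 64) basis images

# Each face is represented as a linear combination of the basis images:
coeffs = centered @ eigenfaces.T           # (40, k) projection coefficients
reconstruction = mean_face + coeffs @ eigenfaces
```

Keeping only the top `k` coefficients gives the low-dimensional feature vector that recognition methods such as Eigenfaces operate on; the reconstruction shows how the face is re-expressed from the basis images.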
