
    Generic multimodal biometric fusion

    Biometric systems utilize physiological or behavioral traits to automatically identify individuals. A unimodal biometric system relies on a single source of biometric information and suffers from a variety of problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks and unacceptable error rates. A multimodal biometric system utilizes multiple sources of biometric information and can overcome some of the limitations of unimodal systems. Biometric information can be combined at four different levels: (i) raw-data level; (ii) feature level; (iii) match-score level; and (iv) decision level. Match-score and decision fusion have received significant attention because the information they operate on is conveniently represented, whereas raw-data fusion is extremely challenging due to the large diversity of representations. Feature-level fusion provides a good trade-off between fusion complexity and the loss of information caused by subsequent processing. This work presents generic feature-information fusion techniques for most of the commonly used feature representation schemes. A novel concept of Local Distance Kernels is introduced to transform the available information into an arbitrary common distance space where it can easily be fused. A new dynamic, learnable thresholding scheme is also used to remove shot noise from the distance vectors. Finally, we propose the use of AdaBoost and Support Vector Machines to learn the fusion rules and obtain highly reliable final matching scores from the transformed local distance vectors. The integration of the proposed methods leads to a large performance improvement over match-score or decision-level fusion.
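A minimal sketch of the common-distance-space idea, with hypothetical function names and a mean-distance placeholder where the thesis actually learns the fusion rule with AdaBoost or an SVM:

```python
def local_distance_vector(a, b):
    # Element-wise absolute differences: heterogeneous feature
    # vectors become comparable "local distance" vectors.
    return [abs(x - y) for x, y in zip(a, b)]

def suppress_shot_noise(d, threshold):
    # Clamp isolated, implausibly large distances (shot noise)
    # at a threshold; the thesis learns this threshold dynamically.
    return [min(v, threshold) for v in d]

def fused_score(distance_vectors):
    # Placeholder fusion rule: mean distance over all modalities
    # (stand-in for the learned AdaBoost/SVM rule).
    flat = [v for d in distance_vectors for v in d]
    return sum(flat) / len(flat)
```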

    Adaptive visual sampling

    PhD thesis
    Various visual tasks may be analysed in the context of sampling from the visual field. In visual psychophysics, human visual sampling strategies have often been shown, at a high level, to be driven by information- and resource-related factors such as the limited capacity of the human cognitive system, the quality of the information gathered, its relevance in context and the associated efficiency of recovering it. At a lower level, we interpret many computer vision tasks as rooted in similar notions of contextually relevant, dynamic sampling strategies geared towards filtering pixel samples to perform reliable object association. In the context of object tracking, the reliability of such endeavours is fundamentally rooted in the continuing relevance of the object models used for filtering, a requirement complicated by real-world conditions such as dynamic lighting that inconveniently and frequently render those models obsolete. In the context of recognition, performance can be hindered by the lack of learned, context-dependent strategies that satisfactorily filter out samples that are irrelevant or that blunt the potency of the models used for discrimination. In this thesis we interpret the problems of visual tracking and recognition in terms of dynamic spatial and featural sampling strategies and, in this vein, present three frameworks that build on previous methods to provide a more flexible and effective approach. Firstly, we propose an adaptive spatial sampling strategy framework for maintaining statistical object models for real-time robust tracking under changing lighting conditions. We employ colour features in experiments to demonstrate its effectiveness.
The framework consists of five parts: (a) Gaussian mixture models for semi-parametric modelling of the colour distributions of multicolour objects; (b) a constructive algorithm that uses cross-validation to automatically determine the number of components for a Gaussian mixture given a sample set of object colours; (c) a sampling strategy for fast tracking using colour models; (d) a Bayesian formulation enabling models of the object and the environment to be employed together in filtering samples by discrimination; and (e) a selectively adaptive mechanism that enables colour models to cope with changing conditions and permits more robust tracking. Secondly, we extend the concept to an adaptive spatial and featural sampling strategy to deal with very difficult conditions such as small target objects in cluttered environments undergoing severe lighting fluctuations and extreme occlusions. This builds on previous work on dynamic feature selection during tracking by reducing redundancy among the features selected at each stage, as well as more naturally balancing short-term and long-term evidence, the latter to facilitate model rigidity under sharp, temporary changes such as occlusion whilst permitting model flexibility under slower, long-term changes such as varying lighting conditions. This framework consists of two parts: (a) Attribute-based Feature Ranking (AFR), which combines two attribute measures, discriminability and independence from other features; and (b) Multiple Selectively-adaptive Feature Models (MSFM), which maintains a dynamic feature reference of the target object's appearance. We call this framework Adaptive Multi-feature Association (AMA).
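The Bayesian sample-filtering step (part d) can be illustrated with a single-Gaussian stand-in for each mixture model; the one-dimensional colour value and all names below are illustrative assumptions, not the thesis's implementation:

```python
import math

def gauss_pdf(x, mu, var):
    # 1-D Gaussian density (stand-in for one mixture component).
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_object_given_colour(x, obj, bg, prior_obj=0.5):
    # Bayes' rule: posterior probability that a sampled colour x
    # belongs to the object model rather than the environment model.
    num = gauss_pdf(x, *obj) * prior_obj
    den = num + gauss_pdf(x, *bg) * (1.0 - prior_obj)
    return num / den
```

Samples whose posterior falls below some cut-off would be discarded before the model update.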
Finally, we present an adaptive spatial and featural sampling strategy that extends established Local Binary Pattern (LBP) methods and overcomes many severe limitations of the traditional approach, such as limited spatial support, restricted sample sets, and ad hoc joint and disjoint statistical distributions that may fail to capture important structure. Our framework enables more compact, descriptive LBP-type models to be constructed, which may be employed in conjunction with many existing LBP techniques to improve their performance without modification. The framework consists of two parts: (a) a new LBP-type model known as Multiscale Selected Local Binary Features (MSLBF); and (b) a novel binary feature selection algorithm called Binary Histogram Intersection Minimisation (BHIM), which is shown to be more powerful than established methods used for binary feature selection such as Conditional Mutual Information Maximisation (CMIM) and AdaBoost.
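The traditional LBP codes this framework builds on can be sketched as follows; the multiscale support and BHIM selection machinery are not reproduced here:

```python
def lbp_code(neighbours, centre):
    # Classic 8-bit LBP: each of the 8 neighbours that is >= the
    # centre pixel contributes one bit to the code.
    return sum(1 << i for i, n in enumerate(neighbours) if n >= centre)

def is_uniform(code):
    # A pattern is "uniform" if its circular bit string has at most
    # two 0/1 transitions; these patterns capture most local structure
    # (edges, corners, spots) and are the ones usually histogrammed.
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```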

    Facial emotion recognition using min-max similarity classifier

    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent-systems applications. However, automatic recognition of facial expressions using image template matching suffers from natural variability in facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms the existing template matching methods.
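The paper's exact formulation may differ, but a common Min-Max (intersection-over-union style) similarity for non-negative templates, used inside a nearest neighbor classifier, looks like this; the gallery layout is an illustrative assumption:

```python
def min_max_similarity(a, b):
    # Ratio of element-wise minima to element-wise maxima.  A large
    # outlier in one pixel is bounded: it only inflates the
    # denominator for that pixel, so it cannot dominate the score.
    return (sum(min(x, y) for x, y in zip(a, b))
            / sum(max(x, y) for x, y in zip(a, b)))

def nearest_neighbour(query, gallery):
    # gallery: list of (label, template) pairs; returns the label of
    # the template most similar to the query.
    return max(gallery, key=lambda item: min_max_similarity(query, item[1]))[0]
```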

    Iris Indexing and Ear Classification

    To identify an individual using a biometric system, the input biometric data typically has to be compared against that of every identity in the existing database during the matching stage. The response time of the system grows with the number of enrolled individuals (i.e., the database size), which is not acceptable for real-time monitoring or large-scale data. This thesis addresses the problem of reducing the number of database candidates considered during matching, in the context of iris and ear recognition. For the iris, an indexing mechanism based on the Burrows-Wheeler Transform (BWT) is proposed. Experiments on the CASIA version 3 iris database show a significant reduction in both search time and search space, suggesting the potential of this scheme for indexing iris databases. The ear classification scheme proposed in the thesis parameterizes the shape of the ear and assigns it to one of four classes: round, rectangle, oval and triangle. Experiments on the MAGNA database suggest the potential of this scheme for classifying ear databases.
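The Burrows-Wheeler Transform at the heart of the indexing scheme can be sketched with the textbook rotation-sorting construction (production implementations use suffix arrays instead); how the thesis maps iris codes onto strings is not reproduced here:

```python
def bwt(s):
    # Burrows-Wheeler Transform: append a sentinel, sort all cyclic
    # rotations, and read off the last column.  Similar substrings
    # end up clustered, which an index can exploit.
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)
```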

    Robust recognition of facial expressions on noise degraded facial images

    Magister Scientiae - MSc
    We investigate the use of noise-degraded facial images for facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expression images containing various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt-and-pepper noise, and speckle noise to noiseless facial images. Classifiers were first trained on images without noise and then tested on images with noise. Next, classifiers were trained on images with noise and then tested on both noisy and noiseless images. Finally, classifiers were tested on images while the level of salt-and-pepper noise in the test set was increased. Our results show a distinct degradation of recognition accuracy. We also found that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates beyond those achieved on normal, noiseless images. We attribute this effect to the Gaussian envelope component of the Gabor filters being sympathetic to Gaussian-like noise of similar variance. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to predict how recognition rates would degrade further if more noise were added to the images.
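One of the noise types is easy to sketch: the helper below is an illustrative stand-in (not the thesis's code) for corrupting a flat list of 8-bit pixel values with salt-and-pepper noise:

```python
import random

def salt_and_pepper(pixels, amount, seed=0):
    # Set a fraction `amount` of pixels to pure black (0) or pure
    # white (255), chosen uniformly at random; seeded for
    # reproducible experiments.
    rng = random.Random(seed)
    out = list(pixels)
    for i in rng.sample(range(len(out)), int(len(out) * amount)):
        out[i] = rng.choice((0, 255))
    return out
```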

    Fusion Iris and Periocular Recognitions in Non-Cooperative Environment

    The performance of iris recognition in a non-cooperative environment can be negatively impacted when the resolution of the iris images is low, causing failure to determine the eye center and the limbic and pupillary boundaries during iris segmentation. Hence, a combination with periocular features is suggested to increase the reliability of the recognition system. However, periocular texture features can easily be affected by background clutter, while periocular colour features are still limited by spatial information and quantization effects. This happens because of the varying distances between the sensor and the subject during the iris acquisition stage, as well as differences in image size and orientation. The proposed periocular feature extraction method combines the rotation-invariant uniform local binary pattern for texture features with colour moments for colour features. In addition, the hue-saturation-value colour space is used to avoid losing discriminative information in the eye image. The proposed combination of texture and colour features gives the highest accuracy for periocular recognition, with more than 71.5% on the UBIRIS.v2 dataset and 85.7% on the UBIPr dataset. For the fused recognition, the proposed method achieves the highest accuracy, with more than 85.9% on the UBIRIS.v2 dataset and 89.7% on the UBIPr dataset.
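The colour-moment features can be sketched as follows for one channel; this is a pure-Python stand-in that returns the first three moments, with the third central moment reported as a signed cube root so it stays in the channel's units:

```python
def colour_moments(channel):
    # First three colour moments of one channel (e.g. hue values
    # from the HSV image): mean, standard deviation, and the signed
    # cube root of the third central moment.
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    third = sum((v - mean) ** 3 for v in channel) / n
    skew = (1 if third >= 0 else -1) * abs(third) ** (1.0 / 3.0)
    return mean, var ** 0.5, skew
```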

    Investigation of iris recognition in the visible spectrum

    Among the biometric systems developed so far, iris recognition has emerged as one of the most reliable. Most iris recognition research has been conducted under near-infrared illumination. In unconstrained scenarios, iris images are captured under the visible light spectrum and therefore contain various types of imperfections. This thesis evaluates the merits of fusing information from various sources to improve the state-of-the-art accuracy of colour iris recognition systems. It investigates how fundamentally different fusion strategies can increase the degree of choice available in meeting given performance criteria. Initially, simple fusion mechanisms are employed to increase the accuracy of an iris recognition system; more complex fusion architectures are then elaborated to further enhance the biometric system's accuracy. In particular, the design of the iris recognition system with reduced constraints is carried out using three different fusion approaches: multi-algorithmic fusion, texture-and-colour fusion, and multiple classifier systems. In the first approach, a novel iris feature extraction methodology is proposed and a multi-algorithmic iris recognition system using score fusion, composed of three individual systems, is benchmarked. In the texture-and-colour fusion approach, the advantages of fusing information from the iris texture with data extracted from eye colour are illustrated. Finally, the multiple classifier systems approach investigates how the robustness and practicability of an iris recognition system operating on visible-spectrum images can be enhanced by training individual classifiers on different iris features.
Besides the various fusion techniques explored, an iris segmentation algorithm is proposed, and a methodology is introduced for finding which channels of a colour space reveal the most discriminant information in the iris texture. The contributions presented in this thesis indicate that iris recognition systems operating on visible-spectrum images can be designed to achieve the accuracy required by a particular application scenario. The iris recognition systems developed in the present study are also suitable for mobile and embedded implementations.
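Score-level fusion of the kind benchmarked in the first approach is often implemented as min-max normalization followed by a weighted sum; the sketch below assumes that simple rule, not necessarily the thesis's exact one:

```python
def minmax_norm(scores):
    # Map one matcher's raw scores onto [0, 1] so that matchers
    # with different score scales become comparable.
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(score_lists, weights):
    # Normalise each matcher's scores, then combine the i-th score
    # of every matcher with fixed weights.
    norm = [minmax_norm(s) for s in score_lists]
    return [sum(w * m[i] for w, m in zip(weights, norm))
            for i in range(len(norm[0]))]
```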

    Feature Extraction from Iris Images Using 2-D Gabor Filters

    Abstract – The iris is one of the parts of the human body most often used in biometric recognition systems because of its high degree of distinctiveness. Feature extraction is one of the stages in developing an iris biometric recognition system. This stage aims to extract information from the segmented iris image so that it can serve as a unique signature of the corresponding iris. In this paper, feature extraction is performed using 2-D Gabor filters. These filters are used because they provide an optimal joint representation of a signal in the spatial and frequency domains. The output of the 2-D Gabor filters is demodulated using 2-D Gabor quadrature pairs to produce an iris code that serves as the discriminating feature of the iris. The experiments in this study yielded the best iris features when the filter size was 33×33. The orientation angles used for the real and imaginary features were -45º, 0º, 45º, and 90º.
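The real part of a 2-D Gabor filter described above can be sketched as follows; parameter names are illustrative, and the paper's filters would use size=33 with the orientations listed:

```python
import math

def gabor_real(size, wavelength, theta, sigma):
    # Real (cosine) part of a 2-D Gabor filter: a sinusoidal carrier
    # at orientation theta under an isotropic Gaussian envelope.
    # The imaginary (sine) part would form the quadrature pair used
    # to demodulate responses into iris-code bits.
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            x_theta = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            row.append(envelope * math.cos(2.0 * math.pi * x_theta / wavelength))
        kernel.append(row)
    return kernel
```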