Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature
Today, computer vision algorithms are central to many fields and applications, such as closed-circuit television security, health-status monitoring, recognition of specific people or objects, and robotics. On this topic, the present paper offers a recent review of the literature on computer vision algorithms (recognition and tracking of faces, bodies, and objects) oriented towards socially assistive robot applications. The performance, frames-per-second (FPS) processing speed, and hardware used to run the algorithms are highlighted by comparing the available solutions. Moreover, this paper provides general information for researchers interested in knowing which vision algorithms are available, enabling them to select the one most suitable for their robotic system applications.
CONACYT Doctoral Scholarship, CVU no. 64683
A generic face processing framework: technologies, analyses and applications.
Jang Kim-fung. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references. Abstracts in English and Chinese.
Contents:
1. Introduction: background; the face processing framework (basic architecture, face detection, face tracking, face recognition); the scope and contributions of the thesis; outline of the thesis.
2. Facial Feature Representation: facial feature analysis (pixel information, geometry information); extracting and coding of facial features (face recognition, facial expression classification, other related work); discussion of facial features (performance evaluation for face recognition, evolution of face recognition, evaluation of two state-of-the-art face recognition methods); problems with the current situation.
3. Face Detection Algorithms and Committee Machine: introduction to face detection; the Face Detection Committee Machine (FDCM), including a review of three committee-machine approaches and the FDCM approach; evaluation.
4. Facial Feature Localization: algorithm for gray-scale images (template matching and separability filter: position of the face and eye region, of the irises, and of the lips); algorithm for color images (eyemap and separability filter: position of eye candidates, position of mouth candidates, selection of face candidates by a cost function); evaluation of both algorithms.
5. Face Processing System: system architecture and limitations; pre-processing module (ellipse color model); face detection module (choosing the classifier, verifying the candidate region); face tracking module (Condensation algorithm, tracking the region using the Hue color model); face recognition module (normalization, recognition); applications.
6. Conclusion.
Bibliography.
Biometric Recognition of 3D Faces
This Master's Thesis was carried out during a study stay at Gjovik University College, Norway, and is written in English. It deals with biometric 3D face recognition. A general biometric system, as well as specific techniques used in 2D and 3D face recognition, is described, and an automatic modular 3D face recognition method is proposed. The algorithm is developed, tested and evaluated on the Face Recognition Grand Challenge (FRGC) database. During preprocessing, facial landmarks are located on the face surface and the three-dimensional model is aligned to a predefined position. In the comparison module, the input probe scan is compared to the gallery template. Three fundamental face recognition algorithms are employed in the recognition pipeline: the eigenface method (PCA), recognition using histogram-based features, and recognition based on anatomical (Bertillon) features of the face. Finally, a decision module fuses the scores provided by the individual recognition techniques.
The resulting performance exceeds that of any of the recognition algorithms used alone.
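The eigenface (PCA) stage of such a pipeline can be sketched as follows. This is a generic illustration with synthetic data and hypothetical function names (`train_eigenfaces`, `match`), not the thesis's 3D implementation:

```python
import numpy as np

def train_eigenfaces(gallery, n_components):
    # gallery: (n_samples, n_pixels) flattened face scans/images.
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # SVD of the centered data; rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    # Coordinates of a face in eigenface space.
    return eigenfaces @ (face - mean)

def match(probe, gallery, mean, eigenfaces):
    # Nearest-neighbour identification in eigenface space.
    probe_w = project(probe, mean, eigenfaces)
    gallery_w = np.array([project(g, mean, eigenfaces) for g in gallery])
    dists = np.linalg.norm(gallery_w - probe_w, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 64))        # 10 enrolled faces, 64 "pixels" each
mean, eigenfaces = train_eigenfaces(gallery, n_components=5)
probe = gallery[3] + rng.normal(scale=0.01, size=64)  # noisy scan of subject 3
print(match(probe, gallery, mean, eigenfaces))  # 3
```

A histogram-based or anatomical-feature matcher would produce its own distance score per gallery subject; fusing those scores is what the thesis's decision module does.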
Unfamiliar facial identity registration and recognition performance enhancement
The work in this thesis studies problems related to the robustness of a face recognition system, with specific attention to handling the complexity of image variation and the inherently limited Unique Characteristic Information (UCI) available within an unfamiliar-identity recognition environment. These issues form the main themes in developing a mutual understanding of extraction and classification strategies, and are carried out as two interdependent blocks of research work.
Naturally, the complexity of the image variation problem builds up from factors including viewing geometry, illumination, occlusion and other kinds of intrinsic and extrinsic image variation. Ideally, recognition performance increases whenever the variation is reduced and/or the UCI is increased. However, reducing variation in 2D facial images may discard important clues or UCI data for a particular face, while increasing the UCI may also increase the image variation.
To reduce the loss of information while reducing or compensating for the variation complexity, a hybrid technique is proposed in this thesis. The technique is derived from three conventional approaches to variation compensation and feature extraction. In this first research block, transformation, modelling and compensation approaches are combined to deal with the variation complexity. The ultimate aim of this combination is to represent (transformation) the UCI without losing important features, by modelling and discarding (compensation) variation, thereby reducing the variation complexity of a given face image. Experimental results show that discarding certain obvious variation enhances the desired information rather than risking the loss of the UCI of interest, and that the modelling and compensation stages benefit both variation reduction and UCI enhancement. Colour, gray-level and edge-image information are used to capture the UCI through analysis of skin colour, facial texture and feature measurements, respectively. The Derivative Linear Binary Transformation (DLBT) technique is proposed for consistency of the feature measurements. Prior knowledge of the input image, such as its symmetry, its informative regions and the consistency of certain features, is fully utilized in preserving the UCI feature information. As a result, similarity and dissimilarity representations for identity parameters or classes are obtained from the selected UCI representation, which involves derived feature size and distance measurements, facial texture and skin colour. These are mainly used to support the strategy of unfamiliar-identity classification in the second block of the research work.
Since all faces share a similar structure, a classification technique should increase the similarity within a class while increasing the dissimilarity between classes. Furthermore, a smaller class places less burden on the identification or recognition process. The collateral classification strategy of identity representation proposed in this thesis exploits the availability of collateral UCI to classify the identity parameters of regional appearance, gender and age. In this regard, collateral UCI is registered in such a way as to collect more identity information. As a result, the performance of unfamiliar-identity recognition is improved, thanks to the class-specific UCI used for class recognition and, potentially, the smaller class sizes. The experiments were conducted on our own database and an open database comprising three regional appearances, two age groups and two genders, incorporating pose and illumination image variations.
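The idea of narrowing identification to a collateral class before matching can be sketched as follows. This is a generic illustration with invented data, labels and function names, not the thesis's method:

```python
import numpy as np

def identify(probe_feat, probe_class, gallery):
    # gallery: list of (identity, class_label, feature_vector) entries.
    # Restricting the search to the probe's collateral class (e.g. a
    # gender/age/regional-appearance label) shrinks the candidate set
    # before the feature comparison.
    candidates = [(ident, feat) for ident, cls, feat in gallery if cls == probe_class]
    if not candidates:                      # fall back to the full gallery
        candidates = [(ident, feat) for ident, cls, feat in gallery]
    dists = [np.linalg.norm(np.asarray(feat) - probe_feat) for _, feat in candidates]
    return candidates[int(np.argmin(dists))][0]

gallery = [
    ("alice", "female-adult", [1.00, 0.00]),
    ("bob",   "male-adult",   [0.94, 0.06]),
    ("carol", "female-adult", [0.00, 1.00]),
]
probe = np.array([0.95, 0.05])
print(identify(probe, "female-adult", gallery))  # alice
```

Without the class filter, "bob" would be the nearest match in feature space; the collateral class label both corrects the decision and reduces the number of comparisons.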
DRUBIS: a distributed face-identification experimentation framework - design, implementation and performance issues
We report on the design, implementation and performance issues of the DRUBIS (Distributed Rhodes University Biometric Identification System) experimentation framework, using the Principal Component Analysis (PCA) face-recognition approach as a case study. DRUBIS is a flexible experimentation framework, distributed over a number of modules that are easily pluggable and swappable, allowing for the easy construction of prototype systems. Web services are the logical means of distributing DRUBIS components, and a number of prototype applications have been implemented from this framework. Popular PCA face-recognition experiments were used to evaluate the framework, and we extract recognition performance measures from them. In particular, we use the framework for a more in-depth study of the suitability of the DFFS (Difference From Face Space) metric as a means of image classification in the area of race and gender determination.
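The DFFS metric mentioned above measures how far an image lies from the PCA subspace spanned by the training faces: the part of the image the eigenface basis cannot reconstruct. A minimal sketch with synthetic data (function names and data are illustrative, not DRUBIS code):

```python
import numpy as np

def face_space(images, n_components):
    # Build a PCA "face space" from flattened training images.
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]

def dffs(image, mean, basis):
    # Difference From Face Space: norm of the component of the centered
    # image that lies outside the PCA subspace.
    centered = image - mean
    reconstruction = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - reconstruction)

rng = np.random.default_rng(1)
# Synthetic "faces": random combinations of 3 underlying patterns.
patterns = rng.normal(size=(3, 100))
faces = rng.normal(size=(50, 3)) @ patterns
mean, basis = face_space(faces, n_components=3)

face_like = rng.normal(size=3) @ patterns   # lies in the pattern subspace
non_face = rng.normal(size=100)             # arbitrary image
print(dffs(face_like, mean, basis) < dffs(non_face, mean, basis))  # True
```

For race or gender determination, one face space can be built per class; an image is then assigned to the class whose space reconstructs it best (smallest DFFS).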
Evaluation and analysis of hybrid intelligent pattern recognition techniques for speaker identification
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The rapid momentum of technological progress in recent years has led to a tremendous rise in the use of biometric authentication systems. The objective of this research is to investigate the problem of identifying a speaker from their voice regardless of the content (i.e. text-independent), and to design efficient methods of combining face and voice to produce a robust authentication system.
A novel approach to speaker identification is developed using wavelet analysis and multiple neural networks, including the Probabilistic Neural Network (PNN), Generalized Regression Neural Network (GRNN) and Radial Basis Function Neural Network (RBF NN) with an AND voting scheme. This approach is tested on the GRID and VidTIMIT corpora, and comprehensive test results have been validated against state-of-the-art approaches. The system was found to be competitive: it improved the recognition rate by 15% compared to classical Mel-Frequency Cepstral Coefficients (MFCC), and reduced the recognition time by 40% compared to the Back-Propagation Neural Network (BPNN), Gaussian Mixture Models (GMM) and Principal Component Analysis (PCA).
Another novel approach, based on vowel formant analysis, is implemented using Linear Discriminant Analysis (LDA). Vowel-formant-based speaker identification is well suited to real-time implementation and requires only a few bytes of information to be stored per speaker, making it both storage- and time-efficient. Tested on GRID and VidTIMIT, the proposed scheme was found to be 85.05% accurate when Linear Predictive Coding (LPC) is used to extract the vowel formants, which is much higher than the accuracy of BPNN and GMM. Since the proposed scheme requires no training time other than creating a small database of vowel formants, it is faster as well. Furthermore, an increasing number of speakers makes it difficult for BPNN and GMM to sustain their accuracy, whereas the proposed score-based methodology scales almost linearly.
Finally, a novel audio-visual fusion-based identification system is implemented using GMM and MFCC for speaker identification and PCA for face recognition. The results of speaker identification and face recognition are fused at different levels, namely the feature, score and decision levels. Both the score-level and decision-level (with OR voting) fusions were shown to outperform feature-level fusion in terms of accuracy and error resilience. This result is in line with the distinct nature of the two modalities, which is lost when they are combined at the feature level. The GRID and VidTIMIT test results validate that the proposed scheme is one of the best candidates for the fusion of face and voice, owing to its low computational time and high recognition accuracy.
Human face detection techniques: A comprehensive review and future research directions
Face detection, an effortless task for humans, is complex to perform on machines. The recent proliferation of computational resources is paving the way for rapid advancement of face detection technology. Many astutely developed algorithms have been proposed to detect faces, yet little heed has been paid to making a comprehensive survey of the available algorithms. This paper provides a fourfold discussion of face detection algorithms. First, we explore a wide variety of available face detection algorithms in five steps: history, working procedure, advantages, limitations, and use in fields other than face detection. Second, we include a comparative evaluation of the different algorithms within each method. Third, we provide detailed comparisons among the algorithms to give an all-inclusive outlook. Lastly, we conclude this study with several promising research directions to pursue. Earlier survey papers on face detection algorithms are limited to technical details of the popularly used algorithms. In our study, however, we cover detailed technical explanations of face detection algorithms and various recent sub-branches of neural networks. We present detailed comparisons among the algorithms, both all-inclusively and under each sub-branch, and we provide the strengths and limitations of these algorithms in a novel literature survey that includes their uses beyond face detection.
Towards an efficient, unsupervised and automatic face detection system for unconstrained environments
Nowadays, there is growing interest in face detection applications for unconstrained environments. The increasing need for public and national security motivated our research on an automatic face detection system. For public security surveillance applications, the face detection system must be able to cope with unconstrained environments, including cluttered backgrounds and complicated illumination. Supervised approaches give very good results in constrained environments, but in unconstrained environments even obtaining all the needed training samples is sometimes impractical. This limitation of supervised approaches impels us to turn to unsupervised approaches. In this thesis, we present an efficient, unsupervised, feature- and configuration-based face detection system. It combines geometric feature detection and local appearance feature extraction to increase the stability and performance of the detection process. It also contains a novel adaptive lighting compensation approach to normalize the complicated illumination of real-life environments. We aim to develop a system that makes as few assumptions as possible from the very beginning, is robust, and exploits accuracy/complexity trade-offs as much as possible. Although our attempt is ambitious for such an ill-posed problem, we manage to tackle it in the end with very few assumptions.
EThOS - Electronic Theses Online Service, GB, United Kingdom
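The role of a lighting compensation step can be sketched with a simple global mean/variance normalization; this is a deliberately basic stand-in under assumed parameters, not the adaptive method the thesis proposes:

```python
import numpy as np

def lighting_normalize(image, eps=1e-6):
    # Global mean/variance normalization, then rescale to [0, 255].
    # Removes overall brightness and contrast differences between images
    # of the same scene captured under different lighting.
    img = image.astype(float)
    z = (img - img.mean()) / (img.std() + eps)
    z = np.clip(z, -3, 3)          # suppress extreme highlights/shadows
    return ((z + 3) / 6 * 255).astype(np.uint8)

rng = np.random.default_rng(2)
face = rng.integers(0, 256, size=(8, 8))
dark = (face * 0.3).astype(np.uint8)          # same scene, dim lighting
bright = (dark + 120).astype(np.uint8)        # same scene, brightness offset
# After normalization the two versions agree:
print(np.array_equal(lighting_normalize(dark), lighting_normalize(bright)))  # True
```

A global correction like this handles uniform brightness shifts only; coping with spatially varying illumination (shadows, side lighting) is what motivates an adaptive, locally varying compensation scheme.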