Multimodal biometrics score level fusion using non-confidence information
Multimodal biometrics refers to automatic authentication methods that rely on multiple modalities of measurable physical characteristics. It alleviates most of the restrictions of single biometrics. To combine multimodal biometric scores, three categories of fusion approaches are available: rule-based, classification-based and density-based. When choosing an approach, one has to consider not only the fusion performance, but also system requirements and other circumstances. In the context of verification, classification errors arise from samples in the overlapping region (or non-confidence region) between genuine users and impostors. In score space, further separating the samples outside the non-confidence region does not yield further verification improvements. Therefore, information contained in the non-confidence region might be useful for improving the fusion process. To date, no attempts have been reported in the literature to enhance the fusion process using this additional information. In this work, the use of this information is explored in the rule-based and density-based approaches mentioned above.
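As an illustrative sketch only (not the thesis's actual method), a minimal rule-based score-level fusion function might look like the following; the function name, the choice of rules, and the assumption that scores are already normalized to a common range are all hypothetical:

```python
import numpy as np

def fuse_scores(scores, weights=None, rule="sum"):
    """Rule-based fusion of match scores from several modalities.

    `scores` are assumed to be normalized to a common range (e.g. [0, 1]).
    """
    s = np.asarray(scores, dtype=float)
    if rule == "sum":
        if weights is None:
            return float(s.mean())          # simple sum (mean) rule
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w, s) / w.sum())  # weighted-sum rule
    if rule == "max":
        return float(s.max())
    if rule == "min":
        return float(s.min())
    raise ValueError(f"unknown rule: {rule}")

# Scores from two modalities (e.g. face and fingerprint),
# fused with a weighted-sum rule favoring the first modality.
fused = fuse_scores([0.82, 0.64], weights=[0.7, 0.3], rule="sum")
```

The weighted-sum rule is the most common rule-based choice in practice; the weights would typically reflect the relative reliability of each modality on validation data.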
Face recognition in the wild.
Research in face recognition deals with problems related to Age, Pose, Illumination and Expression (A-PIE), and seeks approaches that are invariant to these factors. Video images add a temporal aspect to the image acquisition process. Another degree of complexity, above and beyond A-PIE recognition, occurs when multiple pieces of information are known about people, which may be distorted, partially occluded, or disguised, and when the imaging conditions are totally unorthodox. A-PIE recognition in these circumstances becomes truly "wild", and Face Recognition in the Wild has therefore emerged as a field of research in the past few years. Its main purpose is to challenge constrained approaches to automatic face recognition, emulating some of the virtues of the Human Visual System (HVS), which is very tolerant to age, occlusion and distortions in the imaging process. The HVS also integrates information about individuals and adds context to recognize people within an activity or behavior. Machine vision has a very long road ahead in emulating the HVS, and face recognition in the wild is a step along that path. In this thesis, Face Recognition in the Wild is defined as unconstrained face recognition under A-PIE+; the (+) connotes any alterations to the design scenario of the face recognition system. This thesis evaluates the Biometric Optical Surveillance System (BOSS), developed at the CVIP Lab, using low-resolution imaging sensors. Specifically, the thesis tests BOSS using cell phone cameras, and examines the potential of facial biometrics on smart portable devices such as iPhones, iPads, and tablets. For quantitative evaluation, the thesis focuses on a specific testing scenario of the BOSS software using iPhone 4 cell phones and a laptop. Testing was carried out indoors, at the CVIP Lab, using 21 subjects at distances of 5, 10 and 15 feet, with three poses, two expressions and two illumination levels.
The three steps (detection, representation and matching) of the BOSS system were tested in this imaging scenario. False positives in face detection increased with distance and with pose angles above ±15°. The overall identification rate (face detection at confidence levels above 80%) also degraded with distance, pose, and expression. The indoor lighting added further challenges by inducing shadows that affected the image quality and the overall performance of the system. While this limited number of subjects and somewhat constrained imaging environment do not fully support a "wild" imaging scenario, they did provide deep insight into the issues with automatic face recognition. The recognition rate curves demonstrate the limits of low-resolution cameras for face recognition at a distance (FRAD), yet they also provide a plausible defense for possible A-PIE face recognition on portable devices.
Pattern Recognition
Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small degree of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research in the field of pattern recognition.
Adaptive classifier ensembles for face recognition in video-surveillance
When implementing security systems such as intelligent video surveillance, using face images offers many advantages over other biometric traits. In particular, it allows potential individuals of interest to be detected in a discreet, non-intrusive way, which can be especially valuable in situations such as watch-list screening, searching archived footage, or face re-identification.
Nevertheless, face recognition still faces many difficulties specific to video surveillance. Among others, the lack of control over the observed environment leads to large variations in lighting conditions, image resolution, motion blur, and face orientation and expression. To recognize individuals, face models are usually generated from a limited number of reference images or videos collected during enrollment sessions. However, since these acquisitions do not necessarily take place under the same observation conditions, the reference data do not always represent the complexity of the real problem. Moreover, although face models can be adapted when new reference data become available, incremental learning based on significantly different data exposes the system to a risk of knowledge corruption. Finally, only part of this knowledge is actually relevant for classifying a given image.
In this thesis, a new system is proposed for the automatic detection of individuals of interest in video surveillance. More specifically, it focuses on a user-centered scenario in which a face recognition system is integrated into a decision-support tool that alerts an operator when an individual of interest is detected in video streams. Such a system must be able to add or remove individuals of interest during operation, as well as update their face models over time with new reference data. To this end, the proposed system relies on concept change detection to guide a learning strategy based on classifier ensembles. Each individual enrolled in the system is represented by an ensemble of two-class classifiers, each specialized for different observation conditions detected in the reference data. In addition, a new rule for the dynamic fusion of classifier ensembles is proposed, using concept models to estimate the relevance of each classifier with respect to each image to be classified. Finally, faces are tracked from one frame to the next in order to group them into trajectories and accumulate decisions over time.
In Chapter 2, concept change detection is first used to limit the growth in complexity of a template-matching system that adopts a self-updating strategy for its galleries. A new context-sensitive approach is proposed, in which only high-confidence images captured under different observation conditions are used to update the face models. Experiments were conducted with three public face databases, using a standard template-matching system combined with a module for detecting changes in illumination conditions. The results show that the proposed approach reduces the complexity of these systems while maintaining performance over time.
In Chapter 3, a new adaptive system based on classifier ensembles is proposed for face recognition in video surveillance. It is composed of an ensemble of incremental classifiers for each enrolled individual, and relies on concept change detection to refine the face models when new data become available. A hybrid strategy is proposed in which classifiers are added to the ensembles only when an abrupt change is detected in the reference data; during a gradual change, the associated classifiers are instead updated, refining the knowledge specific to the corresponding concept. A particular implementation of this system is proposed, using ensembles of probabilistic Fuzzy-ARTMAP classifiers generated and updated through a strategy based on dynamic particle swarm optimization, with the Hellinger distance between histograms used to detect changes. Simulations on the Faces in Action (FIA) video-surveillance database show that the proposed system maintains a high level of performance over time while limiting knowledge corruption. It achieves classification performance superior to that of a similar passive system (without change detection), as well as to probabilistic kNN and TCM-kNN reference systems.
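The Hellinger distance between histograms, used above to detect changes in the reference data, can be sketched as follows; the `abrupt_change` helper and its threshold value are illustrative assumptions, not the thesis's tuned parameters:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two histograms (normalized internally).

    Ranges from 0 (identical distributions) to 1 (disjoint support).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def abrupt_change(hist_new, hist_ref, threshold=0.3):
    """Flag an abrupt concept change when the distance exceeds a threshold."""
    return hellinger(hist_new, hist_ref) > threshold
```

In a change-detection setting, a distance above the threshold would trigger adding a new classifier to the ensemble, while a smaller distance would be treated as gradual drift and handled by updating the existing classifiers.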
In Chapter 4, an evolution of the system presented in Chapter 3 is proposed, integrating mechanisms to dynamically adapt the system's behavior to changing observation conditions during operation. A new fusion rule based on dynamic weighting is proposed, assigning to each classifier a weight proportional to its estimated level of competence with respect to each image to be classified. Moreover, these competences are estimated using the concept models already employed for change detection during learning, which reduces the resources required during operation. An evolution of the implementation proposed in Chapter 3 is presented, in which concepts are modeled with the Fuzzy C-Means clustering algorithm and classifier fusion is performed with a weighted average. Experimental simulations with the FIA and Chokepoint video-surveillance databases show that the proposed fusion method achieves better results than the DSOLA dynamic selection method while using considerably fewer computational resources. In addition, the proposed method shows classification performance superior to that of probabilistic kNN, TCM-kNN and Adaptive Sparse Coding reference systems.
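A minimal sketch of dynamic weighting along these lines, assuming Fuzzy C-Means-style memberships stand in for the concept-based competence estimates (the function names, the fuzzifier value, and the use of one classifier per concept are all hypothetical simplifications):

```python
import numpy as np

def concept_memberships(x, centroids, m=2.0):
    """Fuzzy C-Means-style membership of sample x in each concept.

    Closer centroids receive higher membership; m > 1 is the fuzzifier.
    """
    d = np.linalg.norm(centroids - x, axis=1) + 1e-12  # avoid div by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def dynamic_weighted_fusion(classifier_scores, memberships):
    """Weighted average of per-classifier scores, one classifier per concept.

    Each classifier's weight is its concept membership for the input sample,
    so classifiers trained under similar observation conditions dominate.
    """
    w = np.asarray(memberships, dtype=float)
    s = np.asarray(classifier_scores, dtype=float)
    return float(np.dot(w, s) / w.sum())

# A sample equidistant from two concept centroids gets equal weights,
# so the fused score is the plain average of the two classifier scores.
mem = concept_memberships(np.array([1.0]), np.array([[0.0], [2.0]]))
score = dynamic_weighted_fusion([0.9, 0.1], mem)
```

Unlike dynamic selection (which picks a single competent classifier per sample), this rule keeps every classifier's vote but scales it by estimated competence, which is what allows the concept models to be reused at no extra operational cost.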
Advanced Biometrics with Deep Learning
Biometrics, such as fingerprint, iris, face, hand-print, hand-vein, speech and gait recognition, as a means of identity management, have become commonplace nowadays in various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: namely, face biometrics, medical electronic signals (EEG and ECG), voiceprint, and others.