31 research outputs found
Proof-of-Concept
Biometrics is a rapidly expanding field, regarded as a possible solution wherever strong
authentication guarantees are required. Although the area is quite advanced in theoretical
terms, its practical use still raises problems: available systems depend on a high level of
user cooperation to achieve acceptable performance, which was the backdrop for this project.
Based on a study of the state of the art, we propose a new, less cooperative biometric system
that reaches acceptable performance levels.

The constant need for stronger security, particularly in authentication, motivates the study
of biometrics as a possible solution. Current mechanisms are based on something one knows
(a password) or something one possesses (a PIN code); however, this kind of information is
easily compromised or circumvented. Biometrics is therefore seen as a more robust solution,
since it grounds authentication in physical or behavioural measurements that define something
a person is or does ("who you are" or "what you do").

As biometrics is a very promising approach to authenticating individuals, new biometric
systems appear ever more frequently. These systems rely on physical or behavioural
measurements to enable authentication (recognition) with a considerable degree of certainty.
Recognition based on human body movement (gait), facial features, or the structural patterns
of the iris are some examples of the information sources on which current systems can draw.
However, despite performing well as autonomous recognition agents, they still demand a high
level of cooperation. With this in mind, and given everything that already exists in biometric
recognition, the field is moving towards making its methods as uncooperative as possible,
broadening its goals beyond mere authentication in controlled environments to surveillance
and control in non-cooperative settings (e.g. riots, robberies, airports).

It is in this context that this project arises. Through a study of the state of the art, it
aims to show that it is possible to build a system that operates in less cooperative
environments, detecting and recognising any person who comes within its range.

The proposed system, PAIRS (Periocular and Iris Recognition System), performs recognition, as
its name indicates, using information extracted from the iris and the periocular region (the
region surrounding the eyes). The system comprises four stages: data capture, preprocessing,
feature extraction, and recognition. For data capture, a high-resolution image-acquisition
device capable of capturing in the NIR (Near-Infra-Red) spectrum was assembled; capturing in
this spectrum mainly favours iris recognition, since visible-spectrum capture would be more
sensitive to variations in ambient light. The preprocessing stage incorporates the modules
responsible for user detection, image-quality assessment, and iris segmentation. The detection
module triggers the whole process, since it verifies whether a person is present in the scene.
Once a person is detected, the regions of interest corresponding to the iris and the periocular
region are located, and the quality of their acquisition is checked. With these steps
completed, the iris of the left eye is segmented and normalised. Subsequently, based on
several descriptors, biometric information is extracted from the detected regions of interest
and a biometric feature vector is created. Finally, the collected biometric data are compared
against those already stored in the database, producing a ranked list of biometric similarity
scores and, from it, the system's final response.

Once the implementation was complete, a set of images was acquired with the system, with the
participation of a group of volunteers. This image set made it possible to run performance
tests, verify and tune parameters, and optimise the feature-extraction and recognition
components. Analysis of the results showed that the proposed system can perform its functions
under less cooperative conditions.
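The segmentation-and-normalisation step described above is commonly implemented with a Daugman-style rubber-sheet mapping. The sketch below is a minimal illustration of that general idea under assumed inputs (the pupil and iris boundaries modelled as circles, fixed grid resolutions); it is not the PAIRS implementation.

```python
import math

def rubber_sheet_coords(pupil, iris, radial_res=8, angular_res=16):
    """Daugman-style rubber-sheet mapping: sample points between the pupil
    and iris boundaries (each given as a (cx, cy, r) circle) on a normalised
    (r, theta) grid. Returns a radial_res x angular_res grid of (x, y)."""
    pcx, pcy, pr = pupil
    icx, icy, ir = iris
    grid = []
    for i in range(radial_res):
        t = i / (radial_res - 1)  # 0 at the pupil boundary, 1 at the iris boundary
        row = []
        for j in range(angular_res):
            theta = 2 * math.pi * j / angular_res
            # Boundary points on each circle for this angle.
            px = pcx + pr * math.cos(theta)
            py = pcy + pr * math.sin(theta)
            ix = icx + ir * math.cos(theta)
            iy = icy + ir * math.sin(theta)
            # Linear interpolation between the two boundaries.
            row.append((px + t * (ix - px), py + t * (iy - py)))
        grid.append(row)
    return grid

coords = rubber_sheet_coords(pupil=(100, 100, 20), iris=(100, 100, 60))
```

Sampling the image intensities at these coordinates yields a fixed-size iris texture that is comparable across captures regardless of pupil dilation.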
UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios
Recently, ocular biometrics in unconstrained environments using images
obtained at visible wavelength have gained the researchers' attention,
especially with images captured by mobile devices. Periocular recognition has
been demonstrated to be an alternative when the iris trait is not available due
to occlusions or low image resolution. However, the periocular trait does not
have the high uniqueness presented in the iris trait. Thus, the use of datasets
containing many subjects is essential to assess biometric systems' capacity to
extract discriminating information from the periocular region. Also, to address
the within-class variability caused by lighting and attributes in the
periocular region, it is of paramount importance to use datasets with images of
the same subject captured in distinct sessions. As the datasets available in
the literature do not present all these factors, in this work, we present a new
periocular dataset containing samples from 1,122 subjects, acquired in 3
sessions by 196 different mobile devices. The images were captured under
unconstrained environments with just a single instruction to the participants:
to place their eyes on a region of interest. We also performed an extensive
benchmark with several Convolutional Neural Network (CNN) architectures and
models that have been employed in state-of-the-art approaches based on
Multi-class Classification, Multitask Learning, Pairwise Filters Network, and
Siamese Network. The results achieved in the closed- and open-world protocols,
considering the identification and verification tasks, show that this area
still needs research and development.
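For the verification task evaluated above, a matcher typically thresholds the similarity between two CNN embeddings. The following minimal sketch assumes generic embedding vectors and a hypothetical threshold value; it is not the benchmark's actual protocol.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe_emb, enrolled_emb, threshold=0.5):
    """Verification decision: accept the claimed identity when the
    embedding similarity exceeds a threshold tuned on a development set.
    The threshold here is an assumed placeholder."""
    return cosine_similarity(probe_emb, enrolled_emb) >= threshold
```

In the open-world identification setting, the same similarity score is additionally compared against a rejection threshold to decide whether the probe belongs to any enrolled subject at all.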
Investigation of iris recognition in the visible spectrum
Among the biometric systems developed so far, iris recognition systems have emerged as one of the most reliable. In iris recognition, most research has been conducted on operation under near-infrared illumination. In unconstrained scenarios, iris images are captured under the visible light spectrum and therefore incorporate various types of imperfections. In this thesis, the merits of fusing information from various sources to improve the state-of-the-art accuracy of colour iris recognition systems are evaluated. An investigation is conducted into how fundamentally different fusion strategies can increase the degree of choice available in achieving certain performance criteria. Initially, simple fusion mechanisms are employed to increase the accuracy of an iris recognition system; more complex fusion architectures are then elaborated to further enhance the biometric system's accuracy. In particular, the design of the iris recognition system with reduced constraints is carried out using three different fusion approaches: multi-algorithmic fusion, texture and colour fusion, and multiple classifier systems. In the first approach, a novel iris feature extraction methodology is proposed and a multi-algorithmic iris recognition system using score fusion, composed of three individual systems, is benchmarked. In the texture and colour fusion approach, the advantages of fusing information from the iris texture with data extracted from the eye colour are illustrated. Finally, the multiple classifier systems approach investigates how the robustness and practicability of an iris recognition system operating on visible-spectrum images can be enhanced by training individual classifiers on different iris features.
Besides the various fusion techniques explored, an iris segmentation algorithm is proposed, and a methodology is introduced for finding which channels of a colour space reveal the most discriminant information in the iris texture. The contributions presented in this thesis indicate that iris recognition systems operating on visible-spectrum images can be designed to meet the accuracy required by a particular application scenario. The iris recognition systems developed in this study are also suitable for mobile and embedded implementations.
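Score-level fusion of several matchers, as used in the multi-algorithmic approach, can be sketched as min-max normalisation followed by a weighted sum. The normalisation choice and the default equal weights below are assumptions for illustration, not the thesis's exact configuration.

```python
def min_max_normalize(scores):
    """Map a list of matcher scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_scores(score_lists, weights=None):
    """Weighted-sum score-level fusion: K matchers, each producing one score
    per comparison. Scores are normalised per matcher before combining so
    that matchers with different score ranges contribute comparably."""
    k = len(score_lists)
    weights = weights or [1.0 / k] * k  # assumed equal weights by default
    normed = [min_max_normalize(s) for s in score_lists]
    n = len(score_lists[0])
    return [sum(w * col[i] for w, col in zip(weights, normed)) for i in range(n)]
```

The fused score list can then be thresholded exactly like a single matcher's output, which is what makes score-level fusion a drop-in extension of an existing system.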
Advanced Biometrics with Deep Learning
Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech and gait recognition, etc., as a means of identity management have become commonplace nowadays for various applications. Biometric systems follow a typical pipeline, that is composed of separate preprocessing, feature extraction and classification. Deep learning as a data-driven representation learning approach has been shown to be a promising alternative to conventional data-agnostic and handcrafted pre-processing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm to unify preprocessing, feature extraction, and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality; namely, face biometrics, medical electronic signals (EEG and ECG), voice print, and others
Multi-Modal Ocular Recognition in presence of occlusion in Mobile Devices
Thesis (Ph.D.), School of Computing and Engineering, University of Missouri--Kansas City, 2018. Dissertation advisor: Reza Derakhshani. Includes bibliographical references (pages 128-144).
The presence of eyeglasses in human faces causes real challenges for ocular, facial,
and soft-biometric (such as eyebrow) recognition due to glasses reflection, shadow,
and frame occlusion. In this regard, two operations (eyeglasses detection and eyeglasses
segmentation) are proposed to mitigate the effect of occlusion caused by eyeglasses.
Eyeglasses detection is an important initial step towards eyeglasses segmentation.
Three eyeglasses-detection schemes are proposed: non-learning-based, learning-based,
and deep-learning-based. The non-learning scheme, which consists of cascaded filters,
achieved an overall accuracy of 99.0% on VISOB and 97.9% on FERET. The learning-based
scheme extracts Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG)
features, fuses them, applies classifiers (such as Support Vector Machine (SVM),
Multi-Layer Perceptron (MLP), and Linear Discriminant Analysis (LDA)), and then fuses
the classifier outputs; it obtained a best overall accuracy of about 99.3% on FERET
and 100% on VISOB. The deep-learning-based scheme presents a comparative study of
eyeglasses-frame detection using different Convolutional Neural Network (CNN)
structures applied to the frame-bridge region and the extended ocular region; the
best CNN model obtained an overall accuracy of 99.96% for the ROI consisting of the
frame bridge.
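As a rough illustration of one of the handcrafted descriptors named above, a basic 8-neighbour LBP histogram can be computed as follows. The list-of-lists image representation and the simple border handling are assumed simplifications, not the dissertation's implementation.

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code for interior pixel (y, x):
    each neighbour contributes one bit, set when it is >= the centre value."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels; this
    histogram is the texture feature vector fed to a classifier."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

In the learning-based scheme described above, such a histogram would be concatenated with a HOG descriptor (feature-level fusion) before classification.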
Moreover, two eyeglasses-segmentation schemes are introduced. The first is a cascaded
Convolutional Neural Network (CNN) scheme, consisting of cascaded CNNs for eyeglasses
detection, weight generation, and glasses segmentation, followed by mathematical and
binarization operations; it achieved 100% eyeglasses detection and 91% segmentation
accuracy. The second is a convolutional-deconvolutional network, implemented with
convolutional layers, deconvolutional layers, and one custom (lambda) layer; it
achieved a better segmentation accuracy of 97%, improving on the cascaded approach.
Furthermore, two soft-biometric re-identification schemes with eyeglasses mitigation
are introduced. The first is eyebrow-based user authentication, consisting of local,
global, and deep feature extraction with learning-based matching; the best result was
a 0.63% EER using score-level fusion of handcrafted descriptors (HOG and GIST) with
the deep VGG16 descriptor. The second is eyeglasses-based user authentication,
consisting of eyeglasses segmentation, morphological cleanup, feature extraction, and
learning-based matching; the best result was a 3.44% EER using score-level fusion of
handcrafted descriptors (HOG and GIST) with the deep VGG16 descriptor.
Also, an EER improvement of 2.51% for indoor vs. outdoor (In: Out) lighting settings
was achieved for eyebrow-based authentication after eyeglasses segmentation and
removal using the convolutional-deconvolutional approach followed by in-painting.
Contents: Introduction -- Background in machine learning and computer vision --
Eyeglasses detection and segmentation -- User authentication using soft-biometrics --
Conclusion and future work -- Appendix.
Eye center localization and gaze gesture recognition for human-computer interaction
© 2016 Optical Society of America. This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, thus allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database, and it outperforms all of them in terms of localization accuracy. Further tests on the extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has shown outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
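The gradient-based side of this localization idea can be illustrated with a well-known alignment objective: a candidate eye centre scores high when the displacement vectors from it to the surrounding pixels align with those pixels' image gradients, which holds for the radially symmetric dark pupil. The sketch below assumes precomputed unit gradient vectors and a small candidate set; it omits the paper's isophote features and selective oriented gradient filter.

```python
import math

def eye_center_objective(gradients, cx, cy):
    """Mean squared alignment between unit displacement vectors from the
    candidate centre (cx, cy) and unit image gradients. `gradients` maps
    pixel (x, y) -> unit gradient vector (gx, gy); this input format is an
    assumption for the sketch."""
    total, n = 0.0, 0
    for (x, y), (gx, gy) in gradients.items():
        dx, dy = x - cx, y - cy
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue  # skip the candidate pixel itself
        dot = (dx / norm) * gx + (dy / norm) * gy
        total += max(dot, 0.0) ** 2  # only outward-pointing gradients vote
        n += 1
    return total / n if n else 0.0

def locate_eye_center(gradients, candidates):
    """Return the candidate centre that maximises the alignment objective."""
    return max(candidates, key=lambda c: eye_center_objective(gradients, c[0], c[1]))
```

A full implementation would evaluate the objective densely over the image (or a coarse-to-fine grid, as in the paper's global-to-regional scheme) rather than over a handful of candidates.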
Advancing the technology of sclera recognition
PhD Thesis.
Emerging biometric traits have been suggested recently to overcome some challenges
and issues related to utilising traditional human biometric traits such as the face,
iris, and fingerprint. In particular, iris recognition has achieved high accuracy
rates under the Near-InfraRed (NIR) spectrum and is employed in many applications
for security and identification purposes. However, as modern imaging devices operate
in the visible spectrum, capturing colour images, iris recognition has faced
challenges when applied to colour images, especially eye images with dark
pigmentation. Other issues with iris recognition under the NIR spectrum are the
constraints on the capturing process, resulting in failure-to-enrol and degradation
in system accuracy and performance. As a result, the research community has
investigated using other traits, such as the sclera, to support the iris biometric
in the visible spectrum.
The sclera, commonly known as the white part of the eye, includes a complex network
of blood vessels and veins surrounding the eye. The vascular pattern within the
sclera has different formations and layers, providing powerful features for human
identification. In addition, these blood vessels can be acquired in the visible
spectrum and thus captured with ubiquitous camera-based devices. As a consequence,
recent research has focused on developing sclera recognition. However, sclera
recognition, like any biometric system, has issues and challenges which need to be
addressed. These issues are mainly related to sclera segmentation, blood-vessel
enhancement, feature extraction, template registration, matching, and decision
methods. In addition, employing the sclera biometric in the wild, where relaxed
imaging constraints are utilised, has introduced more challenges such as
illumination variation, specular reflections, non-cooperative user capturing,
blocked sclera regions due to glasses and eyelashes, variation in capturing
distance, multiple gaze directions, and eye rotation.
The aim of this thesis is to address these sclera biometric challenges and highlight
the potential of this trait, which may also inspire further research on tackling
sclera recognition system issues. To overcome the above-mentioned issues and
challenges, three major contributions are made, which can be summarised as:
1) designing an efficient sclera recognition system under constrained imaging
conditions, which includes new sclera segmentation, blood-vessel enhancement,
vascular binary network mapping and feature extraction, and template registration
techniques; 2) introducing a novel sclera recognition system under relaxed imaging
constraints, which exploits novel sclera segmentation, sclera template rotation
alignment and distance scaling methods, and complex sclera features; 3) presenting
solutions to tackle issues related to applying sclera recognition in a real-time
application, such as eye localisation, eye corner and gaze detection, together with
a novel image quality metric.
The evaluation of the proposed contributions is carried out using five databases
with different properties, representing various challenges and issues: UBIRIS.v1,
UBIRIS.v2, UTIRIS, MICHE, and an in-house database. The results, in terms of
segmentation accuracy, Equal Error Rate (EER), and processing time, show significant
improvement of the proposed systems over state-of-the-art methods.
Ministry of Higher Education and Scientific Research in Iraq and the Iraqi Cultural
Attaché in London.
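Blood-vessel binarisation, one ingredient of the vessel-enhancement stage discussed above, is often approached with local adaptive thresholding, since vessels are thin dark structures against a brighter sclera. The toy version below (a mean-based window threshold over a list-of-lists grayscale grid) is an assumed simplification, not the thesis's method.

```python
def adaptive_threshold(img, win=3, bias=0):
    """Binarise a grayscale grid: a pixel is marked as 'vessel' (1) when it
    is darker than the mean of its win x win neighbourhood minus a bias.
    Windows are clipped at the image borders."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] < mean - bias else 0
    return out
```

The local (rather than global) threshold is what makes this robust to the illumination variation listed among the in-the-wild challenges above: each pixel is compared only against its own neighbourhood.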
Impact and Detection of Facial Beautification in Face Recognition: An Overview
Facial beautification induced by plastic surgery, cosmetics or retouching can substantially alter the appearance of face images. Such types of beautification can negatively affect the accuracy of face recognition systems. In this work, a conceptual categorisation of beautification is presented, relevant scenarios with respect to face recognition are discussed, and related publications are revisited. Additionally, technical considerations and trade-offs of the surveyed methods are summarised, along with open issues and challenges in the field. This survey is intended as a comprehensive point of reference for biometric researchers and practitioners working in the field of face recognition who aim to tackle challenges caused by facial beautification.