A Multimodal and Multi-Algorithmic Architecture for Data Fusion in Biometric Systems
Authentication software based on biometric traits
Department of Electrical Engineering
Biometrics such as fingerprint, iris, face, and electrocardiogram (ECG) have been investigated as convenient and powerful security tools that can potentially replace or supplement current possession- or knowledge-based authentication schemes. Recently, the multi-spectral skin photomatrix (MSP) has been identified as a new biometric. Moreover, as interest in the usability and security of wearable devices grows, multi-modal biometric authentication, which combines two or more modalities such as (iris + face) or (iris + fingerprint) for more powerful and convenient authentication, has been widely proposed.
However, one practical drawback of biometrics is irrevocability. Unlike passwords, biometrics cannot be revoked and reissued once compromised, since they do not change over a person's lifetime. Several works on cancelable biometrics have sought to overcome this drawback. ECG has been investigated as a promising biometric, but there is little research on cancelable ECG biometrics.
We aim at a multi-modal biometric scheme for wearable devices under the assumed constraints of relatively high required performance, low computing power, and limited information (users' information is not shared publicly). In this study, we therefore propose a multi-modal biometric authentication scheme combining ECG and MSP. To investigate performance versus fusion level, the AdaBoost algorithm was studied as a score-level fusion method and Majority Voting as a decision-level fusion method. Because the ECG signal is one-dimensional, it helps wearable devices cope with their memory limitations. We selected MSP to combine with ECG because it can be collected by measurement on the inner wrist and can be considered a modality that is hard to steal remotely.
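The difference between the two fusion levels studied here can be illustrated with a toy sketch (plain thresholding and a fixed weighted sum stand in for the actual trained AdaBoost ensemble; all numbers below are made up):

```python
import numpy as np

def decision_level_vote(scores, thresholds):
    """Decision-level fusion: each modality makes its own accept/reject
    decision first; the fused decision is a majority vote over those bits."""
    votes = scores >= thresholds                 # (n_samples, n_modalities)
    return votes.sum(axis=1) > scores.shape[1] / 2

def score_level_fusion(scores, weights, threshold):
    """Score-level fusion: match scores are combined *before* deciding, so a
    strong score in one modality can compensate for a weak one -- information
    that a decision-level vote throws away."""
    return scores @ weights >= threshold

# Two modalities (say, ECG and MSP match scores in [0, 1]).
scores = np.array([[0.9, 0.4],    # genuine user: strong ECG, weak MSP
                   [0.2, 0.3]])   # impostor: weak on both
# The vote rejects the genuine sample (only 1 of 2 modalities accepts),
# while the weighted score sum accepts it and still rejects the impostor.
```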
To evaluate the proposed multi-modal biometrics, we used data collected by the Brain-Computer-Interface lab from 63 subjects. Our AdaBoost-based multi-modal method with performance boost yielded a 99.7% detection probability at a 0.1% false alarm rate (PD0.1) and a 0.3% equal error rate (EER), far better than simple fusion by the Majority Voting algorithm with 21.5% PD0.1 and 1.6% EER. Note that the AdaBoost algorithm was trained on data from only 9 subjects, treated as public data and excluded from the test set, in keeping with the limited-information constraint.
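The reported metrics, EER and detection probability at a fixed false alarm rate (PD0.1 is the detection probability at 0.1% FAR), can be computed directly from sets of genuine and impostor scores. A minimal sketch, not the authors' evaluation code:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: the operating point where the false accept rate
    (FAR) and false reject rate (FRR) cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

def pd_at_far(genuine, impostor, target_far):
    """Detection probability at a fixed false alarm rate; e.g. PD0.1 is
    pd_at_far(genuine, impostor, 0.001)."""
    t = np.quantile(impostor, 1 - target_far)   # threshold giving ~target FAR
    return (genuine >= t).mean()
```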
As an initial step toward user template protection, we propose cancelable ECG-based user authentication using composite hypothesis testing in the compressive sensing domain, deriving a generalized likelihood ratio test (GLRT) detector. We also propose two performance-boosting tricks in the compressive sensing domain to compensate for the performance degradation caused by cancelable schemes: user-template-guided filtering and a T-wave shift model based GLRT detector for the random projection domain. To verify the proposed methods, we checked them against the cancelable biometrics criteria to confirm that the algorithms are indeed cancelable.
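The core cancelability mechanism, a keyed random projection of the ECG pulse, can be sketched as follows. This is a bare random-projection matcher, not the GLRT detector or the performance-boost tricks derived in the work; the correlation threshold is an illustrative assumption:

```python
import numpy as np

def cancelable_template(ecg_pulse, seed, m):
    """Project an ECG pulse into an m-dimensional random subspace.
    The seed is the revocable key: if a template leaks, issue a new seed
    and re-enroll -- the raw signal itself is never stored."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, ecg_pulse.shape[0])) / np.sqrt(m)
    return Phi @ ecg_pulse

def match(probe, template, seed, m, threshold=0.9):
    """Project the probe with the same key and compare by normalized
    correlation in the projected (compressive) domain."""
    y = cancelable_template(probe, seed, m)
    score = y @ template / (np.linalg.norm(y) * np.linalg.norm(template))
    return score >= threshold, score
```

A template projected under a different (revoked) seed is essentially uncorrelated with the probe's projection, which is what makes re-enrollment with a fresh key meaningful.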
To evaluate the proposed cancelable ECG authentication, we used ECG data from 147 subjects across three public ECG data sets (ECG-ID, MIT-BIH Normal/Arrhythmia). The proposed method is practically cancelable, satisfying all cancelable biometrics criteria. Moreover, with the performance-boosting tricks it achieved a 97.1% detection probability at a 1% false alarm rate (PD1) and a 1.9% equal error rate (EER), which are even better than the non-cancelable baseline with 94.4% PD1 and 3.1% EER for single-pulse ECG authentication.
Proof-of-Concept
Biometrics is a rapidly expanding area and is considered a possible solution for cases where strict
authentication requirements apply. Although the area is quite advanced in theoretical
terms, using it in practice still carries some problems. The available systems still depend
on a high level of cooperation to achieve acceptable performance, which was the backdrop
to the development of this project. By studying the state of the art, we propose the
creation of a new, less cooperative biometric system that reaches acceptable performance
levels.

The constant need for higher security standards, namely at the authentication level, motivates the study of biometrics as a possible solution. The mechanisms currently available in this area are based on the knowledge of something one knows ("password") or something one possesses ("PIN code"). However, this type of information is easily corrupted or circumvented. Biometrics is therefore seen as a more robust solution, since it guarantees that authentication is performed based on physical or behavioural measures that define something a person is or does ("who you are" or "what you do").

Since biometrics is a very promising solution for authenticating individuals, new biometric systems appear ever more frequently. These systems rely on physical or behavioural measures to enable authentication (recognition) with a considerable degree of certainty. Recognition based on human body movement (gait), facial features, or the structural patterns of the iris are some examples of the information sources on which current systems can be based. However, despite performing well as autonomous recognition agents, they still depend heavily on the cooperation they demand. With this in mind, and given everything that already exists in the field of biometric recognition, the area is taking steps towards making its methods as uncooperative as possible, thereby extending its goals beyond mere authentication in controlled environments to surveillance and control in non-cooperative settings (e.g., riots, robberies, airports).

It is in this perspective that the present project arises. Through the study of the state of the art, it aims to prove that it is possible to create a system capable of operating in less cooperative environments, able to detect and recognize any person who comes within its range.

The proposed system, PAIRS (Periocular and Iris Recognition System), performs recognition, as its name indicates, using information extracted from the iris and from the periocular region (the region surrounding the eyes). The system is built on four stages: data capture, pre-processing, feature extraction, and recognition. For the data capture stage, a high-resolution image acquisition device capable of capturing in the NIR (Near-Infra-Red) spectrum was assembled. Capturing images in this spectrum mainly favours iris recognition, since capture in the visible spectrum would be more sensitive to variations in ambient light. The pre-processing stage then incorporates all the system modules responsible for user detection, image quality assessment, and iris segmentation. The detection module triggers the whole process, as it is responsible for verifying the presence of a person in the scene. Once this presence is verified, the regions of interest corresponding to the iris and to the periocular region are located, and the quality with which they were acquired is also assessed. After these steps, the iris of the left eye is segmented and normalized. Then, based on several descriptors, the biometric information of the detected regions of interest is extracted and a biometric feature vector is created. Finally, the collected biometric data are compared with those already stored in the database, producing a ranked list of biometric similarity levels and, with it, the system's final response.

Once the system was implemented, a set of images captured with it was acquired with the participation of a group of volunteers. This image set made it possible to run performance tests, verify and tune several parameters, and optimize the feature extraction and recognition components of the system. Analysis of the results proved that the proposed system is capable of performing its functions under less cooperative conditions.
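The four-stage flow described above (capture, pre-processing, feature extraction, recognition) can be sketched as a small skeleton. All names below are illustrative placeholders, not the actual PAIRS code:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    subject_id: str
    similarity: float

def recognize(image, database, extract, similarity):
    """Final stage of the pipeline: extract a biometric feature vector from
    the probe, compare it against every enrolled template, and return
    candidates ranked by similarity level -- the system's final response."""
    # `extract` stands in for the earlier stages (pre-processing,
    # segmentation, and descriptor-based feature extraction).
    probe = extract(image)
    ranked = [Candidate(sid, similarity(probe, tmpl))
              for sid, tmpl in database.items()]
    ranked.sort(key=lambda c: c.similarity, reverse=True)
    return ranked
```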
Techniques for Ocular Biometric Recognition Under Non-ideal Conditions
The use of the ocular region as a biometric cue has gained considerable traction due to recent advances in automated iris recognition. However, a multitude of factors can negatively impact ocular recognition performance under unconstrained conditions (e.g., non-uniform illumination, occlusions, motion blur, image resolution, etc.). This dissertation develops techniques to perform iris and ocular recognition under challenging conditions. The first contribution is an image-level fusion scheme to improve iris recognition performance in low-resolution videos. Information fusion is facilitated by the use of the Principal Components Transform (PCT), thereby requiring modest computational efforts. The proposed approach provides improved recognition accuracy when low-resolution iris images are compared against high-resolution iris images. The second contribution is a study demonstrating the effectiveness of the ocular region in improving face recognition under plastic surgery. A score-level fusion approach that combines information from the face and ocular regions is proposed. The proposed approach, unlike previous methods in this application, is not learning-based, and has modest computational requirements while resulting in better recognition performance. The third contribution is a study on matching ocular regions extracted from RGB face images against those of near-infrared iris images. Face and iris images are typically acquired using sensors operating in visible and near-infrared wavelengths of light, respectively. To bridge this spectral gap, a sparse representation approach which generates a joint dictionary from corresponding pairs of face and iris images is designed. The proposed joint dictionary approach is observed to outperform classical ocular recognition techniques. In summary, the techniques presented in this dissertation can be used to improve iris and ocular recognition in practical, unconstrained environments.
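As a rough illustration of the PCT idea (one minimal reading, not the dissertation's exact scheme), image-level fusion of registered low-resolution frames can weight each frame by the leading eigenvector of the small inter-frame covariance matrix:

```python
import numpy as np

def pct_fuse(frames):
    """Fuse registered low-resolution frames by weighting each frame with
    the leading eigenvector of the inter-frame covariance matrix -- a
    minimal sketch of Principal Components Transform (PCT) fusion."""
    X = np.stack([f.ravel() for f in frames])      # (n_frames, n_pixels)
    Xc = X - X.mean(axis=0)
    # The covariance is only n_frames x n_frames, so the eigen-decomposition
    # is cheap -- consistent with the "modest computational efforts" above.
    C = Xc @ Xc.T / X.shape[1]
    w = np.linalg.eigh(C)[1][:, -1]                # leading eigenvector
    weights = np.abs(w) / np.abs(w).sum()          # normalized frame weights
    return (weights @ X).reshape(frames[0].shape)
```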
Investigation on advanced image search techniques
Content-based image search, which retrieves images whose visual contents (such as color, texture, and shape) are similar to those of a query image, is an active research area due to its broad applications. Color, for example, provides powerful information for image search and classification. This dissertation investigates advanced image search techniques and presents new color descriptors for image search and classification, as well as robust image enhancement and segmentation methods for iris recognition.
First, several new color descriptors have been developed for color image search. Specifically, a new oRGB-SIFT descriptor, which integrates the oRGB color space and the Scale-Invariant Feature Transform (SIFT), is proposed for image search and classification. The oRGB-SIFT descriptor is further integrated with other color SIFT features to produce the novel Color SIFT Fusion (CSF), the Color Grayscale SIFT Fusion (CGSF), and the CGSF+PHOG descriptors for image category search with applications to biometrics. Image classification is implemented using a novel EFM-KNN classifier, which combines the Enhanced Fisher Model (EFM) and the K Nearest Neighbor (KNN) decision rule. Experimental results on four large scale, grand challenge datasets have shown that the proposed oRGB-SIFT descriptor improves recognition performance upon other color SIFT descriptors, and the CSF, the CGSF, and the CGSF+PHOG descriptors perform better than the other color SIFT descriptors. The fusion of both Color SIFT descriptors (CSF) and Color Grayscale SIFT descriptor (CGSF) shows significant improvement in the classification performance, which indicates that various color-SIFT descriptors and grayscale-SIFT descriptor are not redundant for image search.
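As a rough sketch of the fuse-then-classify idea (the oRGB conversion and the Enhanced Fisher Model projection of the EFM-KNN classifier are omitted; all parameters here are illustrative):

```python
import numpy as np

def fuse_descriptors(per_channel_descs):
    """Concatenate L2-normalized per-channel descriptors (e.g. SIFT
    histograms computed on each color component) into one fused vector,
    in the spirit of the Color SIFT Fusion (CSF) idea."""
    return np.concatenate([d / (np.linalg.norm(d) + 1e-12)
                           for d in per_channel_descs])

def knn_classify(query, gallery, labels, k=1):
    """Plain K-nearest-neighbour decision rule; the EFM dimensionality
    reduction step is deliberately left out of this sketch."""
    dists = np.linalg.norm(gallery - query, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique([labels[i] for i in nearest],
                               return_counts=True)
    return values[np.argmax(counts)]
```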
Second, four novel color Local Binary Pattern (LBP) descriptors are presented for scene image and image texture classification. Specifically, the oRGB-LBP descriptor is derived in the oRGB color space. The other three color LBP descriptors, namely, the Color LBP Fusion (CLF), the Color Grayscale LBP Fusion (CGLF), and the CGLF+PHOG descriptors, are obtained by integrating the oRGB-LBP descriptor with some additional image features. Experimental results on three large scale, grand challenge datasets have shown that the proposed descriptors can improve scene image and image texture classification performance.
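The LBP operator underlying these descriptors is simple enough to sketch. A minimal 3x3 variant (the descriptors above add color-space and PHOG integration on top of this):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern: each pixel becomes an 8-bit code
    recording which of its neighbours are at least as bright as it is.
    Running this per colour channel (e.g. on each oRGB component) and
    histogramming the codes yields a colour-LBP descriptor."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for bit, (dy, dx) in enumerate(offsets):
        out |= (img[dy:dy + h - 2, dx:dx + w - 2] >= centre).astype(np.uint8) << bit
    return out
```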
Finally, a new iris recognition method based on a robust iris segmentation approach is presented for improving iris recognition performance. The proposed robust iris segmentation approach applies power-law transformations for more accurate detection of the pupil region, which significantly reduces the candidate limbic boundary search space, increasing detection accuracy and efficiency. Because the limbic circle, whose center lies close to the pupil center, is selectively detected, the eyelid detection approach leads to improved iris recognition performance. Experiments using the Iris Challenge Evaluation (ICE) database show the effectiveness of the proposed method.
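The power-law transformation used here is the classic gamma mapping s = c * r^gamma. A minimal sketch on an 8-bit image (the choice of gamma and the normalization are assumptions, not the dissertation's exact parameters):

```python
import numpy as np

def power_law(img, gamma, c=1.0):
    """Power-law (gamma) transform s = c * r**gamma on an 8-bit image.
    With gamma > 1, mid-tones are darkened, so the already-dark pupil
    separates more cleanly from the iris -- shrinking the candidate
    search space for the limbic boundary."""
    r = img.astype(float) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)
```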
A multi-biometric iris recognition system based on a deep learning approach
Multimodal biometric systems have been widely
applied in many real-world applications due to their ability to
deal with a number of significant limitations of unimodal
biometric systems, including sensitivity to noise, population
coverage, intra-class variability, non-universality, and
vulnerability to spoofing. In this paper, an efficient and
real-time multimodal biometric system is proposed based
on building deep learning representations for images of
both the right and left irises of a person, and fusing the
results obtained using a ranking-level fusion method. The
trained deep learning system proposed is called IrisConvNet
whose architecture is based on a combination of Convolutional
Neural Network (CNN) and Softmax classifier to
extract discriminative features from the input image without
any domain knowledge where the input image represents
the localized iris region and then classify it into one of N
classes. In this work, a discriminative CNN training scheme
based on a combination of back-propagation algorithm and
mini-batch AdaGrad optimization method is proposed for
weights updating and learning rate adaptation, respectively.
In addition, other training strategies (e.g., dropout method,
data augmentation) are also proposed in order to evaluate
different CNN architectures. The performance of the proposed
system is tested on three public datasets collected
under different conditions: SDUMLA-HMT, CASIA-Iris-
V3 Interval and IITD iris databases. The results obtained
from the proposed system outperform other state-of-the-art
approaches (e.g., Wavelet transform, Scattering transform,
Local Binary Pattern and PCA) by achieving a Rank-1 identification rate of 100% on all the employed databases
and a recognition time of less than one second per person.
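The mini-batch AdaGrad scheme mentioned above adapts a per-parameter learning rate from the history of squared gradients. A minimal sketch of one update (illustrative, not IrisConvNet's actual code):

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    """One AdaGrad update: `cache` accumulates squared gradients, so
    parameters that have already received large updates get a smaller
    effective learning rate. Paired with back-propagated mini-batch
    gradients, this is the learning-rate adaptation described above."""
    cache += grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy usage: minimizing f(w) = w**2, whose gradient is 2w.
w, cache = np.array([1.0]), np.zeros(1)
for _ in range(100):
    w, cache = adagrad_step(w, 2 * w, cache, lr=0.1)
```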
Robust gait recognition under variable covariate conditions
Gait is a weak biometric when compared to face, fingerprint or iris because it can be easily
affected by various conditions. These are known as the covariate conditions and include clothing,
carrying, speed, shoes and view among others. In the presence of variable covariate conditions
gait recognition is a hard problem that remains unsolved, with no working system reported.
In this thesis, a novel gait representation, the Gait Flow Image (GFI), is proposed to extract
more discriminative information from a gait sequence. GFI extracts the relative motion of body
parts in different directions in separate motion descriptors. Compared to the existing model-free
gait representations, GFI is more discriminative and robust to changes in covariate conditions.
In this thesis, gait recognition approaches are evaluated without the assumption of cooperative
subjects, i.e. both the gallery and the probe sets consist of gait sequences under different
and unknown covariate conditions. The results indicate that the performance of the existing approaches
drops drastically under this more realistic set-up. It is argued that selecting the gait
features which are invariant to changes in covariate conditions is the key to developing a gait
recognition system without subject cooperation. To this end, the Gait Entropy Image (GEnI) is
proposed to perform automatic feature selection on each pair of gallery and probe gait sequences.
Moreover, an Adaptive Component and Discriminant Analysis is formulated which seamlessly
integrates the feature selection method with subspace analysis for fast and robust recognition.
Among various factors that affect the performance of gait recognition, change in viewpoint
poses the biggest problem and is treated separately. A novel approach to address this problem is
proposed in this thesis by using Gait Flow Image in a cross view gait recognition framework with
the view angle of a probe gait sequence unknown. A Gaussian Process classification technique
is formulated to estimate the view angle of each probe gait sequence. To measure the similarity
of gait sequences across view angles, the correlation of gait sequences from different views is
modelled using Canonical Correlation Analysis and the correlation strength is used as a similarity
measure. This differs from existing approaches, which reconstruct gait features in different views
through 2D view transformation or 3D calibration. Without explicit reconstruction, the proposed
method can cope with feature mis-match across views and is more robust against feature noise.
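The Gait Entropy Image described above can be computed directly as the per-pixel Shannon entropy of the binary silhouette across a gait cycle. A minimal sketch (silhouette alignment and size normalization are omitted):

```python
import numpy as np

def gait_entropy_image(silhouettes):
    """Gait Entropy Image (GEnI): the per-pixel Shannon entropy of the
    binary silhouette value over a gait cycle. Static regions (always
    background or always body) score ~0; dynamic regions (swinging limbs)
    score near 1 -- the parts least affected by carrying and clothing
    covariates."""
    p = np.mean(silhouettes, axis=0)          # P(pixel is foreground)
    p = np.clip(p, 1e-12, 1 - 1e-12)          # avoid log(0)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)
```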