AFFECT-PRESERVING VISUAL PRIVACY PROTECTION
The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving the privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding.
The Intellectual Merits of the dissertation include a novel framework for visual privacy protection that manipulates the facial images and body shapes of individuals, and which: (1) conceals the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the strength of the privacy protection.
The Broader Impacts of the dissertation concern the significance of privacy protection for visual data and the inadequacy of current privacy-enhancing technologies in preserving the affective and behavioral attributes of visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts at achieving both goals simultaneously.
A False Sense of Privacy: Towards a Reliable Evaluation Methodology for the Anonymization of Biometric Data
Biometric data contains distinctive human traits such as facial features or gait patterns. Biometric data permits individuation so exact that it is used effectively in identification and authentication systems, but for the same reason privacy protection becomes indispensable. Privacy protection is commonly provided through anonymization: anonymization techniques protect sensitive biometric data by obfuscating or removing the information that links records to the individuals who generated them, aiming for high levels of anonymity. However, our ability to develop effective anonymization depends, in equal parts, on the effectiveness of the methods used to evaluate anonymization performance. In this paper, we assess the state-of-the-art methods used to evaluate the performance of anonymization techniques for facial images and gait patterns. We demonstrate that these evaluation methods have serious and frequent shortcomings. In particular, we find that their underlying assumptions are unwarranted: state-of-the-art methods generally assume a difficult recognition scenario and thus a weak adversary, which causes the evaluations to grossly overestimate anonymization performance. We therefore propose a strong adversary that is aware of the anonymization in place, yielding an appropriate measure of anonymization performance. We further improve the selection process for the evaluation dataset, reducing the number of identities it contains while ensuring that these identities remain easily distinguishable from one another. Our evaluation methodology surpasses the state of the art because it measures worst-case performance and thus delivers a highly reliable evaluation of biometric anonymization techniques.
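The gap between the two adversary models can be shown with a toy experiment. Everything below (the nearest-centroid recognizer, the synthetic features, and the deterministic "anonymization") is an illustrative assumption, not the paper's actual setup: a recognizer trained on clean data fails against the transformed features, while an anonymization-aware adversary trained on the protected data itself re-identifies almost perfectly.

```python
# Toy illustration of the weak- vs. strong-adversary evaluation gap.
import numpy as np

rng = np.random.default_rng(0)

def anonymize(x):
    """Toy anonymization: a fixed linear distortion of the feature vector.
    Deterministic, hence learnable by an anonymization-aware adversary."""
    return -x + 0.1

def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """Identification accuracy of a nearest-centroid recognizer."""
    labels = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in labels])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return float((labels[d.argmin(axis=1)] == test_y).mean())

# Synthetic "biometric" features: 5 identities, well-separated clusters.
centers = rng.normal(size=(5, 8)) * 5.0
X = np.concatenate([c + rng.normal(scale=0.3, size=(20, 8)) for c in centers])
y = np.repeat(np.arange(5), 20)
X_anon = anonymize(X)

# Weak adversary: trained on clean data, then confronted with anonymized data.
weak_acc = nearest_centroid_accuracy(X, y, X_anon, y)
# Strong adversary: trained on the anonymized data itself.
strong_acc = nearest_centroid_accuracy(X_anon, y, X_anon, y)
```

The strong adversary's near-perfect accuracy is the worst-case measure the paper argues for, while the weak adversary's low accuracy is the "false sense of privacy".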
Soft Biometric Analysis: Multi-Person and Real-Time Pedestrian Attribute Recognition in Crowded Urban Environments
Traditionally, recognition systems were based only on human hard biometrics. However, ubiquitous CCTV cameras have raised the desire to analyze human biometrics from far distances, without people's attendance in the acquisition process. High-resolution face close-shots are rarely available at far distances, such that face-based systems cannot provide reliable results in surveillance applications. Human soft biometrics, such as body and clothing attributes, are believed to be more effective in analyzing human data collected by security cameras.
This thesis contributes to human soft biometric analysis in uncontrolled environments and mainly focuses on two tasks: Pedestrian Attribute Recognition (PAR) and person re-identification (re-id). We first review the literature of both tasks and highlight the history of advancements, recent developments, and the existing benchmarks. PAR and person re-id difficulties are due to significant distances between intra-class samples, which originate from variations in several factors such as body pose, illumination, background, occlusion, and data resolution. Recent state-of-the-art approaches present end-to-end models that can extract discriminative and comprehensive feature representations from people. The correlation between different regions of the body, and dealing with limited learning data, are also the objectives of many recent works. Moreover, class imbalance and correlation between human attributes are specific challenges associated with the PAR problem.
We collect a large surveillance dataset to train a novel gender recognition model suitable for uncontrolled environments. We propose a deep residual network that extracts several pose-wise patches from samples and obtains a comprehensive feature representation. In the next step, we develop a model for recognizing multiple attributes at once. Considering the correlation between human semantic attributes and the class imbalance, we use a multi-task model and a weighted loss function, respectively. We also propose a multiplication layer on top of the backbone feature extraction layers to exclude background features from the final representation of samples and draw the model's attention to the foreground area.
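As a concrete illustration of the weighted-loss idea for imbalanced attributes, the sketch below uses inverse-frequency weighting, a common choice that is assumed here and is not necessarily the exact loss used in the thesis:

```python
# Per-attribute weighted binary cross-entropy for imbalanced attributes.
import numpy as np

def weighted_bce(y_true, y_pred, pos_freq, eps=1e-7):
    """y_true, y_pred: (batch, n_attributes); pos_freq: (n_attributes,)
    positive frequency of each attribute. Rare outcomes get larger weights."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    w_pos = 1.0 / (pos_freq + eps)          # up-weight rare positives
    w_neg = 1.0 / (1.0 - pos_freq + eps)    # up-weight rare negatives
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1 - y_true) * np.log(1 - y_pred))
    return float(loss.mean())

# Two attributes: "wears hat" is rare (5% positive), "adult" is balanced (50%).
pos_freq = np.array([0.05, 0.5])

# Equally wrong predictions, but on different attributes:
miss_rare   = weighted_bce(np.array([[1., 1.]]), np.array([[0.3, 0.99]]), pos_freq)
miss_common = weighted_bce(np.array([[1., 1.]]), np.array([[0.99, 0.3]]), pos_freq)
# Mis-predicting the rare attribute is penalised far more, steering training
# toward the minority class.
```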
We address the problem of person re-id by implicitly defining the receptive fields of deep learning classification frameworks. The receptive fields of deep learning models determine the most significant regions of the input data for providing correct decisions. Therefore, we synthesize a set of learning data in which the destructive regions (e.g., background) in each pair of instances are interchanged. A segmentation module determines the destructive and useful regions in each sample, and the label of each synthesized instance is inherited from the sample that supplied the useful regions in the synthesized image. The synthesized learning data are then used in the learning phase and help the model rapidly learn that identity and background regions are not correlated. Meanwhile, the proposed solution can be seen as a data augmentation approach that fully preserves the label information and is compatible with other data augmentation techniques.
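The interchange of destructive and useful regions can be sketched as follows; the toy arrays and names are illustrative, and a real system would obtain the masks from the segmentation module:

```python
# Background-interchange augmentation: swap backgrounds between two samples
# while each synthesized image keeps the label of its foreground person.
import numpy as np

def swap_background(img_a, img_b, mask_a, mask_b):
    """mask_* are 1 on the person (useful region), 0 on background.
    Returns two synthesized images; labels follow the foregrounds."""
    syn_a = np.where(mask_a[..., None] == 1, img_a, img_b)  # person A, scene B
    syn_b = np.where(mask_b[..., None] == 1, img_b, img_a)  # person B, scene A
    return syn_a, syn_b

# 4x4 RGB toy images: A is all 1s, B is all 2s; each person fills the left half.
img_a = np.ones((4, 4, 3))
img_b = np.full((4, 4, 3), 2.0)
mask = np.zeros((4, 4), dtype=int)
mask[:, :2] = 1

syn_a, syn_b = swap_background(img_a, img_b, mask, mask)
# syn_a keeps A's foreground pixels but inherits B's background pixels,
# so a model can no longer use the background to predict the identity label.
```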
When re-id methods are learned in scenarios where the target person appears with identical garments in the gallery, the visual appearance of clothes is given the most importance in the final feature representation. Cloth-based representations are not reliable in long-term re-id settings, as people may change their clothes. Therefore, solutions that ignore clothing cues and focus on identity-relevant features are in demand. We transform the original data such that the identity-relevant information of people (e.g., face and body shape) is removed, while the identity-unrelated cues (i.e., color and texture of clothes) remain unchanged. A model learned on the synthesized dataset predicts the identity-unrelated (short-term) cues. We then train a second model, coupled with the first, that learns the embeddings of the original data such that the similarity between the embeddings of the original and synthesized data is minimized. This way, the second model predicts based on the identity-related (long-term) representation of people.
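The coupling objective can be sketched as minimizing the similarity between the two models' embeddings; the cosine form and the toy vectors below are illustrative assumptions, not the thesis's exact formulation:

```python
# Decoupling objective: push the second model's embedding of the original image
# away from the first model's clothes-only embedding, so that the remaining
# signal is identity-related.
import numpy as np

def cosine_similarity(u, v, eps=1e-9):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def decoupling_loss(emb_original, emb_clothes_only):
    """Minimised when the two embeddings are dissimilar (orthogonal/opposed)."""
    return cosine_similarity(emb_original, emb_clothes_only)

e_clothes   = np.array([1.0, 0.0, 0.0])  # short-term (clothing) direction
e_entangled = np.array([0.9, 0.1, 0.0])  # still dominated by clothing cues
e_decoupled = np.array([0.0, 0.7, 0.7])  # identity-related direction

# Training lowers the loss by moving the second model's embeddings from the
# entangled state toward the decoupled one.
```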
To evaluate the performance of the proposed models, we use PAR and person re-id datasets, namely BIODI, PETA, RAP, Market-1501, MSMT-V2, PRCC, LTCC, and MIT, and compare our experimental results with state-of-the-art methods in the field.
In conclusion, the data collected from surveillance cameras have low resolution, such that the extraction of hard biometric features is not possible and face-based approaches produce poor results. In contrast, soft biometrics are robust to variations in data quality. We therefore propose approaches for both PAR and person re-id that learn discriminative features from each instance, and we evaluate the proposed solutions on several publicly available benchmarks.
This thesis was prepared at the University of Beira Interior, IT (Instituto de Telecomunicações), Soft Computing and Image Analysis Laboratory (SOCIA Lab), Covilhã Delegation, and was submitted to the University of Beira Interior for defense in a public examination session.
Biometric Systems
Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.
Advancing the technology of sclera recognition
PhD Thesis
Emerging biometric traits have been suggested recently to overcome some challenges and issues related to utilising traditional human biometric traits such as the face, iris, and fingerprint. In particular, iris recognition has achieved high accuracy rates under the Near-InfraRed (NIR) spectrum and is employed in many applications for security and identification purposes. However, as modern imaging devices operate in the visible spectrum, capturing colour images, iris recognition has faced challenges when applied to colour images, especially eye images with dark pigmentation. Other issues with iris recognition under the NIR spectrum are the constraints on the capturing process, resulting in failure-to-enrol and degradation in system accuracy and performance. As a result, the research community has investigated using other traits to support the iris biometric in the visible spectrum, such as the sclera.
The sclera, commonly known as the white part of the eye, includes a complex network of blood vessels and veins surrounding the eye. The vascular pattern within the sclera has different formations and layers, providing powerful features for human identification. In addition, these blood vessels can be acquired in the visible spectrum and thus can be captured using ubiquitous camera-based devices. As a consequence, recent research has focused on developing sclera recognition. However, sclera recognition, like any biometric system, has issues and challenges which need to be addressed. These are mainly related to sclera segmentation, blood vessel enhancement, feature extraction, template registration, matching, and decision methods. In addition, employing the sclera biometric in the wild, where relaxed imaging constraints are utilised, has introduced further challenges such as illumination variation, specular reflections, non-cooperative user capturing, sclera regions blocked by glasses and eyelashes, variation in capturing distance, multiple gaze directions, and eye rotation.
The aim of this thesis is to address such sclera biometric challenges and highlight the potential of this trait. This may also inspire further research on tackling sclera recognition system issues. To overcome the above-mentioned issues and challenges, three major contributions are made, which can be summarised as: 1) designing an efficient sclera recognition system under constrained imaging conditions, which includes new sclera segmentation, blood vessel enhancement, vascular binary network mapping and feature extraction, and template registration techniques; 2) introducing a novel sclera recognition system under relaxed imaging constraints, which exploits novel sclera segmentation, sclera template rotation alignment and distance scaling methods, and complex sclera features; 3) presenting solutions to tackle issues related to applying sclera recognition in a real-time application, such as eye localisation, eye corner and gaze detection, together with a novel image quality metric.
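For intuition on the first pipeline stage, a naive colour-threshold baseline for sclera segmentation might look as follows; this toy rule (bright, low-saturation pixels) is purely illustrative and is not the segmentation method proposed in the thesis:

```python
# Naive sclera segmentation baseline: the sclera is roughly white, i.e. bright
# and nearly colourless (small spread between RGB channels).
import numpy as np

def sclera_mask(rgb, brightness_min=0.7, saturation_max=0.25):
    """rgb: float array (H, W, 3) with values in [0, 1].
    Returns a boolean mask of candidate sclera pixels."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    # HSV-style saturation: channel spread relative to brightness.
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    return (mx >= brightness_min) & (saturation <= saturation_max)

# Toy 1x3 "image": a white sclera pixel, a brown iris pixel, a skin-tone pixel.
eye = np.array([[[0.95, 0.93, 0.92],
                 [0.35, 0.20, 0.10],
                 [0.85, 0.60, 0.45]]])
mask = sclera_mask(eye)  # only the first pixel qualifies
```

A real system must go far beyond this, handling the reflections, occlusions, and gaze variations listed above, which is exactly what the thesis's segmentation contributions target.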
The evaluation of the proposed contributions is carried out using five databases with different properties, representing various challenges and issues. These databases are UBIRIS.v1, UBIRIS.v2, UTIRIS, MICHE, and an in-house database. The results, in terms of segmentation accuracy, Equal Error Rate (EER), and processing time, show significant improvement in the proposed systems compared to state-of-the-art methods.
Ministry of Higher Education and Scientific Research in Iraq and the Iraqi Cultural Attaché in London
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits: The Development of an Automated Computer System for the Identification of Humans by Integrating Facial and Iris Features Using Localization, Feature Extraction, Handcrafted and Deep Learning Techniques
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on the combination of the face and the left and right irises in a unified hybrid multimodal biometric identification system using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework, the Multimodal Deep Face Recognition (MDFR) framework, is proposed based on merging the advantages of local handcrafted feature descriptors with deep learning approaches, to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed, whose architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
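A minimal sketch of score-level fusion of the three matchers; the min-max normalisation, the weights, and the toy scores below are illustrative assumptions, not the thesis's exact scheme:

```python
# Score-level fusion for a face + left-iris + right-iris system: normalise each
# matcher's similarity scores to [0, 1], then combine with a weighted sum.
import numpy as np

def min_max_normalize(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(score_lists, weights):
    """score_lists: one similarity-score vector per matcher, over the same
    gallery; higher fused score means a better candidate identity."""
    norm = [w * min_max_normalize(s) for s, w in zip(score_lists, weights)]
    return np.sum(norm, axis=0)

# Similarity of one probe against a 4-identity gallery, per matcher.
face_scores = [0.90, 0.40, 0.30, 0.20]
left_iris   = [0.60, 0.80, 0.10, 0.30]
right_iris  = [0.70, 0.50, 0.20, 0.10]

fused = fuse_scores([face_scores, left_iris, right_iris], [0.5, 0.25, 0.25])
best = int(np.argmax(fused))  # identity 0 wins after fusion
```

Rank-level fusion works analogously on the matchers' candidate rankings instead of their raw scores.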
Finally, the performance of the unimodal and multimodal systems has been evaluated through extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and on the SDUMLA-HMT multimodal dataset. The results demonstrate the superiority of the proposed systems over previous works, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity.
Higher Committee for Education Development in Iraq
A novel face recognition system in unconstrained environments using a convolutional neural network
The performance of most face recognition systems (FRS) in unconstrained environments is widely noted to be sub-optimal. One reason for this poor performance may be the lack of highly effective image pre-processing approaches, which are typically required before the feature extraction and classification stages. Furthermore, only minimal face recognition issues are typically considered in most FRS, limiting their wide applicability in real-life scenarios. It is thus envisaged that developing more effective pre-processing techniques, in addition to selecting the correct features for classification, will significantly improve the performance of FRS.
The thesis investigates different research works on FRS, their techniques, and their challenges in unconstrained environments. The thesis proposes a novel image enhancement technique as a pre-processing approach for FRS. The proposed enhancement technique improves the overall FRS model, resulting in increased recognition performance. A selection of novel hybrid features, extracted from the enhanced facial images within the dataset, is also presented to improve recognition performance.
The thesis proposes a novel evaluation function as a component within the image enhancement technique to improve face recognition in unconstrained environments. Also, a defined scale mechanism was designed within the evaluation function to evaluate the enhanced images such that extreme values depict too dark or too bright images. The proposed algorithm enables the system to automatically select the most appropriate enhanced face image without human intervention. Evaluation of the proposed algorithm was done using standard parameters, where it is demonstrated to outperform existing image enhancement techniques both quantitatively and qualitatively.
The thesis confirms the effectiveness of the proposed image enhancement technique towards face recognition in unconstrained environments using the convolutional neural network. Furthermore, the thesis presents a selection of hybrid features from the enhanced image that results in effective image classification. Different face datasets were selected where each face image was enhanced using the proposed and existing image enhancement technique prior to the selection of features and classification task. Experiments on the different face datasets showed increased and better performance using the proposed approach.
The thesis shows that using an effective image enhancement technique as a pre-processing approach can improve the performance of FRS compared to using unenhanced face images. Selecting the right features to extract from the enhanced face dataset has also been shown to be an important factor in the improvement of FRS. The thesis made use of standard face datasets to confirm the effectiveness of the proposed method. On the LFW face dataset, an improved recognition rate was obtained when considering all the facial conditions within the face dataset.
Thesis (PhD)--University of Pretoria, 2018. CSIR-DST Inter-programme bursary. Electrical, Electronic and Computer Engineering. PhD. Unrestricted.
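In the spirit of the automatic selection step described above, a toy evaluation function might score each candidate enhancement by how far its mean intensity sits from mid-grey, so that extreme scores flag too-dark or too-bright results; the concrete scoring rule here is an assumption for illustration only, not the thesis's evaluation function:

```python
# Automatic selection among candidate enhanced images, with no human input:
# score each candidate's exposure and keep the least extreme one.
import numpy as np

def exposure_score(img):
    """img: grayscale array with values in [0, 1]. Lower is better;
    scores near 0.5 indicate a too-dark or too-bright enhancement."""
    return abs(float(img.mean()) - 0.5)

def select_enhanced(candidates):
    """Return the index of the enhanced image with the best (lowest) score."""
    scores = [exposure_score(c) for c in candidates]
    return int(np.argmin(scores))

too_dark   = np.full((8, 8), 0.05)
balanced   = np.full((8, 8), 0.45)
too_bright = np.full((8, 8), 0.95)

best_idx = select_enhanced([too_dark, balanced, too_bright])  # picks index 1
```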