Infrared face recognition: a comprehensive review of methodologies and databases
Automatic face recognition is an area with immense practical potential which
includes a wide range of commercial and law enforcement applications. Hence it
is unsurprising that it continues to be one of the most active research areas
of computer vision. Even after over three decades of intense research, the
state-of-the-art in face recognition continues to improve, benefitting from
advances in a range of different research fields such as image processing,
pattern recognition, computer graphics, and physiology. Systems based on
visible spectrum images, the most researched face recognition modality, have
reached a significant level of maturity with some practical success. However,
they continue to face challenges in the presence of illumination, pose and
expression changes, as well as facial disguises, all of which can significantly
decrease recognition accuracy. Amongst various approaches which have been
proposed in an attempt to overcome these limitations, the use of infrared (IR)
imaging has emerged as a particularly promising research direction. This paper
presents a comprehensive and timely review of the literature on this subject.
Our key contributions are: (i) a summary of the inherent properties of infrared
imaging which makes this modality promising in the context of face recognition,
(ii) a systematic review of the most influential approaches, with a focus on
emerging common trends as well as key differences between alternative
methodologies, (iii) a description of the main databases of infrared facial
images available to the researcher, and lastly (iv) a discussion of the most
promising avenues for future research.
Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160
Deep Perceptual Mapping for Thermal to Visible Face Recognition
Cross-modal face matching between the thermal and visible spectrum is a much
desired capability for night-time surveillance and security applications. Due
to a very large modality gap, thermal-to-visible face recognition is one of the
most challenging face matching problems. In this paper, we present an approach
that bridges this modality gap by a significant margin. Our approach captures
the highly non-linear relationship between the two modalities by using a deep
neural network. Our model attempts to learn a non-linear mapping from the
visible to the thermal spectrum while preserving the identity information. We
show a substantive performance improvement on a difficult thermal-visible face
dataset. The presented approach improves the state of the art by more than 10%
in terms of Rank-1 identification and bridges the drop in performance due to
the modality gap by more than 40%.
Comment: BMVC 2015 (oral)
An Extensive Review on Spectral Imaging in Biometric Systems: Challenges and Advancements
Spectral imaging has recently gained traction for face recognition in
biometric systems. We investigate the merits of spectral imaging for face
recognition and the current challenges that hamper the widespread deployment of
spectral sensors for face recognition. The reliability of conventional face
recognition systems operating in the visible range is compromised by
illumination changes, pose variations and spoof attacks. Recent works have
reaped the benefits of spectral imaging to counter these limitations in
surveillance activities (defence, airport security checks, etc.). However, the
implementation of this technology for biometrics is still in its infancy for
multiple reasons. We present an overview of the existing work in the domain
of spectral imaging for face recognition, the different types of modalities and
their assessment, the availability of public databases for the sake of
reproducible research and the evaluation of algorithms, and recent advancements
in the field, such as the use of deep learning-based methods for recognizing
faces from spectral images.
Unobtrusive and pervasive video-based eye-gaze tracking
Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim to identify different research avenues that are being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
Prediction model of alcohol intoxication from facial temperature dynamics based on K-means clustering driven by evolutionary computing
Alcohol intoxication is a significant phenomenon affecting many social areas, including work procedures and car driving. Alcohol causes side effects including changes in the facial thermal distribution, which may enable the contactless identification and classification of alcohol-intoxicated people. We adopted a multiregional segmentation procedure to identify and classify symmetrical facial features that reliably reflect the facial-temperature variations while subjects are drinking alcohol. Such a model can objectively track alcohol intoxication in the form of a facial temperature map. In this paper, we propose a segmentation model based on a clustering algorithm driven by a modified version of Artificial Bee Colony (ABC) evolutionary optimization, with the goal of extracting facial temperature features from infrared (IR) images. This model allows for the definition of symmetric clusters, identifying facial temperature structures that correspond with intoxication. The ABC algorithm serves as an optimization process, providing the clustering method with a cluster distribution that best approximates the individual areas linked with gradual alcohol intoxication. We analyzed a set of twenty volunteers, whose IR images were taken to capture the process of alcohol intoxication. The proposed method performs multiregional segmentation, classifying the individual spatial temperature areas into segmentation classes. Besides single-IR-image modelling, the method allows for dynamic tracking of the alcohol-temperature features over the course of intoxication, from the sober state up to the maximum observed intoxication level.
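The clustering step of such a pipeline can be illustrated with a small sketch. The snippet below uses plain Lloyd-style k-means on a synthetic one-dimensional "thermogram" as a stand-in for the paper's ABC-driven optimization; the temperature values, class count, and region labels are all hypothetical and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_1d(values, k=3, iters=20):
    """Cluster scalar temperatures into k segmentation classes.

    Plain k-means here; the paper instead drives the cluster centres
    with an Artificial Bee Colony evolutionary optimiser.
    """
    centres = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel temperature to its nearest centre.
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        # Move each centre to the mean of its assigned temperatures.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = values[labels == j].mean()
    return labels, np.sort(centres)

# Synthetic face thermogram (degrees C): cool background, warm skin,
# and the warmest facial regions.
temps = np.concatenate([
    rng.normal(24.0, 0.3, 200),   # background
    rng.normal(33.0, 0.4, 200),   # skin
    rng.normal(36.5, 0.3, 100),   # warmest regions
])
labels, centres = kmeans_1d(temps, k=3)
print("class centres (C):", np.round(centres, 1))
```

In the full method, each segmentation class corresponds to a spatial temperature area of the face, and tracking how pixels migrate between classes over time gives the dynamic intoxication signal described above.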
Underwater Fish Detection with Weak Multi-Domain Supervision
Given a sufficiently large training dataset, it is relatively easy to train a
modern convolutional neural network (CNN) as a required image classifier.
However, for the task of fish classification and/or fish detection, if a CNN
was trained to detect or classify particular fish species in particular
background habitats, the same CNN exhibits much lower accuracy when applied to
new/unseen fish species and/or fish habitats. Therefore, in practice, the CNN
needs to be continuously fine-tuned to improve its classification accuracy to
handle new project-specific fish species or habitats. In this work we present a
labelling-efficient method of training a CNN-based fish detector (the Xception
CNN was used as the base) on a relatively small number (4,000) of
project-domain underwater fish/no-fish images from 20 different habitats.
Additionally, 17,000 known-negative (that is, fish-free) general-domain
(VOC2012) above-water images were used. Two publicly available fish-domain
datasets supplied an additional 27,000 above-water and underwater positive
(fish) images. By using this multi-domain collection of images, the trained
Xception-based binary (fish/no-fish) classifier achieved 0.17% false positives
and 0.61% false negatives on the project's 20,000 negative and 16,000 positive
holdout test images, respectively. The area under the ROC curve (AUC) was
99.94%.
Comment: Published in the 2019 International Joint Conference on Neural
Networks (IJCNN-2019), Budapest, Hungary, July 14-19, 2019,
https://www.ijcnn.org/ , https://ieeexplore.ieee.org/document/885190
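The reported metrics (false-positive rate, false-negative rate, ROC AUC) are straightforward to compute from a binary classifier's scores. The sketch below evaluates them on toy fish/no-fish probabilities; the Xception classifier itself is assumed, and the score distributions here are invented for illustration, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(2)

def rates(scores, labels, threshold=0.5):
    """False-positive and false-negative rates at a fixed threshold."""
    pred = scores >= threshold
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    return fp / np.sum(labels == 0), fn / np.sum(labels == 1)

def roc_auc(scores, labels):
    """AUC as the probability that a random positive outscores a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Toy holdout set: 500 no-fish and 400 fish images with synthetic
# classifier probabilities (negatives skew low, positives skew high).
labels = np.array([0] * 500 + [1] * 400)
scores = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 400)])
fpr, fnr = rates(scores, labels)
auc = roc_auc(scores, labels)
print(f"FPR={fpr:.3f}  FNR={fnr:.3f}  AUC={auc:.3f}")
```

The pairwise-comparison form of AUC used here is exact but O(P·N); for holdout sets the size of the paper's (36,000 images) a rank-based or trapezoidal computation would be the practical choice.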
Multi-Modality Human Action Recognition
Human action recognition is very useful in many applications in various areas, e.g. video surveillance, HCI (human-computer interaction), video retrieval, gaming and security. Recently, human action recognition has become an active research topic in computer vision and pattern recognition, and a number of action recognition approaches have been proposed. However, most of these approaches are designed for RGB image sequences, where the action data was collected by an RGB/intensity camera. Thus the recognition performance is usually affected by the occlusion, background, and lighting conditions of the image sequences. If more information can be provided along with the image sequences, so that data sources other than the RGB video can be utilized, human actions could be better represented and recognized by the designed computer vision system.

In this dissertation, multi-modality human action recognition is studied. On one hand, we introduce the study of multi-spectral action recognition, which involves information from spectra beyond the visible, e.g. infrared and near-infrared. Action recognition in individual spectra is explored and new methods are proposed; cross-spectral action recognition is then also investigated and novel approaches are proposed in our work. On the other hand, depth imaging technology has made significant progress recently, so that depth information can be captured simultaneously with RGB video, and depth-based human action recognition is therefore also investigated. I first propose a method combining different types of depth data to recognize human actions. Then a thorough evaluation is conducted on spatiotemporal interest point (STIP) based features for depth-based action recognition. Finally, I advocate the study of fusing different features for depth-based action analysis. Moreover, human depression recognition is studied by combining a facial appearance model with a facial dynamic model.