WSD: Wild Selfie Dataset for Face Recognition in Selfie Images
With the rise of handy smartphones in recent years, capturing selfie images has become a widespread trend, so efficient approaches are needed for recognising faces in selfie images. Due to the short distance between the camera and the face in selfie images, and the different visual effects offered by selfie apps, face recognition is more challenging for existing approaches. A dedicated dataset is needed to encourage research on recognising faces in selfie images. To alleviate this problem and to facilitate research on selfie face images, we develop a challenging Wild Selfie Dataset (WSD), where the images are captured with the selfie cameras of different smartphones, unlike existing datasets where most images are captured in controlled environments. The WSD dataset contains 45,424 images of 42 individuals (24 female and 18 male subjects), divided into 40,862 training and 4,562 test images. The average number of images per subject is 1,082, with a minimum of 518 and a maximum of 2,634 images for any subject. The proposed dataset covers several challenges, including but not limited to augmented reality filtering, mirrored images, occlusion, illumination, scale, expressions, view-point, aspect ratio, blur, partial faces, rotation, and alignment. We compare the proposed dataset with existing benchmark datasets in terms of different characteristics. The complexity of the WSD dataset is also observed experimentally: the performance of existing state-of-the-art face recognition methods is poor on WSD compared to existing datasets. Hence, the proposed WSD dataset opens up new challenges in the area of face recognition and can help the community study the specific challenges of selfie images and develop improved methods for face recognition in selfie images.
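The dataset statistics quoted above are internally consistent; a quick arithmetic check (all figures taken from the abstract):

```python
# Sanity check of the WSD figures reported in the abstract above.
TOTAL_IMAGES = 45_424
TRAIN_IMAGES = 40_862
TEST_IMAGES = 4_562
SUBJECTS = 42  # 24 female + 18 male

assert TRAIN_IMAGES + TEST_IMAGES == TOTAL_IMAGES  # split covers the whole set
assert 24 + 18 == SUBJECTS

avg_per_subject = TOTAL_IMAGES / SUBJECTS
print(round(avg_per_subject))  # 1082, matching the reported average
```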
AI facial recognition and biometric detection: balancing consumer rights and corporate interests
© 2021 IEEE. This is the accepted manuscript version of a conference proceeding which has been published in final form at https://doi.org/10.1109/ICCST49569.2021.9717403. The purpose of this study is two-fold: firstly, to critically assess the extent to which corporate actors can lawfully use artificial intelligence (AI) technology for real-time facial recognition biometric detection; secondly, to suggest and appraise some procedural safeguards to make the use of these systems by private actors compatible with consumers' right to protection of their personal data under the General Data Protection Regulation (GDPR). This study seeks to fill an existing gap in the literature. It concludes that unless the three variables suggested in the study are considered, that is, 'whether', 'when' and 'how' corporate actors can legally use AI for real-time facial recognition biometric detection, the use of this technology will violate consumers' data protection rights.
Classification of Occluded Objects using Fast Recurrent Processing
Recurrent neural networks are powerful tools for handling incomplete data
problems in computer vision, thanks to their significant generative
capabilities. However, the computational demand for these algorithms is too
high to work in real time, without specialized hardware or software solutions.
In this paper, we propose a framework for augmenting recurrent processing
capabilities into a feedforward network without sacrificing much from
computational efficiency. We assume a mixture model and generate samples of the
last hidden layer according to the class decisions of the output layer, modify
the hidden layer activity using the samples, and propagate to lower layers. For
visual occlusion problem, the iterative procedure emulates feedforward-feedback
loop, filling-in the missing hidden layer activity with meaningful
representations. The proposed algorithm is tested on a widely used dataset, and
shown to achieve 2 improvement in classification accuracy for occluded
objects. When compared to Restricted Boltzmann Machines, our algorithm shows
superior performance for occluded object classification.Comment: arXiv admin note: text overlap with arXiv:1409.8576 by other author
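The sample-and-blend loop described above can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: the mixture model is reduced to per-class mean activations (`class_means`), the output layer to a template-matching matrix, and all sizes and values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recurrent_refinement(h, W_out, class_means, n_iters=3, alpha=0.5):
    """Sketch of the feedforward-feedback loop: classify the hidden activity,
    sample a class-conditional template from the (toy) mixture model, and
    blend it into the hidden layer before re-classifying."""
    for _ in range(n_iters):
        probs = softmax(W_out @ h)            # output-layer class decision
        c = int(np.argmax(probs))
        sample = class_means[c] + 0.01 * rng.standard_normal(h.shape)
        h = (1 - alpha) * h + alpha * sample  # fill in occluded activity
    return h, int(np.argmax(softmax(W_out @ h)))

# Toy usage: 2 classes, 4 hidden units, an "occluded" (partly zeroed) vector.
class_means = np.array([[1.0, 1.0, 0.0, 0.0],
                        [0.0, 0.0, 1.0, 1.0]])
W_out = class_means.copy()                    # template-matching output layer
h_occluded = np.array([0.9, 0.0, 0.0, 0.0])   # half of the activity missing
h_refined, pred = recurrent_refinement(h_occluded, W_out, class_means)
print(pred)  # 0 — the missing units are filled in from the class-0 template
```

After a few iterations the occluded units approach the class template, which is the "filling-in" behaviour the abstract attributes to the feedback loop.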
Long range facial image acquisition and quality
Abstract This chapter introduces issues in long range facial image acquisition and measures for image quality and their usage. Section 1, on image acquisition for face recognition, discusses issues in lighting, sensors, lenses and blur, which impact short-range biometrics but are more pronounced in long-range biometrics. Section 2 introduces the design of controlled experiments for long range face acquisition, and why they are needed. Section 3 introduces some of the weather and atmospheric effects that occur in long-range imaging, with numerous examples. Section 4 addresses measurements of "system quality", including image-quality measures and their use in predicting face recognition algorithm performance. That section introduces the concept of failure prediction and techniques for analyzing different "quality" measures. The section ends with a discussion of post-recognition "failure prediction" and its potential role as a feedback mechanism in acquisition. Each section includes a collection of open-ended questions to challenge the reader to think about the concepts more deeply. Some of the questions are answered after they are introduced; others are left as an exercise for the reader. 1 Image Acquisition Before any recognition can even be attempted, the system must acquire an image of the subject with sufficient quality and resolution to detect and recognize the face. The issues examined in this section are lighting, image/sensor resolution, the field of view, the depth of field, and the effects of motion blur.
Colour Fusion in Face Authentication System Based on Visible and Near Infrared Images
In this paper, face authentication using images taken in the visible and near-infrared (NIR) spectra is studied. Visible images are in the RGB colour space and near-infrared images are in grey levels. First, the performance of the system in each of the primary colour spaces of the visible and near-infrared spectra is evaluated, where the verification process is based on the Normalised Correlation measure within the LDA feature space. To utilise the information in colour images, the scores associated with an adaptively selected subset of the colour-based classifiers are then fused at the decision level. The selection process is based on a sequential search technique called the "plus L and take away R" algorithm. The sum rule and the SVM rule are used for fusing the related scores. Our extensive experimental studies using the HFB face database demonstrate that with the proposed method, the performance of the system improves considerably compared to the individual visible-based or NIR-based face verification systems.
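The "plus L and take away R" search with sum-rule fusion can be sketched as below. This is a hypothetical illustration under toy assumptions: four invented colour-channel classifiers, a fixed 0.5 acceptance threshold, and accuracy on a tiny invented trial set as the selection criterion (the paper's actual criterion and data are not reproduced here).

```python
def fused_accuracy(subset, scores, labels):
    """Sum-rule fusion: average the scores of the selected classifiers and
    accept when the fused score crosses a fixed threshold."""
    correct = 0
    for per_clf, y in zip(scores, labels):
        fused = sum(per_clf[i] for i in subset) / len(subset)
        correct += int((fused >= 0.5) == y)
    return correct / len(labels)

def plus_l_take_away_r(scores, labels, n_clf, l=2, r=1, target=3):
    """Sketch of the 'plus L, take away R' sequential search: greedily add
    the l best classifiers, then drop the r weakest, until the selected
    subset reaches the target size (toy settings: l=2, r=1)."""
    subset = set()
    while len(subset) < target:
        for _ in range(l):   # plus-L step
            best = max((c for c in range(n_clf) if c not in subset),
                       key=lambda c: fused_accuracy(subset | {c}, scores, labels))
            subset.add(best)
        for _ in range(r):   # take-away-R step
            if len(subset) > 1:
                worst = min(subset,
                            key=lambda c: fused_accuracy(subset - {c}, scores, labels))
                subset.discard(worst)
    return sorted(subset)

# Toy usage: scores of 4 classifiers over 4 trials (label 1 = genuine).
scores = [[0.9, 0.8, 0.2, 0.6], [0.8, 0.7, 0.3, 0.9],
          [0.1, 0.2, 0.9, 0.4], [0.2, 0.3, 0.6, 0.1]]
labels = [1, 1, 0, 0]
print(plus_l_take_away_r(scores, labels, n_clf=4))
```

Because l > r, each outer iteration grows the subset by a net l − r classifiers, so the search terminates once the target size is reached.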
Evolving faces from principal components
A system that uses an underlying genetic algorithm to evolve faces in response to user selection is described. The descriptions of faces used by the system are derived from a statistical analysis of a set of faces. The faces used for generation are transformed to an average shape by defining locations around each face and morphing. The shape-free images and shape vectors are then separately subjected to principal component analysis. Novel faces are generated by recombining the image components ("eigenfaces") and then morphing their shape according to the principal components of the shape vectors ("eigenshapes"). The prototype system indicates that such statistical analysis of a set of faces can produce plausible, randomly generated photographic images.
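The eigenface-recombination step can be sketched as follows. This is a minimal hypothetical illustration: random vectors stand in for shape-free face images, the genetic-algorithm selection loop and the separate "eigenshape" morphing are omitted, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a shape-free face image set: 20 "faces" of 64 pixels each.
faces = rng.normal(size=(20, 64))
mean_face = faces.mean(axis=0)

# Principal components of the centred data via SVD ("eigenfaces").
_, s, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:5]                        # keep the top 5 components
stddevs = s[:5] / np.sqrt(len(faces) - 1)  # per-component spread of the data

def random_face():
    """Generate a novel face by recombining the eigenfaces with random
    weights drawn within the spread of the training set."""
    weights = rng.normal(scale=stddevs)    # one weight per component
    return mean_face + weights @ eigenfaces

novel = random_face()
print(novel.shape)  # (64,) — a new image in the span of the eigenfaces
```

Keeping the random weights on the scale of the training data is what makes the generated faces plausible rather than arbitrary, which matches the abstract's claim.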
Face Image Generation from Face Sketches using CycleGAN
Face sketches are a tool used by law enforcement agencies to identify criminal suspects. A face sketch is used when no photograph of the suspect from the crime scene is available. Face sketches are used to identify mugshots in a database with a face recognition system; because a face sketch has a different modality from a face photograph, for instance in facial texture, a new face image is generated from the input face sketch so that its texture resembles that of a real face image. CycleGAN is a method for image-to-image translation tasks and can be used for style transfer. Therefore, in this final project a model is developed that generates face images from face sketches, turning a face sketch into a face image with facial texture.
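The key training signal that lets CycleGAN learn the sketch-to-photo mapping without paired data is the cycle-consistency loss: translating a sketch to a photo and back should reconstruct the original sketch, and vice versa. A minimal framework-free sketch, with toy arrays standing in for images and plain functions standing in for the two generators:

```python
import numpy as np

def l1(a, b):
    """Mean absolute (L1) reconstruction error between two images."""
    return float(np.abs(a - b).mean())

def cycle_consistency_loss(G, F, sketch, photo, lam=10.0):
    """CycleGAN's cycle-consistency term: G maps sketches to photos and
    F maps photos to sketches; each round trip should be the identity.
    (lam = 10 is the weighting commonly used; adversarial terms omitted.)"""
    return lam * (l1(F(G(sketch)), sketch) + l1(G(F(photo)), photo))

# Toy usage with identity "generators": a perfect round trip costs nothing.
sketch = np.zeros((8, 8))
photo = np.ones((8, 8))
print(cycle_consistency_loss(lambda x: x, lambda x: x, sketch, photo))  # 0.0
```

In the full model this term is minimised jointly with adversarial losses for the two discriminators, which push G's outputs toward realistic photo texture.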
Fast, collaborative acquisition of multi-view face images using a camera network and its impact on real-time human identification
Biometric systems have typically been designed to operate under controlled environments based on previously acquired photographs and videos. But recent terror attacks, security threats and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans in real time under unconstrained environments. Distributed camera networks are appropriate for unconstrained scenarios because they can provide multiple views of a scene, offering tolerance against variable pose of a human subject and possible occlusions. In dynamic environments, face images continually arrive at the base station with differing quality, pose and resolution, and designing a fusion strategy poses significant challenges. Such a scenario demands that only the relevant information is processed and that the verdict (match / no match) regarding a particular subject is released quickly (yet accurately) so that more subjects in the scene can be evaluated. To address these challenges, we designed a wireless data acquisition system capable of acquiring multi-view faces accurately and at a rapid rate. The idea of epipolar geometry is exploited to achieve high multi-view face detection rates. Face images are labeled with their corresponding poses and transmitted to the base station. To evaluate the impact of face images acquired using our real-time face image acquisition system on the overall recognition accuracy, we interface it with a face matching subsystem, creating a prototype real-time multi-view face recognition system. For frontal face matching, we use the commercial PittPatt software. For non-frontal matching, we use a Local Binary Pattern based classifier. Matching scores obtained from both frontal and non-frontal face images are fused for final classification. Our results show significant improvement in recognition accuracy, especially when the frontal face images are of low resolution.
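The epipolar-geometry check mentioned above can be sketched as below. This is a hypothetical illustration, not the paper's implementation: a true correspondence satisfies x2ᵀ F x1 = 0, so a face detection in one camera can be validated by how far it lies from the epipolar line induced by a detection in another camera. The rectified-stereo fundamental matrix and the point coordinates are invented for the example.

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Distance (in pixels) of point x2 from the epipolar line F @ x1.
    Small residuals indicate the two detections are geometrically
    consistent, i.e. likely the same face seen from two cameras."""
    x1h = np.append(x1, 1.0)              # homogeneous coordinates
    x2h = np.append(x2, 1.0)
    line = F @ x1h                        # epipolar line in the second view
    return abs(x2h @ line) / np.hypot(line[0], line[1])

# Toy fundamental matrix for rectified stereo (pure horizontal translation):
# corresponding points then share the same image row.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
print(epipolar_residual(F, np.array([10., 20.]), np.array([35., 20.])))  # 0.0
print(epipolar_residual(F, np.array([10., 20.]), np.array([35., 28.])))  # 8.0
```

Thresholding this residual is a cheap way to reject spurious cross-camera pairings before any face matching is attempted.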