Face Liveness Detection for Biometric Antispoofing Applications using Color Texture and Distortion Analysis Features
Face recognition is a widely used biometric approach. The technology has developed rapidly in recent years and is more direct, user friendly and convenient than other biometric methods. However, face recognition systems are vulnerable to spoof attacks made with non-real faces: they can easily be fooled with facial pictures such as portrait photographs. A secure system therefore needs liveness detection in order to guard against such spoofing. In this work, face liveness detection approaches are categorized according to the type of technique used for liveness detection. This categorization helps in understanding different spoof attack scenarios and their relation to the developed solutions. A review of the latest work on face liveness detection is presented. The main aim is to provide a clear path for the future development of novel and more secure face liveness detection approaches.
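The colour-texture cues mentioned in the title can be illustrated with a minimal sketch: a hand-rolled 8-neighbour local binary pattern (LBP) is computed on each channel of a YCbCr conversion and the normalised histograms are concatenated into a descriptor. All function names and parameter choices below are illustrative assumptions, not the paper's implementation; a real system would feed such descriptors to a trained liveness classifier.

```python
import numpy as np

def lbp_codes(channel):
    """8-neighbour local binary pattern codes for a 2-D array (borders dropped)."""
    h, w = channel.shape
    center = channel[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= center).astype(np.int64) << bit
    return codes

def color_texture_features(rgb):
    """Concatenated, normalised LBP histograms of the Y, Cb and Cr channels."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    hists = [np.bincount(lbp_codes(ch).ravel(), minlength=256)
             for ch in (y, cb, cr)]
    feats = np.concatenate(hists).astype(np.float64)
    return feats / feats.sum()   # 768-dimensional descriptor
```

Working in a luminance-chrominance space rather than RGB is what makes such descriptors sensitive to the colour distortions that recapture introduces.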
Nonlinear kernel based feature maps for blur-sensitive unsharp masking of JPEG images
In this paper, a method for estimating the blurred regions of an image is first proposed, using a mixture of linear and nonlinear convolutional kernels. The resulting blur map is then used to enhance images such that the enhancement strength is an inverse function of the amount of measured blur. The blur map can also be used for tasks such as attention-based object classification, low-light image enhancement, and more. A CNN architecture with nonlinear upsampling layers is trained on a standard blur detection benchmark dataset with the help of blur target maps. It is further proposed to use the same architecture to build maps of areas affected by the typical JPEG artifacts, ringing and blockiness. Together, the blur map and the artifact map permit building an activation map for the enhancement of a (possibly JPEG-compressed) image. Extensive experiments on standard test images verify the quality of the maps obtained with the algorithm and their effectiveness in locally controlling the enhancement for superior perceptual quality. Last but not least, the computation time for generating these maps is much lower than that of other comparable algorithms.
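The blur-adaptive enhancement idea, sharpening strength as an inverse function of local blur, can be sketched with a classical unsharp mask whose gain is modulated per pixel by a blur map. This is a simplified stand-in (a Gaussian low-pass and hypothetical `base_gain` and `sigma` values), not the paper's CNN-driven pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_adaptive_unsharp(image, blur_map, base_gain=1.5, sigma=2.0):
    """Unsharp masking whose gain decreases where the blur map is high.

    image: 2-D float array in [0, 1]; blur_map: same shape, 1 = heavily blurred.
    Sharp regions (blur_map near 0) receive the full gain; blurred regions are
    left almost untouched so noise and compression artefacts are not amplified.
    """
    low = gaussian_filter(image, sigma)
    detail = image - low                              # high-frequency residual
    gain = base_gain * (1.0 - np.clip(blur_map, 0.0, 1.0))
    return np.clip(image + gain * detail, 0.0, 1.0)
```

With a constant all-ones blur map the function reduces to the identity (no sharpening), which is the intended behaviour in fully blurred or heavily compressed regions.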
Digital forensic techniques for the reverse engineering of image acquisition chains
In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer the history of those images. Images, however, may have gone through a series of processing and modification steps during their lifetime. It is therefore difficult to detect image tampering, as the footprints could be distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of image acquisition and reproduction.
This thesis presents two different approaches to address the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve through the chain at its different stages using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain. It also makes it possible to estimate important parameters of the chain from the acquisition-reconstruction artefacts left on the signal.
The second part of the thesis presents our new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built using the dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high-quality recaptured images. Our results show that our method achieves a performance rate that exceeds 99% for recaptured images and 94% for single-captured images.
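A toy version of the second approach, reconstruction errors under two learned dictionaries plus an edge-spread feature fed to an SVM, might look as follows. scikit-learn ships no K-SVD, so `MiniBatchDictionaryLearning` stands in for it here, and the patch data and edge widths are synthetic placeholders rather than the thesis's real features:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(0)
single = rng.normal(size=(200, 16))        # toy patches, single-capture class
recap = 0.5 * rng.normal(size=(200, 16))   # toy patches, recaptured class

# One dictionary per class (MiniBatchDictionaryLearning stands in for K-SVD).
d_single = MiniBatchDictionaryLearning(n_components=8, random_state=0).fit(single)
d_recap = MiniBatchDictionaryLearning(n_components=8, random_state=0).fit(recap)

def recon_errors(dico, X):
    """Per-sample squared reconstruction error under a learned dictionary."""
    codes = dico.transform(X)
    return np.sum((X - codes @ dico.components_) ** 2, axis=1)

X = np.vstack([single, recap])
edge_width = rng.uniform(1.0, 3.0, size=len(X))  # stand-in edge-spread widths
features = np.column_stack([recon_errors(d_single, X),
                            recon_errors(d_recap, X),
                            edge_width])
labels = np.array([0] * len(single) + [1] * len(recap))
clf = SVC(kernel="rbf").fit(features, labels)
```

The intuition is that a class-specific dictionary reconstructs patches from its own class with lower error, so the pair of errors is already a discriminative two-dimensional feature before the edge-width cue is appended.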
Development and Clinical Evaluation of an AI Support Tool for Improving Telemedicine Photo Quality
Telemedicine utilization accelerated during the COVID-19 pandemic, and skin conditions were a common use case. However, the quality of photographs sent by patients remains a major limitation. To address this issue, we developed TrueImage 2.0, an artificial intelligence (AI) model that assesses the quality of patient photos for telemedicine and provides real-time feedback to patients for photo quality improvement. TrueImage 2.0 was trained on 1700 telemedicine images annotated by clinicians for photo quality. On a retrospective dataset of 357 telemedicine images, TrueImage 2.0 effectively identified poor-quality images (receiver operating characteristic area under the curve (ROC-AUC) = 0.78) and the reason for poor quality (blurry: ROC-AUC = 0.84; lighting issues: ROC-AUC = 0.70). The performance is consistent across age, gender, and skin tone. Next, we assessed whether patient interaction with TrueImage 2.0 led to an improvement in submitted photo quality through a prospective clinical pilot study with 98 patients. TrueImage 2.0 reduced the number of patients with a poor-quality image by 68.0%.
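A classical, non-learned stand-in for the "blurry" cue that such a quality model detects is the variance of the Laplacian: sharp photos have high-variance second derivatives, blurred ones do not. The threshold below is a hypothetical placeholder that would have to be tuned on labelled data; this is not the TrueImage 2.0 model itself:

```python
import numpy as np
from scipy.ndimage import laplace

def blurriness_score(gray):
    """Variance of the Laplacian; lower values indicate a blurrier photo."""
    return float(np.var(laplace(gray.astype(np.float64))))

def looks_blurry(gray, threshold=1e-3):
    """Hypothetical decision rule with an untuned placeholder threshold."""
    return blurriness_score(gray) < threshold
```

Such a scalar cue is cheap enough to run client-side for real-time feedback, which is the interaction pattern the pilot study evaluates.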
Face image super-resolution using 2D CCA
In this paper a face super-resolution method using two-dimensional canonical correlation analysis (2D CCA) is presented. A detail compensation step is then applied to add high-frequency components to the reconstructed high-resolution face. Unlike most previous face super-resolution algorithms, which first transform the images into vectors, our approach maintains the relationship between the high-resolution and the low-resolution face images in their original 2D representation. In addition, rather than approximating the entire face, different parts of a face image are super-resolved separately to better preserve the local structure. The proposed method is compared with various state-of-the-art super-resolution algorithms using multiple evaluation criteria, including face recognition performance. Results on publicly available datasets show that the proposed method super-resolves high-quality face images that are very close to the ground truth, and that the performance gain is not dataset dependent. The method is also very efficient in both the training and testing phases compared to the other approaches. © 2013 Elsevier B.V.
Evaluating the Sensitivity of Face Presentation Attack Detection Techniques to Images of Varying Resolutions
In recent decades, emerging techniques for face Presentation Attack Detection (PAD) have reported remarkable performance in detecting attack presentations whose attack type and capture conditions are known a priori. However, the generalisation capability of PAD approaches deteriorates considerably when detecting unknown attacks. To tackle these generalisation issues, several PAD techniques have focused on detecting features common to known attacks in order to detect unknown Presentation Attack Instruments, without taking into account how intrinsic image properties such as image resolution or biometric quality could impact their detection performance. In this work, we carry out a thorough analysis of the sensitivity of several texture descriptors, which shows how training on images of varying resolutions leads to a marked decrease in attack detection performance.
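The resolution sensitivity being analysed can be reproduced in miniature: compute a simple texture descriptor (here a hypothetical gradient-magnitude histogram, not one of the paper's descriptors) at several zoom factors and measure how far the histograms drift from the full-resolution reference:

```python
import numpy as np
from scipy.ndimage import sobel, zoom

def grad_hist(gray, bins=16):
    """Normalised gradient-magnitude histogram: a simple texture descriptor."""
    mag = np.hypot(sobel(gray, axis=1), sobel(gray, axis=0))
    mag = mag / (mag.max() + 1e-12)      # scale-normalise for comparability
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def chi2(p, q):
    """Chi-square distance between two normalised histograms."""
    return 0.5 * float(np.sum((p - q) ** 2 / (p + q + 1e-12)))

rng = np.random.default_rng(0)
img = rng.random((128, 128))
reference = grad_hist(img)
drift = {s: chi2(reference, grad_hist(zoom(img, s))) for s in (0.5, 0.25)}
```

A descriptor whose `drift` values grow quickly as the scale shrinks is exactly the kind of resolution-sensitive feature whose training-time mismatch the paper identifies as harmful.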