A Comparative Evaluation of Heart Rate Estimation Methods using Face Videos
This paper presents a comparative evaluation of methods for remote heart rate
estimation from face videos, i.e., given a video sequence of the face as
input, methods that process it to obtain a robust estimate of the subject's
heart rate at each moment. Four alternatives from the literature are tested,
three based on hand-crafted approaches and one based on deep learning. The
methods are compared using RGB videos from the COHFACE database. Experiments
show that the learning-based method achieves much better accuracy than the
hand-crafted ones. The low error rate achieved by the learning-based model
makes its application possible in real scenarios, e.g. in medical or sports
environments.
Comment: Accepted in "IEEE International Workshop on Medical Computing
(MediComp) 2020"
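The core idea behind rPPG-based heart rate estimation can be illustrated with a minimal sketch: average the skin color inside the face region per frame, then find the dominant frequency of that trace in the plausible heart-rate band. This is a simplified stand-in (names and the single-channel, FFT-peak design are assumptions for illustration), not any of the four evaluated methods:

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (BPM) from a per-frame mean green-channel trace.

    green_means: 1-D array of the mean green intensity inside the face
    region for each frame (hypothetical preprocessing); fps: frame rate.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                 # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to plausible human heart rates: 0.7-4.0 Hz (42-240 BPM)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                         # Hz -> beats per minute

# Synthetic check: a 1.2 Hz pulse (72 BPM) sampled at 30 fps for 10 s
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 30)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(len(t))
bpm = estimate_heart_rate(trace, fps=30)
```

Real hand-crafted methods add skin segmentation, chrominance projections, and band-pass filtering on top of this skeleton; the learning-based alternative replaces the fixed pipeline with a trained network.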
How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals
Fake portrait video generation techniques have been posing a new threat to
society, with photorealistic deep fakes used for political propaganda,
celebrity imitation, forged evidence, and other identity-related
manipulations. Following these generation techniques, some detection
approaches have also proven useful due to their high classification accuracy.
Nevertheless, almost no effort has been spent on tracking down the source of
deep fakes. We propose an approach not only to separate deep fakes from real
videos, but also to discover the specific generative model behind a deep
fake. Some purely deep-learning-based approaches try to classify deep fakes
using CNNs, where they actually learn the residuals of the generator. We
believe that these residuals contain more information and that we can reveal
these manipulation artifacts by disentangling them with biological signals.
Our key observation is that the spatiotemporal patterns in biological signals
can be conceived as a representative projection of residuals. To justify this
observation, we extract PPG cells from real and fake videos and feed them to
a state-of-the-art classification network for detecting the generative model
per video. Our results indicate that our approach can detect fake videos with
97.29% accuracy, and the source model with 93.39% accuracy.
Comment: To be published in the proceedings of the 2020 IEEE/IAPR
International Joint Conference on Biometrics (IJCB)
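The "PPG cell" idea can be sketched as packing per-region rPPG traces and their spectra into a 2-D array that a CNN can classify. The layout below (raw traces stacked above magnitude spectra, region count and normalization) is a hypothetical reading of the paper, not its exact construction:

```python
import numpy as np

def build_ppg_cell(region_traces):
    """Stack per-region rPPG traces and their spectra into a 2-D "cell".

    region_traces: array of shape (n_regions, n_frames) holding the mean
    skin color per face region per frame (hypothetical preprocessing).
    Returns an image-like array of shape (2 * n_regions, n_frames) so a
    CNN sees both temporal and spectral patterns of the residuals.
    """
    traces = np.asarray(region_traces, dtype=float)
    traces = traces - traces.mean(axis=1, keepdims=True)  # remove DC per region
    spectra = np.abs(np.fft.rfft(traces, axis=1))

    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    n = traces.shape[1]
    # Repeat spectral bins so both halves share the same width, then stack
    spec_img = np.repeat(spectra[:, : n // 2], 2, axis=1)
    return np.vstack([norm(traces), norm(spec_img)])

rng = np.random.default_rng(1)
cell = build_ppg_cell(rng.normal(size=(8, 64)))  # 8 face regions, 64 frames
```

The classifier then treats each cell as an image; residuals left by different generators produce distinctive patterns in these signals, which is what enables source attribution.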
DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame
This chapter describes a DeepFake detection framework based on physiological measurement. In particular, we consider information related to the heart rate using remote photoplethysmography (rPPG). rPPG methods analyze video sequences looking for subtle color changes in the human skin, revealing the presence of human blood under the tissues. This chapter explores to what extent rPPG is useful for the detection of DeepFake videos. We analyze the recent fake detector named DeepFakesON-Phys, which is based on a Convolutional Attention Network (CAN) that extracts spatial and temporal information from video frames, analyzing and combining both sources to better detect fake videos. DeepFakesON-Phys has been experimentally evaluated using the latest public databases in the field: Celeb-DF v2 and DFDC. The results achieved for DeepFake detection based on a single frame are over 98% AUC (Area Under the Curve) on both databases, proving the success of fake detectors based on physiological measurement in detecting the latest DeepFake videos. In this chapter, we also propose and study heuristic and statistical approaches for performing continuous DeepFake detection by combining scores from consecutive frames with low latency and high accuracy (100% on the Celeb-DF v2 evaluation dataset). We show that combining scores extracted from short-time video sequences can improve the discrimination power of DeepFakesON-Phys.
This work has been supported by projects: PRIMA (H2020-MSCA-ITN2019-860315), TRESPASS-ETN (H2020-MSCA-ITN-2019-860813), BIBECA (MINECO/FEDER RTI2018-101248-B-I00), and COST CA16101 (MULTI-FORESEE). J. H.-O. is supported by a PhD fellowship from UA
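One simple statistical way to combine per-frame scores into a continuous decision, as the chapter proposes, is a short sliding-window mean: noisy single-frame scores are smoothed before thresholding. The window size and mean rule here are illustrative assumptions, one of several plausible fusion strategies rather than the chapter's exact method:

```python
import numpy as np

def fuse_scores(frame_scores, window=16):
    """Fuse per-frame fakeness scores with a sliding-window mean.

    Returns one fused score per window position; short windows keep
    latency low while averaging out single-frame noise.
    """
    scores = np.asarray(frame_scores, dtype=float)
    return np.convolve(scores, np.ones(window) / window, mode="valid")

# Noisy per-frame scores: a real clip (low scores) vs a fake clip (high)
rng = np.random.default_rng(0)
real = np.clip(rng.normal(0.2, 0.15, 200), 0.0, 1.0)
fake = np.clip(rng.normal(0.8, 0.15, 200), 0.0, 1.0)
real_fused = fuse_scores(real)
fake_fused = fuse_scores(fake)
```

After fusion, the two clips separate far more cleanly than individual frames do, which is why short-sequence combination improves discrimination without adding much latency.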
FaceForensics++: Learning to Detect Manipulated Facial Images
The rapid progress in synthetic image generation and manipulation has now
come to a point where it raises significant concerns for its implications
for society. At best, this leads to a loss of trust in digital content, but
it could potentially cause further harm by spreading false information or
fake news. This paper examines the realism of state-of-the-art image
manipulations, and how difficult it is to detect them, either automatically
or by humans. To standardize the evaluation of detection methods, we propose
an automated benchmark for facial manipulation detection. In particular, the
benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as
prominent representatives of facial manipulations at random compression
levels and sizes. The benchmark is publicly available and contains a hidden
test set as well as a database of over 1.8 million manipulated images. This
dataset is over an order of magnitude larger than comparable, publicly
available forgery datasets. Based on this data, we performed a thorough
analysis of data-driven forgery detectors. We show that the use of
additional domain-specific knowledge improves forgery detection to
unprecedented accuracy, even in the presence of strong compression, and
clearly outperforms human observers.
Comment: Video: https://youtu.be/x2g48Q2I2Z