Sri Shakthi Institute of Engineering and Technology
Abstract
Advancements in deep learning have led to the emergence of highly realistic AI-generated videos known as deepfakes. These videos use generative models to convincingly alter facial features, identities, or expressions. Because of their realism, deepfakes pose significant threats: they can mislead or manipulate viewers, undermining trust and carrying legal, political, and social repercussions. To address these challenges, researchers are actively developing strategies to detect deepfake content, which are essential for safeguarding privacy and combating the spread of manipulated media. This article surveys current methods for generating deepfake images and videos, focusing on facial feature and expression alterations. It also provides an overview of publicly available deepfake datasets, which are crucial for developing and evaluating detection systems. Additionally, it examines the challenges of identifying deepfake face swaps and expression changes, and proposes future research directions to overcome these hurdles. By offering guidance to researchers, the article aims to foster the development of robust deepfake detection solutions, contributing to the preservation of the integrity and reliability of visual media.