SupCon-MPL-DP: Supervised Contrastive Learning with Meta Pseudo Labels for Deepfake Image Detection
Recently, there has been considerable research on deepfake detection. However, most existing methods struggle to adapt to new generative models in unknown domains. In addition, the emergence of generative models capable of producing and editing high-quality images, such as diffusion models, consistency models, and latent consistency models (LCMs), poses a challenge for traditionally trained deepfake detectors. These advancements highlight the need to adapt and evolve existing deepfake detection techniques to effectively counter the threats posed by sophisticated image manipulation technologies. In this paper, our objective is to detect deepfake videos in unknown domains using unlabeled data. Specifically, our proposed approach, termed SupCon-MPL, combines Meta Pseudo Labels (MPL) with supervised contrastive learning, allowing the model to be trained on unlabeled images. MPL trains a teacher model and a student model simultaneously, where the teacher generates pseudo labels used to train the student. This method aims to enhance the adaptability and robustness of deepfake detection systems against emerging unknown domains. Supervised contrastive learning uses labels to pull samples of the same class closer together in the embedding space while pushing samples of different classes further apart. This helps the model learn features from a diverse set of deepfake images, which in turn improves deepfake detection performance in unknown domains. With a ResNet50 backbone, SupCon-MPL improved accuracy by 1.58% over conventional MPL in known-domain detection, by 1.32% on unknown domains from the same generation of generative models, and by 8.74% on unknown domains from later generations.
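The sketch below is a minimal illustration, not the authors' released code: it combines one simplified Meta Pseudo Labels update with a supervised contrastive loss on a ResNet50 backbone, as the abstract describes. The Encoder class, the scalar feedback signal used in place of the full MPL teacher gradient, and all hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from torch import nn
from torchvision import models


class Encoder(nn.Module):
    """ResNet50 backbone with a classification head and a projection head."""
    def __init__(self, num_classes=2, proj_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.projector = nn.Linear(feat_dim, proj_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), F.normalize(self.projector(h), dim=1)


def supcon_loss(z, labels, temperature=0.07):
    """Supervised contrastive loss: pull same-class embeddings together,
    push different-class embeddings apart."""
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp = torch.exp(logits).masked_fill(self_mask, 0.0)
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True) + 1e-12)
    pos = (pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return -pos.mean()


def supcon_mpl_step(teacher, student, t_opt, s_opt, x_lab, y_lab, x_unlab, alpha=1.0):
    """One simplified SupCon-MPL iteration (illustrative, not the paper's exact update)."""
    # 1) Teacher produces hard pseudo labels for the unlabeled images.
    with torch.no_grad():
        pseudo = teacher(x_unlab)[0].argmax(dim=1)

    # 2) Student trains on the pseudo-labeled batch; its labeled-data loss before
    #    and after the update gives a scalar feedback signal for the teacher.
    with torch.no_grad():
        loss_before = F.cross_entropy(student(x_lab)[0], y_lab)
    s_opt.zero_grad()
    F.cross_entropy(student(x_unlab)[0], pseudo).backward()
    s_opt.step()
    with torch.no_grad():
        loss_after = F.cross_entropy(student(x_lab)[0], y_lab)
    feedback = (loss_before - loss_after).item()  # > 0 if the pseudo labels helped

    # 3) Teacher update: supervised CE + supervised contrastive loss on labeled data,
    #    plus the feedback-weighted loss on its own pseudo-labeled predictions.
    t_opt.zero_grad()
    logits, z = teacher(x_lab)
    loss = (F.cross_entropy(logits, y_lab)
            + alpha * supcon_loss(z, y_lab)
            + feedback * F.cross_entropy(teacher(x_unlab)[0], pseudo))
    loss.backward()
    t_opt.step()
```

In the full MPL formulation the teacher's feedback comes from the student's gradient on labeled data; the scalar before/after difference above is a deliberate simplification to keep the sketch short.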
Early Fire Detection System by Using Automatic Synthetic Dataset Generation Model Based on Digital Twins
Fire is amorphous and behaves differently depending on the space, environment, and burning material. In particular, early fire detection is critical for preventing large-scale accidents; however, there are currently almost no early-fire datasets suitable for machine learning. This paper proposes an early fire detection system optimized for a specific space, using a digital-twin-based model that automatically generates fire training data for each space. The proposed method builds a digital twin of the real space and automatically generates realistic, particle-simulation-based synthetic fire data on an RGB-D image matched to the view angle of the monitoring camera. In other words, our method generates synthetic fire data for various fire scenarios in each specific space, fine-tunes a state-of-the-art detection model on these datasets via transfer learning, and deploys the resulting models to AIoT devices in the real space. Space-optimized synthetic fire data generation can increase the accuracy and reduce the false detection rate of existing fire detection models that are not adapted to a specific space.
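The following is a minimal sketch, not the paper's implementation, of the transfer-learning step: fine-tuning a pre-trained detector on space-specific synthetic fire images. The SyntheticFireDataset sample format and the choice of Faster R-CNN are assumptions made for illustration; the abstract refers only to a "state-of-the-art detection model".

```python
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from PIL import Image


class SyntheticFireDataset(Dataset):
    """Synthetic fire images composited onto the monitored space, with fire
    bounding boxes taken from the particle-simulation renderer (assumed format)."""
    def __init__(self, samples):
        # samples: list of (image_path, [[x1, y1, x2, y2], ...]) pairs
        self.samples = samples
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, boxes = self.samples[idx]
        image = self.to_tensor(Image.open(path).convert("RGB"))
        target = {
            "boxes": torch.tensor(boxes, dtype=torch.float32),
            "labels": torch.ones(len(boxes), dtype=torch.int64),  # class 1 = fire
        }
        return image, target


def fine_tune(samples, num_epochs=5, lr=1e-4, device="cuda"):
    # Start from COCO-pretrained weights and replace the box head with a
    # 2-class predictor (background + fire): standard transfer learning.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
    model.to(device).train()

    loader = DataLoader(SyntheticFireDataset(samples), batch_size=4, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    for _ in range(num_epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)  # dict of detection losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model  # ready to export to the AIoT device monitoring the space
```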