183 research outputs found
Using Generative Adversarial Networks to Break and Protect Text Captchas
Text-based CAPTCHAs remain a popular scheme for distinguishing legitimate human users from automated programs. This article presents a novel generic text captcha solver based on generative adversarial networks. Unlike prior text captcha solvers, whose construction is labor-intensive and time-consuming, our scheme needs significantly fewer real captchas yet performs better at solving them. Our approach first learns a synthesizer that automatically generates synthetic captchas, which are used to train a base solver. It then fine-tunes the base solver with a small number of labeled real captchas. As a result, our attack requires only a small set of manually labeled captchas, which reduces the cost of launching an attack on a captcha scheme. We evaluate our scheme by applying it to 33 captcha schemes, of which 11 are currently used by 32 of the top-50 most popular websites. Experimental results demonstrate that our scheme significantly outperforms four prior captcha solvers and can solve captcha schemes where others fail. As a countermeasure, we propose adding imperceptible perturbations to the captcha image, and we demonstrate that this countermeasure greatly reduces the attack's success rate.
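The two-stage pipeline the abstract describes can be sketched numerically: pretrain a solver on plentiful synthetic captchas (a slightly shifted domain), then fine-tune it on a small labeled set of "real" ones. This is an illustrative toy, not the paper's system: a logistic-regression "solver" on random feature vectors stands in for the CNN, and all data, shifts, and hyperparameters are invented for the sketch.

```python
# Hedged sketch of synthetic pretraining + fine-tuning on few real labels.
# The "captchas" are toy 8-dimensional feature vectors, not images.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Toy 'captcha features': two classes separated along one axis,
    with an optional domain shift standing in for synthetic-vs-real gap."""
    X = rng.normal(size=(n, 8))
    y = rng.integers(0, 2, size=n)
    X[:, 0] += (2 * y - 1) * 1.5 + shift  # class signal (+ domain shift)
    return X, y

def train(X, y, w=None, epochs=200, lr=0.1):
    """Logistic regression via gradient descent (stands in for the CNN)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(int) == y).mean())

# Stage 1: base solver from plentiful synthetic captchas (shifted domain).
Xs, ys = make_data(2000, shift=0.5)
w_base = train(Xs, ys)

# Stage 2: fine-tune on a small set of labeled "real" captchas.
Xr, yr = make_data(50, shift=0.0)
w_tuned = train(Xr, yr, w=w_base.copy(), epochs=50)

Xtest, ytest = make_data(1000, shift=0.0)
print(accuracy(w_base, Xtest, ytest), accuracy(w_tuned, Xtest, ytest))
```

The design point the sketch preserves is that only stage 2 consumes manually labeled real captchas, which is what keeps the attack cheap.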
Diff-CAPTCHA: An Image-based CAPTCHA with Security Enhanced by Denoising Diffusion Model
To enhance the security of text CAPTCHAs, various methods have been employed, such as adding interference lines over the text, randomly distorting the characters, and overlapping multiple characters. These methods partly increase the difficulty of automated segmentation and recognition attacks. However, with the rapid development of end-to-end breaking algorithms, their security has been greatly weakened. The diffusion model is a novel image generation model that can generate text images in which characters and background are deeply fused. In this paper, an image-click CAPTCHA scheme called Diff-CAPTCHA is proposed based on denoising diffusion models. The background image and characters of the CAPTCHA are treated as a whole to guide the generation process of the diffusion model, thus weakening the character features available to machine learning, enhancing the diversity of character features in the CAPTCHA, and increasing the difficulty for breaking algorithms. To evaluate the security of Diff-CAPTCHA, this paper develops several attack methods, including end-to-end attacks based on Faster R-CNN and two-stage attacks, and compares Diff-CAPTCHA with three baseline schemes, including a commercial CAPTCHA scheme and a security-enhanced CAPTCHA scheme based on style transfer. The experimental results show that diffusion models can effectively enhance CAPTCHA security while maintaining good usability in human testing.
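The denoising-diffusion machinery Diff-CAPTCHA builds on can be illustrated by its forward (noising) process, which is what lets characters and background blend into a single learned distribution. This is a generic DDPM-style sketch with a standard linear noise schedule, not the paper's implementation; the image is a random array standing in for a captcha.

```python
# Illustrative forward diffusion: mix Gaussian noise into a "captcha image"
# over T steps, so structure fades until only the reverse model can recover it.
import numpy as np

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal retention

def q_sample(x0, t):
    """Closed-form forward step: x_t ~ N(sqrt(ab_t) * x0, (1 - ab_t) * I)."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.uniform(-1, 1, size=(32, 32))    # stand-in for a captcha image
x_mid = q_sample(x0, T // 2)              # partially noised
x_end = q_sample(x0, T - 1)               # almost pure noise
print(np.corrcoef(x0.ravel(), x_end.ravel())[0, 1])
```

By the final step the sample is nearly uncorrelated with the original image, which is the property that makes per-character features hard for an attacker's model to isolate in the generated output.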
Utilizing GANs for Fraud Detection: Model Training with Synthetic Transaction Data
Anomaly detection is a critical challenge across various research domains, aiming to identify instances that deviate from normal data distributions. This paper explores the application of Generative Adversarial Networks (GANs) in fraud detection, comparing their advantages with those of traditional methods. GANs, a type of artificial neural network (ANN), have shown promise in modeling complex data distributions, making them effective tools for anomaly detection. The paper systematically describes the principles of GANs and their derivative models, emphasizing their application to fraud detection across different datasets. By building a collection of adversarial verification images, we can effectively prevent fraud caused by bots or automated systems and ensure that the users in a transaction are real. The objective of the experiment is to design and implement a fake-face verification-code and fraud-detection system based on the GAN algorithm to enhance the security of the transaction process. The study demonstrates the potential of GANs to enhance transaction security through deep learning techniques.
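One way a trained GAN supports fraud detection is to reuse its discriminator as an anomaly scorer: inputs it judges unlikely to be "real" are flagged. The sketch below is a heavily simplified illustration of that idea, not the paper's system: the generator is replaced by a fixed noise distribution, the transactions are synthetic 4-dimensional vectors, and the discriminator is a logistic regression on quadratic features.

```python
# Hedged sketch: discriminator-as-anomaly-detector for toy transactions.
import numpy as np

rng = np.random.default_rng(2)

real = rng.normal(0.0, 1.0, size=(500, 4))     # legitimate transactions
fake = rng.uniform(-4.0, 4.0, size=(500, 4))   # stand-in generator samples

def feats(x):
    """Quadratic features let a linear discriminator score 'realness'."""
    return np.concatenate([x, x ** 2], axis=-1)

X = feats(np.vstack([real, fake]))
y = np.concatenate([np.ones(500), np.zeros(500)])   # 1 = real, 0 = fake

# Logistic-regression discriminator trained by gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

def realness(x):
    """Discriminator score in (0, 1); low values flag anomalous inputs."""
    return float(1 / (1 + np.exp(-(feats(x) @ w + b))))

normal_txn = np.zeros(4)       # near the legitimate distribution
bot_txn = np.full(4, 3.5)      # far outside it
print(realness(normal_txn), realness(bot_txn))
```

In a full GAN pipeline the fake side would come from the generator and improve over training, tightening the discriminator's notion of "normal"; here the fixed noise distribution merely illustrates the scoring mechanism.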
CAPTCHA Types and Breaking Techniques: Design Issues, Challenges, and Future Research Directions
The proliferation of the Internet and mobile devices has resulted in malicious bots gaining access to genuine resources and data. Bots may instigate phishing, unauthorized access, denial-of-service, and spoofing attacks, to mention a few. Authentication and testing mechanisms that verify end-users and prohibit malicious programs from infiltrating services and data are a strong defense against malicious bots. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is an authentication process that confirms the user is a human before access is granted. This paper provides an in-depth survey of CAPTCHAs and focuses on two main things: (1) a detailed discussion of various CAPTCHA types along with their advantages, disadvantages, and design recommendations, and (2) an in-depth analysis of different CAPTCHA breaking techniques. The survey is based on over two hundred studies on the subject conducted from 2003 to date. The analysis reinforces the need to design more attack-resistant CAPTCHAs while keeping their usability intact. The paper also highlights the design challenges and open issues related to CAPTCHAs, and provides useful recommendations for breaking them.
Detecting and Mitigating Adversarial Attack
Automating arrhythmia detection from ECG requires a robust, trusted system that retains high accuracy under electrical disturbances. Deep neural networks have become a popular technique for tracing ECG signals, outperforming human experts, and many approaches have reached human-level performance in classifying arrhythmia from ECGs. However, even convolutional neural networks are susceptible to adversarial examples that can misclassify ECG signals, and they do not generalize well to out-of-distribution data. Adversarial attacks are small crafted perturbations injected into the original data that manifest as out-of-distribution shifts in the signal and cause the correct class to be misclassified. The GAN architecture has been employed in recent works to synthesize adversarial ECG signals that augment existing training data, but these works use a disjoint CNN-based classification architecture to detect arrhythmia. Until now, no versatile architecture has been proposed that can detect adversarial examples and classify arrhythmia simultaneously. In this work, we propose two novel conditional generative adversarial networks (GANs), ECG-Adv-GAN and ECG-ATK-GAN, to simultaneously generate ECG signals for different categories and detect cardiac abnormalities. The model is conditioned on class-specific ECG signals to synthesize realistic adversarial examples. Moreover, ECG-ATK-GAN is robust to adversarially attacked ECG signals and retains high accuracy when exposed to various types of adversarial attacks while classifying arrhythmia. We benchmark our architecture on six different white-box and black-box attacks and compare it with other recently proposed arrhythmia classification models. Both targeted and non-targeted attack variants determine the perturbation by computing the gradient of the loss with respect to the input.
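The gradient-computed perturbation the abstract alludes to can be sketched in the FGSM style on a toy model: nudge the input by a small step eps in the direction that increases the classifier's loss. This is a generic illustration, not the paper's attack: the "ECG" is a random 16-dimensional feature vector and the classifier is a fixed logistic model, so the input gradient has a closed form.

```python
# FGSM-style perturbation of a toy "ECG" feature vector against a
# fixed logistic classifier (closed-form input gradient).
import numpy as np

rng = np.random.default_rng(3)

w = rng.normal(size=16)          # fixed linear classifier weights
x = rng.normal(size=16)          # clean "ECG" feature vector
y = 1                            # true label

def predict(x):
    """P(class 1) under the logistic model."""
    return 1 / (1 + np.exp(-(x @ w)))

# For this model, the cross-entropy loss gradient w.r.t. the input
# is (p - y) * w; FGSM takes its sign scaled by eps.
eps = 0.25
grad = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))   # the adversarial score is lower
```

Because the step follows the sign of the loss gradient, the perturbation is bounded by eps per component yet systematically pushes the prediction away from the true class, which is the out-of-distribution shift a defense such as ECG-ATK-GAN must withstand.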
Novel defenses are continually introduced to improve upon existing techniques and fend off each new attack. This back-and-forth game between attack and defense recurs persistently, so it has become important to understand the pattern and behavior of the attacker in order to build a robust defense. One widespread tactic is to apply a mathematically grounded model such as game theory. To analyze this situation, we propose a game-theoretic computational framework that studies a CNN classifier's vulnerabilities, strategies, and outcomes by forming a two-player game. We represent the interaction as a Stackelberg game in a Kuhn tree to study the players' possible behaviors and actions, applying the classifier's actual predicted values on a CAPTCHA dataset. Thus, we interpret potential attacks on deep learning applications while presenting viable defense strategies from a game-theoretic perspective.
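The Stackelberg structure described above can be made concrete in a few lines: the defender (leader) commits to a strategy, the attacker (follower) best-responds, and the leader chooses the commitment whose induced outcome it prefers. The payoff numbers below are invented for illustration, and the game is reduced to normal form rather than the paper's Kuhn-tree representation.

```python
# Toy Stackelberg equilibrium by backward induction over a payoff matrix.
import numpy as np

# Rows: defender strategies, columns: attacker strategies.
# All utilities are hypothetical, chosen only to illustrate the mechanism.
defender_payoff = np.array([[3, 0],    # weak defense
                            [2, 2]])   # robust defense
attacker_payoff = np.array([[1, 4],    # attacker vs weak defense
                            [1, 0]])   # attacker vs robust defense

def stackelberg(leader_u, follower_u):
    """Leader maximizes its payoff given the follower's best response."""
    best = None
    for i in range(leader_u.shape[0]):
        j = int(np.argmax(follower_u[i]))      # follower best-responds to i
        if best is None or leader_u[i, j] > leader_u[best[0], best[1]]:
            best = (i, j)
    return best

i, j = stackelberg(defender_payoff, attacker_payoff)
print(i, j)
```

With these payoffs the weak defense invites the attacker's damaging response, so the leader commits to the robust defense even though its best-case payoff is lower; this anticipation of the follower's reaction is exactly what the Stackelberg framing adds over a simultaneous-move analysis.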