19 research outputs found
TAPCHA: An Invisible CAPTCHA Scheme
TAPCHA is a universal CAPTCHA scheme designed for touch-enabled smart devices such as
smartphones, tablets and smartwatches. The main difference between TAPCHA and other
CAPTCHA schemes is that TAPCHA retains its security by making the CAPTCHA test 'invisible' to bots. It then utilises context effects to keep the instruction readable for human users, which in turn guarantees the usability of the scheme. Two reference designs, namely TAPCHA SHAPE & SHADE and TAPCHA MULTI, are developed to demonstrate the use of this scheme.
Embedded noninteractive continuous bot detection
Multiplayer online computer games are quickly growing in popularity, with millions of players logging in every day. While most play in accordance with the rules set up by the game designers, some choose to utilize artificially intelligent assistant programs, a.k.a. bots, to gain an unfair advantage over other players. In this article we demonstrate how an embedded noninteractive test can be used to prevent automatic artificially intelligent players from illegally participating in online game-play. Our solution has numerous advantages over traditional tests, such as its nonobtrusive nature, continuous verification, and simple noninteractive and outsourcing-proof design. © 2008 ACM
jCAPTCHA: Accessible Human Validation
CAPTCHAs are a widely deployed mechanism for ensuring that a web site user is a human, and not a software agent. They ought to be relatively easy for a human to solve, but hard for software to interpret. Most CAPTCHAs are visual, and this marginalises users with visual impairments. A variety of audible CAPTCHAs have been trialled, but these have not been very successful, largely because they are easily interpreted by automated tools and, at the same time, tend to be too challenging for the very humans they are supposed to verify. In this paper an alternative audio CAPTCHA, jCAPTCHA (Jumbled Words CAPTCHA), is presented. We report on the evaluation of jCAPTCHA by 272 human users, of whom 169 used screen readers, both in terms of usability and resistance to software interpretation.
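The abstract does not specify how the words are jumbled. As an illustrative sketch only (the function name and the interior-letter scrambling rule are assumptions, not the authors' method), one common word-jumbling approach keeps the first and last letters fixed so humans can still recognise the word while automated tools struggle:

```python
import random

def jumble(word, rng=random.Random(42)):
    """Shuffle a word's interior letters, keeping the first and last
    letters fixed; short words are returned unchanged."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]
```

In an audio CAPTCHA, a jumbled word like this would be synthesised to speech and the listener asked to recover the original word.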
Utilizing CAPTCHAs to Guarantee Security Against Fraud in Digital Marketing
Advertising on the Internet is fundamental to the success of many businesses today. The Internet has evolved to the point where, through the consolidation of online advertising, it became possible to build a business model for digital marketing based on online ads. However, some content publishers are dishonest: they use automated tools to generate traffic and profit by defrauding advertisers. Similarly, some advertisers use automated tools to click on their competitors' ads in order to exhaust their marketing budgets. In this article, an approach to preventing click fraud through the use of clickable CAPTCHAs is proposed, unlike several previously studied approaches, which focus on detection once the fraud has already occurred.
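The prevention idea can be sketched minimally (this flow, including the `issue_challenge`/`redeem_click` names and the one-time-token design, is an assumption for illustration, not the paper's protocol): a click is only counted as billable once the clicker solves a challenge bound to a single-use token.

```python
import secrets

# Pending tokens: token -> ad identifier. A token is issued when the
# ad is served and consumed on the first redemption attempt.
_pending = {}

def issue_challenge(ad_id):
    """Issue a one-time token shown alongside the ad's CAPTCHA."""
    token = secrets.token_hex(8)
    _pending[token] = ad_id
    return token

def redeem_click(token, captcha_solved):
    """Bill the click only if the token is fresh and the CAPTCHA was
    solved; replayed or unsolved clicks are discarded."""
    ad_id = _pending.pop(token, None)
    return ad_id is not None and captcha_solved
```

Because bots cannot solve the CAPTCHA, their clicks never become billable, which prevents the fraud rather than detecting it after the fact.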
Comparing Usability of Text-Based and Image-Based CAPTCHAs
CAPTCHAs are challenges that are designed to distinguish humans from computer programs, especially to prevent automated attacks. The designs of CAPTCHAs are constantly changing to match advances in artificial intelligence and computer vision, and the capabilities of malicious actors. As new CAPTCHAs are implemented, they need to be freshly evaluated in terms of usability alongside security. This paper describes the design for a laboratory experiment to assess the usability of text-based and image-based CAPTCHAs by comparing how long they take to complete and how accurately they can be solved.
A Survey on Breaking Technique of Text-Based CAPTCHA
The CAPTCHA has become an important issue in multimedia security. Focusing on the commonly used text-based CAPTCHA, this paper outlines some typical methods and summarizes the technological progress in text-based CAPTCHA breaking. First, the paper presents a comprehensive review of recent developments in the text-based CAPTCHA breaking field. Second, a framework for text-based CAPTCHA breaking is proposed, consisting mainly of preprocessing, segmentation, combination, recognition, and postprocessing modules. Third, the research progress of the techniques involved in each module is introduced, and some typical segmentation and recognition methods are compared and analyzed. Lastly, the paper discusses some problems worth further research.
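The module breakdown described in the survey can be rendered as a pipeline skeleton. This is a structural sketch only: the stage functions below are placeholders (the real preprocessing, segmentation, and recognition methods vary per scheme and are surveyed in the paper, not reproduced here).

```python
from typing import List

def preprocess(image):
    # e.g. binarisation and noise removal (placeholder)
    return image

def segment(image) -> List:
    # split the CAPTCHA image into per-character slices (placeholder)
    return [image]

def recognize(char_image) -> str:
    # classify a single character slice (placeholder)
    return "?"

def postprocess(text: str) -> str:
    # e.g. dictionary-based correction of the recognized string
    return text

def break_captcha(image) -> str:
    """Chain the modules in the order the survey's framework lists them."""
    chars = segment(preprocess(image))
    return postprocess("".join(recognize(c) for c in chars))
```

Each real breaking system instantiates these stages differently; the survey's comparison of segmentation and recognition methods maps onto swapping the middle two functions.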
No Bot Expects the DeepCAPTCHA! Introducing Immutable Adversarial Examples, with Applications to CAPTCHA Generation
Recent advances in Deep Learning (DL) allow for solving complex AI problems that used to be considered very hard. While this progress has advanced many fields, it is considered to be bad news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the security of which rests on the hardness of some learning problems.
In this paper we introduce DeepCAPTCHA, a new and secure CAPTCHA scheme based on adversarial examples, an inherent limitation of current Deep Learning networks. These adversarial examples are constructed inputs, either synthesized from scratch or computed by adding a small and specific perturbation called adversarial noise to correctly classified items, causing the targeted DL network to misclassify them. We show that plain adversarial noise is insufficient to achieve secure CAPTCHA schemes, which leads us to introduce immutable adversarial noise — an adversarial noise that is resistant to removal attempts. In this work we implement a proof-of-concept system, and its analysis shows that the scheme offers high security and good usability compared to the best previously existing CAPTCHAs.
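The perturbation idea can be illustrated with a sign-gradient step in the style of FGSM on a toy linear classifier. This is a minimal sketch of plain adversarial noise only (the paper's contribution, immutable adversarial noise, is a hardened variant not reproduced here), and all names and values below are illustrative assumptions.

```python
def sign(v):
    """Elementwise sign for a scalar."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    """Linear classifier score: positive means class 1."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(x, w, epsilon):
    """For score s = w.x, the gradient w.r.t. x is w, so stepping a
    small epsilon along -sign(w) pushes the score toward the other
    class while changing each coordinate by at most epsilon."""
    return [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

w = [1.0, -2.0, 0.5]
x = [0.3, -0.1, 0.2]          # score 0.6 > 0: classified as class 1
x_adv = fgsm_perturb(x, w, epsilon=0.5)
```

Against deep networks the same principle applies with the loss gradient in place of `w`; the paper's point is that such noise must also resist removal (e.g. denoising) to be usable for CAPTCHAs.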
SoK: The Ghost Trilemma
Trolls, bots, and sybils distort online discourse and compromise the security
of networked platforms. User identity is central to the vectors of attack and
manipulation employed in these contexts. However, it has long seemed that, try
as it might, the security community has been unable to stem the rising tide of
such problems. We posit the Ghost Trilemma, that there are three key properties
of identity -- sentience, location, and uniqueness -- that cannot be
simultaneously verified in a fully-decentralized setting. Many
fully-decentralized systems -- whether for communication or social coordination
-- grapple with this trilemma in some way, perhaps unknowingly. We examine the
design space, use cases, problems with prior approaches, and possible paths
forward. We sketch a proof of this trilemma and outline options for practical,
incrementally deployable schemes to achieve an acceptable tradeoff of trust in
centralized trust anchors, decentralized operation, and an ability to withstand
a range of attacks, while protecting user privacy.