
    Human-artificial intelligence approaches for secure analysis in CAPTCHA codes

    CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has long been used to keep automated bots from misusing web services by leveraging human-artificial intelligence (HAI) interactions to distinguish whether the user is a human or a computer program. Various CAPTCHA schemes have been proposed over the years, principally to increase usability and security against emerging bots and hackers performing malicious operations. However, automated attacks have effectively cracked all common conventional schemes, and the majority of present CAPTCHA methods are also vulnerable to human-assisted relay attacks. Invisible reCAPTCHA and a few other approaches have not yet been cracked; however, with the introduction of fourth-generation bots that accurately mimic human behavior, a secure CAPTCHA can hardly be designed without additional special devices. Almost all cognitive-based CAPTCHAs with sensor support have not yet been compromised by automated attacks, but they remain vulnerable to human-assisted relay attacks because they offer a limited number of challenges and can only be solved on trusted devices. Cognitive-based CAPTCHA schemes therefore have a clear advantage over other schemes in the race against security attacks. In this study, as a strong starting point for creating future secure and usable CAPTCHA schemes, we offer an overview of HAI between computer users and computers, focusing on the open problems, difficulties, and opportunities of current CAPTCHA schemes. (Web of Science, 2022)

    Research trends on CAPTCHA: A systematic literature review

    The advent of technology has crept into virtually all sectors, culminating in automated processes that use the Internet to execute various tasks and actions. Web services have become the standard way of providing solutions to mundane tasks. However, this development comes with the bottleneck of verifying the authenticity and intent of users. Providers of these Web services, whether as a platform, as software, or as infrastructure, use various human interaction proofs (HIPs) to validate the authenticity and intent of their users. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), a form of intrusion detection system (IDS) for web services, is advantageous here. Research into CAPTCHA can be grouped into two areas: CAPTCHA development and CAPTCHA recognition. Selective learning and convolutional neural networks (CNNs), as well as deep convolutional neural networks (DCNNs), have become emerging trends in both the development and recognition of CAPTCHAs. This paper critically reviews over fifty published articles that show the current trends in CAPTCHA schemes, their development and recognition mechanisms, and the way forward toward robust yet secure CAPTCHA development, guiding future research in the subject domain.

    No Bot Expects the DeepCAPTCHA! Introducing Immutable Adversarial Examples, with Applications to CAPTCHA Generation

    Recent advances in Deep Learning (DL) allow for solving complex AI problems that used to be considered very hard. While this progress has advanced many fields, it is considered to be bad news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the security of which rests on the hardness of some learning problems. In this paper we introduce DeepCAPTCHA, a new and secure CAPTCHA scheme based on adversarial examples, an inherent limitation of current Deep Learning networks. These adversarial examples are constructed inputs, either synthesized from scratch or computed by adding a small and specific perturbation called adversarial noise to correctly classified items, causing the targeted DL network to misclassify them. We show that plain adversarial noise is insufficient to achieve secure CAPTCHA schemes, which leads us to introduce immutable adversarial noise, an adversarial noise that is resistant to removal attempts. In this work we implement a proof-of-concept system, and its analysis shows that the scheme offers high security and good usability compared to the best previously existing CAPTCHAs.
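    The perturbation mechanism the abstract describes can be sketched with the classic fast-gradient-sign construction on a toy softmax classifier. This is an illustrative assumption, not DeepCAPTCHA's actual (immutable-noise) construction: the classifier, weights, and epsilon value below are all invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Toy linear classifier: logits = W @ x (hypothetical weights).
    W = rng.normal(size=(3, 8))          # 3 classes, 8 input features
    x = rng.normal(size=8)               # a "correctly classified" item
    p = softmax(W @ x)
    true_label = int(np.argmax(p))

    # Gradient of the cross-entropy loss w.r.t. the input:
    # dL/dx = W^T (p - y_onehot)
    y = np.eye(3)[true_label]
    grad_x = W.T @ (p - y)

    # Adversarial example: add a small sign-of-gradient perturbation
    # ("adversarial noise") that pushes the loss upward.
    eps = 0.5
    x_adv = x + eps * np.sign(grad_x)

    loss_orig = -np.log(p[true_label])
    loss_adv = -np.log(softmax(W @ x_adv)[true_label])
    ```

    Because the loss is convex in the input for a linear model, the perturbed item is guaranteed to score worse on its original label; whether the class actually flips depends on the margin and epsilon, which is exactly why the paper argues plain noise alone is too fragile for a secure scheme.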
