
    The robustness of animated text CAPTCHAs

    PhD Thesis. CAPTCHA is a standard security technology that uses AI techniques to tell computers and humans apart. The most widely used CAPTCHAs are text-based schemes, whose robustness and usability rely mainly on segmentation resistance mechanisms that defend against individual character recognition attacks. However, many CAPTCHAs have been shown to have critical flaws caused by exploitable invariants in their design, leaving only a few schemes resistant to attack, including ReCAPTCHA and the Wikipedia CAPTCHA. New alternative approaches therefore add motion to the CAPTCHA: animating the distorted characters and the background introduces another dimension for character-cracking algorithms to cope with, supported by tracking resistance mechanisms that prevent attackers from identifying the answer through frame-to-frame analysis. These techniques are used in many new CAPTCHA schemes, including the Yahoo CAPTCHA, CAPTCHANIM, KillBot CAPTCHAs, the non-standard CAPTCHA and NuCAPTCHA. Our first question: can the animated techniques in these new schemes provide the required level of robustness against attacks? Our examination shows that many schemes using animated features can be broken through tracking attacks, including schemes that employ complicated tracking resistance mechanisms. The second question: can the segmentation resistance mechanisms used in the latest standard text-based CAPTCHA schemes, which animated schemes lack, still provide the required level of resistance against attacks? Our tests against the latest versions of ReCAPTCHA and the Wikipedia CAPTCHA exposed vulnerabilities to novel attack mechanisms that achieved a high success rate against them. 
The third question: how much design space is available for an animated text-based CAPTCHA scheme that provides a good balance between security and usability? We designed a new animated text-based CAPTCHA using guidelines derived from the results of our attacks on standard and animated text-based CAPTCHAs, and then tested its security and usability to answer this question. In this thesis, we put forward different approaches to examining the robustness of animated text-based CAPTCHA schemes and standard text-based CAPTCHA schemes against segmentation and tracking attacks. Our attacks include several methodologies for distinguishing the animated text from other animated noise, including text distorted by strong tracking resistance mechanisms that display it partially as animated segments resembling the noise in other CAPTCHA schemes. They also include novel attack mechanisms and a recognition engine supported by attack methods that exploit the identified invariants to recognise connected characters at once. From these attacks we derived guidelines for animated text-based CAPTCHAs that resist tracking and segmentation attacks, which we followed in the scheme we designed and tested for security and usability, as mentioned before. Our research also contributes a toolbox for breaking CAPTCHAs, in addition to a list of robustness and usability issues in current CAPTCHA designs that can provide a better understanding of how to design a more resistant CAPTCHA scheme
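The frame-to-frame analysis that these tracking attacks build on can be illustrated with a minimal sketch. Everything here is invented for illustration (the `motion_mask` helper, the synthetic frames, and the threshold are not from the thesis): pixels whose intensity changes between consecutive frames are flagged, separating a moving glyph from a static background.

```python
def motion_mask(frames, threshold=30):
    """Flag pixels whose intensity changes between consecutive frames --
    a first step a tracking attack could use to separate animated text
    from a static background."""
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[False] * w for _ in range(h)]
    for a, b in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(b[y][x] - a[y][x]) > threshold:
                    mask[y][x] = True
    return mask

# Synthetic example: a bright two-pixel "glyph" drifts right one pixel per frame
def frame(t, h=6, w=8):
    f = [[0] * w for _ in range(h)]
    f[2][t] = f[2][t + 1] = 255
    return f

frames = [frame(t) for t in range(3)]
mask = motion_mask(frames)
```

A real attack would additionally track the flagged region over time to reassemble the partially displayed characters; this sketch only shows the motion-segmentation starting point.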

    Seeing the full picture: the case for extending security ceremony analysis

    The concept of the security ceremony was introduced a few years ago to complement the concept of the security protocol with everything about the context in which a protocol is run. In particular, such context involves the human executors of a protocol. When including human actors, human protocols become the focus, hence the concept of the security ceremony can be seen as part of the domain of socio-technical studies. This paper addresses the problem of ceremony analysis lacking the full view of human protocols. This paper categorises existing security ceremony analysis work and illustrates how the ceremony picture could be extended to support a more comprehensive analysis. The paper explores recent weaknesses found in Amazon's web interface to illustrate different approaches to the analysis of the full ceremony picture

    Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints

    The continuous stream of videos that are uploaded and shared on the Internet has been leveraged by computer vision researchers for a myriad of detection and retrieval tasks, including gesture detection, copy detection, face authentication, etc. However, the existing state-of-the-art event detection and retrieval techniques fail to deal with several real-world challenges (e.g., low resolution, low brightness and noise) under adversarial constraints. This dissertation focuses on these challenges in realistic scenarios and demonstrates practical methods to address the problem of robustness and efficiency within video event detection and retrieval systems in five application settings (namely, CAPTCHA decoding, face liveness detection, reconstructing typed input on mobile devices, video confirmation attack, and content-based copy detection). Specifically, for CAPTCHA decoding, I propose an automated approach which can decode moving-image object recognition (MIOR) CAPTCHAs faster than humans. I show that not only are there inherent weaknesses in current MIOR CAPTCHA designs, but that several obvious countermeasures (e.g., extending the length of the codeword) are not viable. More importantly, my work highlights the fact that the choice of underlying hard problem selected by the designers of a leading commercial solution falls into a solvable subclass of computer vision problems. For face liveness detection, I introduce a novel approach to bypass modern face authentication systems. More specifically, by leveraging a handful of pictures of the target user taken from social media, I show how to create realistic, textured, 3D facial models that undermine the security of widely used face authentication solutions. 
My framework makes use of virtual reality (VR) systems, incorporating along the way the ability to perform animations (e.g., raising an eyebrow or smiling) of the facial model, in order to trick liveness detectors into believing that the 3D model is a real human face. I demonstrate that such VR-based spoofing attacks constitute a fundamentally new class of attacks that point to serious weaknesses in camera-based authentication systems. For reconstructing typed input on mobile devices, I propose a method that successfully transcribes the text typed on a keyboard by exploiting video of the user typing, even from significant distances and from repeated reflections. This feat allows us to reconstruct typed input from the image of a mobile phone's screen on a user's eyeball as reflected through a nearby mirror, extending the privacy threat to include situations where the adversary is located around a corner from the user. To assess the viability of a video confirmation attack, I explore a technique that exploits the emanations of changes in light to reveal the programs being watched. I leverage the key insight that the observable emanations of a display (e.g., a TV or monitor) during presentation of the viewing content induce a distinctive flicker pattern that can be exploited by an adversary. My proposed approach works successfully in a number of practical scenarios, including (but not limited to) observations of light effusions through windows, on the back wall, or off the victim's face. My empirical results show that I can successfully confirm hypotheses while capturing short recordings (typically less than 4 minutes long) of the changes in brightness from the victim's display from a distance of 70 meters. Lastly, for content-based copy detection, I take advantage of a new temporal feature to index a reference library in a manner that is robust to the popular spatial and temporal transformations in pirated videos. 
My technique narrows the detection gap in the important area of temporal transformations applied by would-be pirates. My large-scale evaluation on real-world data shows that I can successfully detect infringing content from movies and sports clips with 90.0% precision at a 71.1% recall rate, and can achieve that accuracy at an average time expense of merely 5.3 seconds, outperforming the state of the art by an order of magnitude
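The flicker-pattern insight behind the video confirmation attack can be sketched in a few lines. This is a simplification invented for illustration (the helper names, the Pearson-correlation test, and the synthetic signals are not from the dissertation): the per-frame mean brightness of a recording is compared against the flicker signature of candidate content, and a strong correlation confirms the hypothesis.

```python
from statistics import mean, pstdev

def brightness_signature(frames):
    """Per-frame mean luminance: the coarse 'flicker' signal that leaks
    through windows or off a wall while content plays on a display."""
    return [mean(mean(row) for row in f) for f in frames]

def correlation(a, b):
    """Pearson correlation between two flicker signatures."""
    ma, mb = mean(a), mean(b)
    return (sum((x - ma) * (y - mb) for x, y in zip(a, b))
            / (len(a) * pstdev(a) * pstdev(b)))

# Hypothetical signatures: the observed signal is a dimmed, offset copy of
# the reference content's flicker, so it correlates strongly despite the
# attenuation; an unrelated signal does not
reference = [10, 200, 30, 180, 20, 190]
observed = [0.5 * x + 40 for x in reference]
unrelated = list(reversed(reference))
```

Pearson correlation is deliberately invariant to the brightness scaling and offset introduced by indirect observation, which is why a dimmed reflection can still match the reference.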

    Implementation of Captcha as Graphical Passwords For Multi Security

    To validate human users, passwords play a vital role in computer security. Graphical passwords can offer more security than text-based passwords because the user relies on images rather than typed strings. Typical users choose memorable passwords, which are easy to guess, while harder-to-guess passwords involve more mathematical or computational complexity. Building on hard AI problems, a new Captcha technology known as Captcha as Graphical Password (CaRP), from a novel family of graphical password systems, has been developed. CaRP is both a Captcha and a graphical password scheme in one. CaRP addresses security issues such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. PassPoints, a methodology from CaRP, addresses the image hotspot problem in graphical password systems, which leads to weak passwords. CaRP also implements a combination of images or colors with text to generate session passwords, which help authentication because a new password is generated for every session and used only once. CaRP thus provides inexpensive security and usability and improves online security. CaRP is not a panacea; however, it gives protection and usability to some online applications for improving online security
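The one-time session-password idea can be made concrete with a small sketch. This is an illustrative construction, not CaRP's actual algorithm: the `session_password` helper, the hashing scheme, and the click coordinates are all invented for the example. The point it demonstrates is that hashing the user's image-click sequence with a fresh per-session salt yields a value that never repeats across logins.

```python
import hashlib
import secrets

def session_password(click_points, salt=None):
    """Illustrative one-time session password: the user's click sequence
    on the challenge image is hashed with a fresh per-session salt, so
    the value sent over the wire differs on every login."""
    if salt is None:
        salt = secrets.token_hex(8)
    material = salt + "|" + ";".join(f"{x},{y}" for x, y in click_points)
    return salt, hashlib.sha256(material.encode()).hexdigest()[:12]

clicks = [(12, 40), (88, 17), (55, 63)]  # hypothetical click coordinates
```

A replayed password is useless because the server issues a new salt per session; only someone who knows the underlying click sequence can answer the next challenge.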

    Application of information theory and statistical learning to anomaly detection

    In today's highly networked world, computer intrusions and other attacks are a constant threat. The detection of such attacks, especially attacks that are new or previously unknown, is important to secure networks and computers. A major focus of current research efforts in this area is on anomaly detection. In this dissertation, we explore applications of information theory and statistical learning to anomaly detection. Specifically, we look at two difficult detection problems in network and system security: (1) detecting covert channels, and (2) determining whether a user is a human or a bot. We link both of these problems to entropy, a measure of randomness, information content, or complexity that is central to information theory. The behavior of bots is low in entropy when tasks are rigidly repeated, or high in entropy when behavior is pseudo-random; in contrast, human behavior is complex and medium in entropy. Similarly, covert channels either create regularity, resulting in low entropy, or encode extra information, resulting in high entropy, whereas legitimate traffic is characterized by complex interdependencies and moderate entropy. In addition, we utilize statistical learning algorithms, namely Bayesian learning, neural networks, and maximum likelihood estimation, in both modeling and detecting covert channels and bots. Our results using entropy and statistical learning techniques are excellent. By using entropy to detect covert channels, we detected three different covert timing channels that were not detected by previous detection methods. Then, using entropy and Bayesian learning to detect chat bots, we detected 100% of chat bots with a false positive rate of only 0.05% in over 1400 hours of chat traces. Lastly, using neural networks and the idea of human observational proofs to detect game bots, we detected 99.8% of game bots with no false positives in 95 hours of traces. 
Our work shows that a combination of entropy measures and statistical learning algorithms is a powerful and highly effective tool for anomaly detection
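The entropy contrast described above can be sketched concretely. The delay values below are invented for illustration; a real detector would bin measured inter-message times before computing the statistic. A rigidly periodic bot yields near-zero Shannon entropy, while varied human timing yields a higher value.

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of a discrete sequence, e.g.
    binned inter-message delays from a chat trace."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A bot repeating a task on a fixed timer produces one delay value over
# and over; human timing spreads across many values
bot_delays = [5, 5, 5, 5, 5, 5, 5, 5]
human_delays = [2, 7, 3, 9, 4, 6, 8, 1]
```

A detector would flag sequences at either extreme: near-zero entropy (rigid repetition) or near-maximal entropy (pseudo-random cover traffic), with legitimate human behavior falling in between.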

    WARDOG: Awareness detection watchdog for Botnet infection on the host device

    Botnets constitute one of the most dangerous security threats worldwide today. High volumes of infected machines are controlled by a malicious entity and perform coordinated cyber-attacks. The problem will become even worse in the era of the Internet of Things (IoT), as the number of insecure devices is set to increase exponentially. This paper presents WARDOG, an awareness and digital forensic system that informs the end-user of a botnet's infection, exposes the botnet infrastructure, and captures verifiable data that can be utilized in a court of law. The responsible authority gathers all information and automatically generates unitary documentation for the case. The document contains undisputed forensic information, tracking all involved parties and their roles in the attack. The deployed security mechanisms and the overall administration setting ensure non-repudiation of performed actions and enforce accountability. The provided properties are verified through theoretic analysis. In a simulated environment, the effectiveness of the proposed solution in mitigating botnet operations is also tested against real attack strategies that have been captured by the FORTHcert honeypots, overcoming state-of-the-art solutions. Moreover, a preliminary version is implemented on real computers and IoT devices, highlighting the low computational/communication overheads of WARDOG in the field

    Combating Robocalls to Enhance Trust in Converged Telephony

    Telephone scams are on the rise, and without effective countermeasures there is no stopping them. The number of scam/spam calls people receive is increasing every day. YouMail estimates that June 2021 saw 4.4 billion robocalls in the United States, and the Federal Trade Commission (FTC) phone complaint portal receives millions of complaints about such fraudulent and unwanted calls each year. Voice scams have become such a serious problem that people often no longer pick up calls from unknown callers. In several widely reported scams, the telephony channel is either used directly to reach potential victims or used to monetize scams that are advertised online, as in the case of tech support scams. The vision of this research is to bring trust back to the telephony channel. We believe this can be done by stopping unwanted and fraudulent calls and by leveraging smartphones to offer a novel interaction model that enhances trust in voice interactions. Thus, our research explores defenses against unwanted calls that include blacklisting known fraudulent callers, detecting robocalls in the presence of caller ID spoofing, and proposing a novel virtual assistant that can stop more sophisticated robocalls without user intervention. We first explore phone blacklists to stop unwanted calls based on the caller ID received when a call arrives. We study how to automatically build blacklists from multiple data sources and evaluate the effectiveness of such blacklists in stopping current robocalls. We also use insights gained from this process to improve the detection of more sophisticated robocalls and the robustness of our defense system against malicious callers who use techniques like caller ID spoofing. To address the threat model where the caller ID is spoofed, we introduce the notion of a virtual assistant. 
To this end, we developed a smartphone-based app named RobocallGuard which can pick up calls from unknown callers on behalf of the user and detect and filter out unwanted calls. We conducted a user study showing that users are comfortable with a virtual assistant stopping unwanted calls on their behalf; moreover, most users reported that such a virtual assistant is beneficial to them. Finally, we expand our threat model and introduce RobocallGuardPlus, which can effectively block targeted robocalls. RobocallGuardPlus also picks up calls from unknown callers on behalf of the callee and engages in a natural conversation with the caller, using a combination of NLP-based machine learning models to determine whether the caller is a human or a robocaller. To the best of our knowledge, we are the first to develop such a defense system that can interact with the caller and detect robocalls even when robocallers use caller ID spoofing and voice activity detection to bypass the defense mechanism. Our security analysis shows that such a system is capable of stopping the more sophisticated robocallers that might emerge in the near future. By making these contributions, we believe we can bring trust back to the telephony channel and provide a better call experience for everyone
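Building a blacklist from multiple data sources hinges on normalizing caller IDs so that entries from different feeds compare consistently. The sketch below is invented for illustration (the feed names, the `Blacklist` class, and the US-centric 10-digit normalization are assumptions, not the dissertation's implementation):

```python
import re

def normalize(number):
    """Reduce a caller ID string to bare digits (dropping '+1', spaces,
    and punctuation) so entries from different feeds match."""
    digits = re.sub(r"\D", "", number)
    return digits[-10:]  # keep the 10-digit US national number

class Blacklist:
    def __init__(self, sources):
        # Union of numbers reported across feeds (complaint portals,
        # honeypots, ...), stored in normalized form
        self.numbers = {normalize(n) for feed in sources for n in feed}

    def should_block(self, caller_id):
        return normalize(caller_id) in self.numbers

# Hypothetical feeds with inconsistent formatting of the same numbers
ftc_feed = ["+1 (404) 555-0100", "404-555-0111"]
honeypot_feed = ["4045550100", "678 555 0123"]
bl = Blacklist([ftc_feed, honeypot_feed])
```

As the abstract notes, such a lookup is only a first line of defense: it fails once the caller ID is spoofed, which motivates the virtual-assistant approach that engages the caller directly.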