Avatar CAPTCHA: telling computers and humans apart via face classification and mouse dynamics.
Bots are malicious, automated computer programs that execute scripts and predefined functions on an affected computer. They pose cybersecurity threats and are among the most sophisticated and common cybercrime tools today. They spread viruses, generate spam, steal sensitive personal information, rig online polls and commit other types of online crime and fraud. They sneak into unprotected systems through the Internet by seeking vulnerable entry points, and they access a system's resources just as a human user does. How do we counter this? How do we block bots while still allowing human users to access system resources? One solution is to design a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart): a program that can generate and grade tests that most humans can pass but computers cannot. CAPTCHAs are used to distinguish humans from malicious bots; they are a class of Human Interactive Proofs (HIPs) meant to be easily solvable by humans and economically infeasible for computers. Text CAPTCHAs are very popular and commonly used. For each challenge, they generate a sequence of characters by distorting standard fonts and ask users to identify and type them. However, they are vulnerable to character-segmentation attacks by bots, depend on the English language and are increasingly becoming too complex for people to solve. A solution is to design image CAPTCHAs, which use images instead of text and require users to identify certain images to solve the challenge. They are user-friendly and convenient for human users and a much more challenging problem for bots to solve. In today's Internet, user profiling and user identification have gained a great deal of significance: identity theft and similar abuses can be prevented by providing authorized access to resources. Achieving a timely response to a security breach requires frequent user verification.
However, this process must be passive, transparent and non-obtrusive. For such a system to be practical, it must be accurate, efficient and difficult to forge. Behavioral biometric systems are usually less prominent; however, they provide numerous significant advantages over traditional biometric systems. Collecting behavioral data is non-obtrusive and cost-effective, as it requires no special hardware. While these systems are not distinctive enough to provide reliable human identification, they have proven highly accurate in identity verification. In accomplishing everyday tasks, human beings use different styles and strategies and apply unique skills and knowledge; these define the behavioral traits of the user. Behavioral biometrics attempts to quantify these traits to profile users and establish their identity. Human-computer interaction (HCI)-based biometrics comprises the interaction strategies and styles between a human and a computer. These unique user traits are quantified to build profiles for identification. A specific category of HCI-based biometrics, known as Mouse Dynamics, is based on recording human interactions with the mouse as the input device. By monitoring the mouse usage a user produces while interacting with the GUI, a unique profile can be created that helps identify that user. Mouse-based verification approaches do not record sensitive user credentials such as usernames and passwords, and thus avoid privacy issues. An image CAPTCHA is proposed that incorporates Mouse Dynamics to help fortify it. It displays random images obtained from Yahoo's Flickr; to solve the challenge, the user must identify and select a certain class of images. Two theme-based challenges have been designed: Avatar CAPTCHA and Zoo CAPTCHA. The former displays human and avatar faces, whereas the latter displays different animal species.
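The challenge-and-grade loop described above can be sketched minimally. This is an illustrative sketch only: the image identifiers, grid size and number of targets are assumptions, not the authors' implementation, which draws real images from Flickr.

```python
import random

def build_challenge(target_pool, distractor_pool, grid_size=12, n_targets=4):
    """Assemble a CAPTCHA grid mixing target-class images (e.g. avatar faces)
    with distractors (e.g. human faces), as in the Avatar CAPTCHA theme."""
    targets = random.sample(target_pool, n_targets)
    distractors = random.sample(distractor_pool, grid_size - n_targets)
    grid = targets + distractors
    random.shuffle(grid)
    # The answer key is the set of grid positions holding target images.
    answer = {i for i, img in enumerate(grid) if img in targets}
    return grid, answer

def grade(selected, answer):
    """Pass only if the user selects exactly the target images."""
    return set(selected) == answer

# Hypothetical image identifiers standing in for Flickr images.
avatars = [f"avatar_{i}.jpg" for i in range(20)]
humans = [f"human_{i}.jpg" for i in range(20)]
grid, answer = build_challenge(avatars, humans)
```

A bot that selects images at random has only a small chance of matching the exact answer set, which is what makes the exact-match grading rule effective.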
In addition to the dynamically selected images, the way each user interacts with the mouse while attempting to solve the CAPTCHA, i.e. mouse clicks, mouse movements and mouse cursor screen co-ordinates, is recorded non-obtrusively at regular time intervals. These recorded mouse movements constitute the Mouse Dynamics Signature (MDS) of the user. The MDS provides an additional secure technique to segregate humans from bots. The security of the CAPTCHA is tested by an adversary executing a mouse bot that attempts to solve the CAPTCHA challenges.
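A minimal sketch of how such a Mouse Dynamics Signature might be assembled from passively sampled events follows. The event schema and the two features shown (mean cursor speed, mean interval between clicks) are illustrative assumptions; the work itself does not specify its feature set here.

```python
from dataclasses import dataclass

@dataclass
class MouseEvent:
    t: float      # timestamp in seconds
    x: int        # cursor screen x co-ordinate
    y: int        # cursor screen y co-ordinate
    click: bool   # True if this sample coincides with a mouse click

def mouse_signature(events):
    """Summarise a stream of sampled mouse events into simple MDS features:
    mean cursor speed and mean interval between clicks."""
    speeds = []
    for a, b in zip(events, events[1:]):
        dt = b.t - a.t
        if dt > 0:
            dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
            speeds.append(dist / dt)
    clicks = [e.t for e in events if e.click]
    gaps = [b - a for a, b in zip(clicks, clicks[1:])]
    return {
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "mean_click_gap": sum(gaps) / len(gaps) if gaps else 0.0,
    }

# A short simulated trace: steady movement at 100 px/s, clicks at t=0 and t=2.
trace = [MouseEvent(t / 10, 10 * t, 0, click=(t in (0, 20))) for t in range(21)]
sig = mouse_signature(trace)
```

A simple replay bot tends to produce unnaturally constant speeds and click intervals, which is one way such a signature could separate it from a human.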
Face recognition using statistical adapted local binary patterns.
Biometrics is the study of methods of recognizing humans based on their behavioral and physical characteristics or traits. Face recognition is one of the biometric modalities that has received a great amount of attention from researchers during the past few decades because of its potential applications in a variety of security domains. Face recognition, however, is not only concerned with recognizing human faces, but also with recognizing the faces of non-biological entities, or avatars. The need for secure and affordable virtual worlds is attracting the attention of many researchers who seek fast, automatic and reliable ways to identify virtual worlds' avatars. In this work, I propose new techniques for recognizing avatar faces, which can also be applied to recognize human faces. The proposed methods are based mainly on a well-known and efficient local texture descriptor, the Local Binary Pattern (LBP). I apply different versions of LBP, such as Hierarchical Multi-scale Local Binary Patterns and Adaptive Local Binary Pattern with Directional Statistical Features, in the wavelet space and discuss the effect of this application on the performance of each LBP version. In addition, I use a new version of LBP called the Local Difference Pattern (LDP) together with other well-known descriptors and classifiers to differentiate between human and avatar face images. The original LBP achieves a high recognition rate if the tested images are clean, but its performance degrades if the images are corrupted by noise. To deal with this problem, I propose a new definition of the original LBP in which the descriptor does not threshold all the neighborhood pixels against the central pixel value. Instead, a weight is computed for each pixel in the neighborhood, a new value is calculated for each pixel, and simple statistical operations are then used to compute a new threshold, which changes automatically based on the pixel values.
This threshold can be applied with the original LBP or any other version of LBP, and it can be extended to work with the Local Ternary Pattern (LTP) or any version of LTP to produce different versions of LTP for recognizing noisy avatar and human face images.
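The adaptive-threshold idea can be sketched as follows on a single 3x3 neighborhood. The uniform weighting and the mean statistic used here are illustrative assumptions; the abstract does not fix the exact weighting or statistic.

```python
def lbp_code(patch):
    """Original LBP on a 3x3 patch: threshold the 8 neighbours against the
    centre pixel and pack the comparison bits into one byte."""
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(ring) if p >= c)

def adaptive_lbp_code(patch, weights=None):
    """Adaptive variant: the threshold is a weighted statistic of the whole
    neighbourhood rather than the centre value alone, so a noisy centre pixel
    no longer corrupts every bit of the code."""
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    weights = weights or [1.0] * len(ring)
    threshold = sum(w * p for w, p in zip(weights, ring)) / sum(weights)
    return sum(1 << i for i, p in enumerate(ring) if p >= threshold)

patch = [[10, 20, 30],
         [40, 200, 60],   # centre pixel corrupted by noise (200)
         [70, 80, 90]]
```

With the noisy centre, the original LBP compares every neighbour against 200 and sets no bits, losing all texture information, while the adaptive threshold (the ring mean, 50) still separates the dark neighbours from the bright ones.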
Human-artificial intelligence approaches for secure analysis in CAPTCHA codes
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has long been used to keep automated bots from misusing web services by leveraging human-artificial intelligence (HAI) interactions to distinguish whether the user is a human or a computer program. Various CAPTCHA schemes have been proposed over the years, principally to increase usability and security against emerging bots and hackers performing malicious operations. However, automated attacks have effectively cracked all common conventional schemes, and the majority of present CAPTCHA methods are also vulnerable to human-assisted relay attacks. Invisible reCAPTCHA and some other approaches have not yet been cracked; however, with the introduction of fourth-generation bots that accurately mimic human behavior, a secure CAPTCHA can hardly be designed without additional special devices. Almost all cognitive-based CAPTCHAs with sensor support have not yet been compromised by automated attacks, but they remain vulnerable to human-assisted relay attacks because they offer a limited number of challenges and can only be solved using trusted devices. Cognitive-based CAPTCHA schemes therefore have an advantage over other schemes in the race against security attacks. In this study, as a strong starting point for creating future secure and usable CAPTCHA schemes, we offer an overview analysis of HAI between computer users and computers under the security aspects of open problems, difficulties, and opportunities of current CAPTCHA schemes.
The robustness of animated text CAPTCHAs
PhD Thesis. CAPTCHA is a standard security technology that uses AI techniques to tell computers and humans apart. The most widely used CAPTCHAs are text-based schemes. The robustness and usability of these CAPTCHAs rely mainly on the segmentation-resistance mechanism that provides robustness against individual character-recognition attacks. However, many CAPTCHAs have been shown to have critical flaws caused by exploitable invariants in their design, leaving only a few CAPTCHA schemes resistant to attacks, including ReCAPTCHA and the Wikipedia CAPTCHA.
Therefore, new alternative approaches that add motion to the CAPTCHA are used to add another dimension against character-cracking algorithms, by animating the distorted characters and the background. These are also supported by tracking-resistance mechanisms that prevent attacks from identifying the answer through frame-to-frame analysis. These technologies are used in many of the new CAPTCHA schemes, including the Yahoo CAPTCHA, CAPTCHANIM, KillBot CAPTCHAs, non-standard CAPTCHA and NuCAPTCHA.
Our first question: can the animated techniques included in the new CAPTCHA schemes provide the required level of robustness against attacks? Our examination has shown that many of the CAPTCHA schemes that use animated features can be broken through tracking attacks, including schemes that use complicated tracking-resistance mechanisms.
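A tracking attack of this kind can be illustrated in miniature: across animation frames, the answer characters recur in stable positions while decoy animation flickers, so accumulating per-pixel persistence across frames isolates the text. The binary-grid frame representation and the persistence rule below are illustrative assumptions, not the thesis's actual attack pipeline.

```python
def track_persistent_pixels(frames, min_fraction=0.8):
    """Given binary frames (lists of rows of 0/1), keep pixels that are set in
    at least min_fraction of frames: moving noise flickers, answer text persists."""
    h, w = len(frames[0]), len(frames[0][0])
    counts = [[0] * w for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                counts[y][x] += f[y][x]
    need = min_fraction * len(frames)
    return [[1 if counts[y][x] >= need else 0 for x in range(w)]
            for y in range(h)]

# Five tiny frames: a persistent vertical stroke (the "text") plus one noise
# pixel that moves to a different row every frame.
frames = []
for i in range(5):
    f = [[0] * 5 for _ in range(5)]
    for y in range(5):
        f[y][2] = 1          # persistent stroke at column 2
    f[i][4] = 1              # moving noise pixel
    frames.append(f)
result = track_persistent_pixels(frames)
```

After accumulation, only the stroke at column 2 survives the persistence threshold; the recovered mask could then be fed to an ordinary character recogniser, which is the sense in which animation alone adds little robustness.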
The second question: can the segmentation-resistance mechanism used in the latest standard text-based CAPTCHA schemes still provide the additional required level of resistance against attacks that is missing in animated schemes? Our tests against the latest version of ReCAPTCHA and the Wikipedia CAPTCHA exposed vulnerabilities to novel attack mechanisms, which achieved a high success rate against them.
The third question: how much design space is available for an animated text-based CAPTCHA scheme that provides a good balance between security and usability? We designed a new animated text-based CAPTCHA using guidelines derived from the results of our attacks on standard and animated text-based CAPTCHAs, and we then tested its security and usability to answer this question.
In this thesis, we put forward different approaches to examining the robustness of animated text-based CAPTCHA schemes and standard text-based CAPTCHA schemes against segmentation and tracking attacks. Our attacks include several methodologies for distinguishing the animated text from other animated noise, including text distorted by strong tracking-resistance mechanisms that display it only partially, as animated segments that look similar to the noise in other CAPTCHA schemes. These attacks also include novel attack mechanisms, as well as mechanisms that use a recognition engine supported by attack methods that exploit the identified invariants to recognise the connected characters at once. Our attacks also yield guidelines for animated text-based CAPTCHAs that could resist tracking and segmentation attacks, which we designed and tested in terms of security and usability, as mentioned before. Our research further contributes a toolbox for breaking CAPTCHAs, together with a list of robustness and usability issues in current CAPTCHA design that can provide a better understanding of how to design a more resistant CAPTCHA scheme.
Selected Computing Research Papers Volume 1 June 2012
An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku) ..................................... 1
A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown) .............................................................................. 7
An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns) .................................................................................................... 13
An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine) ............ 19
A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains) ............................................................................................... 29
An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh) .............. 39
An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock) ............................................................................................................... 45
An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh) .......................................................................................................... 51
A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig) .................................................................................................... 5
Facial re-enactment, speech synthesis and the rise of the Deepfake
Emergent technologies in the fields of audio speech synthesis and video facial manipulation have the potential to drastically impact our societal patterns of multimedia consumption. At a time when social media and internet culture are plagued by misinformation, propaganda and "fake news", their latent misuse represents a looming threat to fragile systems of information sharing and social democratic discourse. It has thus become increasingly recognised in both academic and mainstream journalism that the ramifications of these tools must be examined to determine what they are and how their widespread availability can be managed.
This research project seeks to examine four emerging software programs (Face2Face, FakeApp, Adobe VoCo and Lyrebird) that are designed to facilitate the synthesis of speech and the manipulation of facial features in videos. I will explore their positive industry applications and the potentially negative consequences of their release into the public domain. Consideration will be directed to how such consequences and risks can be ameliorated through detection, regulation and education. A final analysis of these three competing threads will then attempt to address whether the practical and commercial applications of these technologies are outweighed by the inherently unethical or illegal uses they engender and, if so, what we can do in response.
Virtual YouTubers' Self-Representation Between Extended and Divided Self
Virtual and Artificial YouTubers (VTubers) show us how the body becomes technologically embedded. They reveal arising complexities within the interface of digital and analog assemblies, bodies, and virtual environments. Thus, VTubers raise questions that are crucial to the core debate about personhood and the human subject in anthropology as well as critical posthumanism. By reading Feminist Anthropology and Critical Posthumanism dos-à-dos, the thesis engages with the three VTubers AI Angelica, CodeMiko, and Miquela Sousa. To answer the questions of (1) how personhood unfolds in the VTubers' self-representation(s), (2) how personhood is negotiated with the recipients, and (3) which aspects of the human subject (e.g., gender, race) are reproduced, a methodological framework of Netnography and Critical Technocultural Discourse Analysis is applied. The thesis reveals that VTubers show a form of personhood in which the reflective self appears and speaks apart from the "I." This division reflects practices of self-designation used to navigate between the extended self and the divided self; between the content creator and the avatar; between the platform and the VTuber. In this way, the self manifests simultaneously in the form of overlaps and displacements. Within this form of relationality, the notion of the glitch is reviewed to consider the VTuber's personhood with respect to the discussion between critical posthumanist and humanist perspectives.