7 research outputs found

    Embedded noninteractive continuous bot detection

    Multiplayer online computer games are quickly growing in popularity, with millions of players logging in every day. While most play in accordance with the rules set up by the game designers, some choose to utilize artificially intelligent assistant programs, a.k.a. bots, to gain an unfair advantage over other players. In this article we demonstrate how an embedded noninteractive test can be used to prevent automatic artificially intelligent players from illegally participating in online game-play. Our solution has numerous advantages over traditional tests, such as its nonobtrusive nature, continuous verification, and simple noninteractive and outsourcing-proof design. © 2008 ACM
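
    The abstract describes the general idea (a continuous, noninteractive check embedded in the game) without implementation details. The Python sketch below is purely illustrative and is not the test from the paper: it assumes a hypothetical signal (near-zero jitter in inter-action timing) simply to show how a continuous check can run inside a game loop without interrupting play.

        # Illustrative sketch only; the timing-jitter statistic is a hypothetical
        # stand-in, not the detection test described in the paper.
        import statistics
        import time

        class ContinuousBotCheck:
            def __init__(self, window=50, min_jitter_ms=5.0):
                self.window = window                # number of recent actions to keep
                self.min_jitter_ms = min_jitter_ms  # humans rarely act with near-zero jitter
                self.timestamps = []

            def record_action(self):
                # Called from the game loop whenever the player acts; no extra
                # interaction is required from the player.
                self.timestamps.append(time.monotonic() * 1000.0)
                self.timestamps = self.timestamps[-self.window:]

            def looks_automated(self):
                if len(self.timestamps) < self.window:
                    return False  # not enough evidence yet
                gaps = [b - a for a, b in zip(self.timestamps, self.timestamps[1:])]
                return statistics.pstdev(gaps) < self.min_jitter_ms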

    “Could You Define That in Bot Terms?”: Requesting, Creating and Using Bots on Reddit

    Bots are estimated to account for well over half of all web traffic, yet they remain an understudied topic in HCI. In this paper we present the findings of an analysis of 2284 submissions across three discussion groups dedicated to the request, creation and discussion of bots on Reddit. We set out to examine the qualities and functionalities of bots and the practical and social challenges surrounding their creation and use. Our findings highlight the prevalence of misunderstandings around the capabilities of bots, misalignments in discourse between novices who request and more expert members who create them, and the prevalence of requests that are deemed to be inappropriate for the Reddit community. In discussing our findings, we suggest future directions for the design and development of tools that support more carefully guided and reflective approaches to bot development for novices, and tools to support exploring the consequences of contextually-inappropriate bot ideas.

    Evaluating the usability and security of a video CAPTCHA

    A CAPTCHA is a variation of the Turing test, in which a challenge is used to distinguish humans from computers ('bots') on the internet. They are commonly used to prevent the abuse of online services. CAPTCHAs discriminate using hard artificial intelligence problems: the most common type requires a user to transcribe distorted characters displayed within a noisy image. Unfortunately, many users find them frustrating, and break rates as high as 60% have been reported (for Microsoft's Hotmail). We present a new CAPTCHA in which users provide three words ('tags') that describe a video. A challenge is passed if a user's tag belongs to a set of automatically generated ground-truth tags. In an experiment, we were able to increase human pass rates for our video CAPTCHAs from 69.7% to 90.2% (184 participants over 20 videos). Under the same conditions, the pass rate for an attack submitting the three most frequent tags (estimated over 86,368 videos) remained nearly constant (5% over the 20 videos, roughly 12.9% over a separate sample of 5146 videos). Challenge videos were taken from YouTube.com. For each video, 90 tags were added from related videos to the ground-truth set; security was maintained by pruning all tags with a frequency above 0.6%. Tag stemming and approximate matching were also used to increase human pass rates. Only 20.1% of participants preferred text-based CAPTCHAs, while 58.2% preferred our video-based alternative. Finally, we demonstrate how our technique for extending the ground-truth tags allows for different usability/security trade-offs, and discuss how it can be applied to other types of CAPTCHAs.
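
    The pass/fail check described above (a challenge passes if any user tag matches the extended ground-truth set, with stemming and approximate matching) can be sketched in a few lines. The following Python is illustrative only: the suffix-stripping stemmer, the similarity threshold, and the interpretation of the 0.6% pruning cutoff are assumptions, not the authors' implementation.

        # Illustrative sketch of the tag-matching verification step; the stemmer,
        # thresholds, and data structures are assumptions for illustration.
        from difflib import SequenceMatcher

        def stem(tag):
            # Crude stand-in for a real stemmer: strip a few common suffixes.
            for suffix in ("ing", "es", "s"):
                if tag.endswith(suffix) and len(tag) > len(suffix) + 2:
                    return tag[:-len(suffix)]
            return tag

        def approx_match(a, b, threshold=0.8):
            # Approximate string matching via a similarity ratio.
            return SequenceMatcher(None, a, b).ratio() >= threshold

        def build_ground_truth(video_tags, related_tags, tag_frequency, max_freq=0.006):
            # Extend the video's own tags with tags from related videos, then prune
            # tags that are too frequent overall (frequent tags aid guessing attacks).
            candidates = set(video_tags) | set(related_tags)
            return {stem(t.lower()) for t in candidates if tag_frequency.get(t, 0.0) <= max_freq}

        def challenge_passed(user_tags, ground_truth):
            # Pass if any of the (typically three) user-supplied tags matches
            # a ground-truth tag after stemming and approximate matching.
            stems = [stem(t.lower().strip()) for t in user_tags]
            return any(approx_match(s, g) for s in stems for g in ground_truth)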

    Using Novel Image-based Interactional Proofs and Source Randomization for Prevention of Web Bots

    This work presents our efforts to prevent web bots from illegitimately accessing web resources. As the first technique, we present SEMAGE (SEmantically MAtching imaGEs), a new image-based CAPTCHA that capitalizes on the human ability to define and comprehend image content and to establish semantic relationships between images. As the second technique, we present NOID, a "NOn-Intrusive Web Bot Defense system" that creates a three-tiered defense against web automation programs, or web bots. NOID is a server-side technique that prevents web bots from accessing web resources by hiding the HTML elements of interest through randomization and obfuscation of the HTML responses.

    A SEMAGE challenge asks a user to select semantically related images from a given image set. SEMAGE has a two-factor design: in order to pass a challenge, the user is required to figure out the content of each image and then identify the semantic relationship between a subset of them. Most current state-of-the-art image-based systems, such as Asirra, only require the user to solve the first level, i.e., image recognition. Utilizing the semantic correlation between images to create more secure and user-friendly challenges is what makes SEMAGE novel. SEMAGE does not suffer from limitations of traditional image-based approaches, such as a lack of customization and adaptability, and unlike current text-based systems it is also very user-friendly, with a high fun factor. We conduct a first-of-its-kind large-scale user study involving 174 users to gauge and compare the accuracy and usability of SEMAGE with existing state-of-the-art CAPTCHA systems such as reCAPTCHA (text-based) and Asirra (image-based). The user study further supports our claims, showing that users achieve high accuracy with our system and consider it fun and easy to use.

    We also design NOID, a novel server-side and non-intrusive web bot defense system that prevents web bots from accessing web resources by inherently hiding and randomizing HTML elements. Specifically, to prevent web bots from uniquely identifying HTML elements for later automation, NOID randomizes the name/id parameter values of essential HTML elements such as "input textbox", "textarea" and "submit button" in each HTTP form page. In addition, to prevent powerful web bots from identifying special user-action HTML elements by analyzing the content of their accompanying "label text" HTML tags, we enhance NOID with a component, Label Concealer, which hides label indicators by replacing "label text" HTML tags with randomized images. To further prevent more powerful web bots from identifying HTML elements by recognizing their relative positions or surrounding elements in the web page, we enhance NOID with another component, Element Trapper, which obfuscates important HTML elements' surroundings by adding decoy elements without compromising usability. We evaluate NOID against five powerful state-of-the-art web bots, including XRumer, SENuke, Magic Submitter, Comment Blaster, and UWCS, on several popular open-source web platforms, including phpBB, Simple Machines Forum (SMF), and WordPress. According to our evaluation, NOID prevents all of these web bots from automatically sending spam on these platforms with reasonable overhead.
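
    As a concrete illustration of the name/id randomization idea behind NOID, the Python sketch below rewrites the name and id attributes of sensitive form fields to per-session random tokens and translates submitted data back on the server. It is a minimal sketch under assumed details: the attribute regex, the list of "sensitive" field names, and the session-map interface are illustrative choices, not NOID's actual implementation (which also includes the Label Concealer and Element Trapper components described above).

        # Minimal sketch of server-side name/id randomization for form fields.
        # Field names, token format, and the session map are assumptions.
        import re
        import secrets

        SENSITIVE_FIELDS = {"username", "email", "comment", "submit"}  # hypothetical list

        def randomize_form_ids(html, session_map):
            # Replace name="..."/id="..." values of sensitive fields with random
            # tokens, remembering the mapping for this session.
            def swap(match):
                attr, original = match.group(1), match.group(2)
                if original not in SENSITIVE_FIELDS:
                    return match.group(0)
                token = session_map.setdefault(original, "f_" + secrets.token_hex(8))
                return f'{attr}"{token}"'
            return re.sub(r'((?:name|id)=)"([^"]+)"', swap, html)

        def translate_submission(form_data, session_map):
            # Map randomized field names in a submission back to their originals.
            reverse = {token: original for original, token in session_map.items()}
            return {reverse.get(key, key): value for key, value in form_data.items()}

    A bot that hard-codes field names such as "username" would fail to locate the randomized fields, while the form still round-trips unchanged for human users.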