
    Crime Scene Re-investigation: A Postmortem Analysis of Game Account Stealers' Behaviors

    As item trading becomes more popular, users can exchange their game items or in-game money for real money more easily. At the same time, hackers turn their attention to stealing other users' game items or money, because this is much easier than earning money through traditional gold-farming with game bots. Game companies provide various security measures to block account-theft attempts, but many user-side measures are ignored by users because of their poor usability. In this study, we propose a server-side account-theft detection system based on action sequence analysis to protect game users from malicious hackers. We tested this system on a real Massively Multiplayer Online Role-Playing Game (MMORPG). By analyzing users' full gameplay logs, our system can identify the characteristic action sequences of hackers with high accuracy. It can also trace where the money stolen from victim accounts goes.
    Comment: 7 pages, 8 figures, in Proceedings of the 15th Annual Workshop on Network and Systems Support for Games (NetGames 2017)
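
    The abstract describes detecting theft from deviations in a player's action sequences. Below is a minimal, hypothetical sketch of that idea: sessions are summarised as action n-gram counts, and a session is flagged when its profile diverges from the account's history. The logs, feature choice, and threshold are illustrative assumptions, not the paper's actual system.

    # Minimal sketch of action-sequence-based theft detection (hypothetical data
    # and threshold; the paper's real feature set and classifier are not shown).
    from collections import Counter
    from math import sqrt

    def ngrams(actions, n=2):
        """Return a Counter of action n-grams for one play session."""
        return Counter(tuple(actions[i:i + n]) for i in range(len(actions) - n + 1))

    def cosine(a, b):
        """Cosine similarity between two sparse n-gram count vectors."""
        dot = sum(a[k] * b[k] for k in a)
        norm_a = sqrt(sum(v * v for v in a.values()))
        norm_b = sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def looks_stolen(history, session, threshold=0.3):
        """Flag a session whose n-gram profile diverges from the account's history."""
        return cosine(ngrams(history), ngrams(session)) < threshold

    # Hypothetical logs: the owner quests and trades, the intruder liquidates items.
    owner_log = ["login", "quest", "fight", "loot", "trade", "logout"] * 20
    intruder_log = ["login", "open_inventory", "mail_item", "mail_item", "mail_gold", "logout"]
    print(looks_stolen(owner_log, intruder_log))  # True for these toy inputs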

    Motivations, Classification and Model Trial of Conversational Agents for Insurance Companies

    Advances in artificial intelligence have renewed interest in conversational agents. So-called chatbots have reached maturity for industrial applications. German insurance companies are interested in improving their customer service and digitizing their business processes. In this work, we investigate the potential use of conversational agents in insurance companies by determining which classes of agents are of interest to insurers, identifying relevant use cases and requirements, and developing a prototype for an exemplary insurance scenario. Based on this approach, we derive key findings for implementing conversational agents in insurance companies.
    Comment: 12 pages, 6 figures, accepted for presentation at the International Conference on Agents and Artificial Intelligence 2019 (ICAART 2019)
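
    As a rough illustration of the kind of rule-based prototype such a scenario might use, the sketch below matches customer utterances against hand-written insurance intents. The intents, patterns, and replies are invented for illustration and are not taken from the paper's prototype.

    # Toy rule-based FAQ bot for a hypothetical insurance scenario.
    import re

    INTENTS = {
        "report_claim": (re.compile(r"\b(claim|accident|damage)\b", re.I),
                         "I can start a claim for you. What happened, and when?"),
        "policy_info": (re.compile(r"\b(policy|coverage|deductible)\b", re.I),
                        "Which of your policies would you like details on?"),
    }
    FALLBACK = "Sorry, I did not understand that. Could you rephrase?"

    def reply(utterance):
        """Return the canned answer of the first matching intent, else a fallback."""
        for pattern, answer in INTENTS.values():
            if pattern.search(utterance):
                return answer
        return FALLBACK

    print(reply("I had an accident and want to file a claim"))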

    Spyware vs. Spyware: Software Conflicts and User Autonomy


    FraudDroid: Automated Ad Fraud Detection for Android Apps

    Although mobile ad fraud is widespread, state-of-the-art approaches in the literature have mainly focused on detecting so-called static placement frauds, which involve only a single UI state and can be identified from static information such as the size or location of ad views. Other types of fraud involve multiple UI states and are performed dynamically while users interact with the app. Such dynamic interaction frauds, although now widespread in apps, have not yet been explored or addressed in the literature. In this work, we investigate a wide range of mobile ad frauds to provide a comprehensive taxonomy for the research community. We then propose FraudDroid, a novel hybrid approach to detecting ad fraud in Android apps. FraudDroid analyses apps dynamically to build UI state transition graphs and collects the associated runtime network traffic, which is then checked against a set of heuristic-based rules to identify fraudulent ad behaviours. We show empirically that FraudDroid detects ad fraud with high precision (93%) and recall (92%). Experimental results further show that FraudDroid is capable of detecting ad fraud across the spectrum of fraud types. By analysing 12,000 ad-supported Android apps, FraudDroid identified 335 cases of fraud associated with 20 ad networks; these were further confirmed to be true positives and have been shared with fellow researchers to promote advanced ad fraud detection.
    Comment: 12 pages, 10 figures
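
    To make the heuristic-rule idea concrete, here is a toy sketch that attaches observed ad traffic to UI states and applies two illustrative rules (hidden ads, ad flooding). The state model, rules, and thresholds are assumptions for illustration; FraudDroid's actual graph construction and rule set are more elaborate.

    # Toy heuristic ad-fraud checks over a simplified UI-state model.
    from dataclasses import dataclass, field

    @dataclass
    class UIState:
        name: str
        ad_requests: int = 0   # ad-network requests observed while in this state
        visible_ads: int = 0   # ad views actually rendered on screen
        successors: list = field(default_factory=list)

    def check_state(state):
        """Apply two illustrative rules and return the names of triggered frauds."""
        findings = []
        if state.ad_requests > 0 and state.visible_ads == 0:
            findings.append("hidden-ad fraud")    # traffic without a rendered ad view
        if state.ad_requests > 3:
            findings.append("ad-flooding fraud")  # implausibly many requests in one state
        return findings

    # Hypothetical trace: a splash screen silently fires five ad requests.
    splash = UIState("splash", ad_requests=5, visible_ads=0)
    main_screen = UIState("main", ad_requests=1, visible_ads=1)
    splash.successors.append(main_screen)

    for state in (splash, main_screen):
        print(state.name, check_state(state))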

    Slither.io Deep Learning Bot

    Recent advances in deep learning and computer vision techniques and algorithms inspired me to create a model application. The game environment used is Slither.io. The system has no previous understanding of the game and learns its surroundings through feature detection and deep learning. Unlike other agents, my bot dynamically learns from and reacts to its environment. It performs very well in the early game, when enemy encounters are few. It has difficulty transitioning to the middle and late game due to limited training time. I will continue to develop this algorithm.
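
    The thesis learns from the game through vision and deep learning; as a much simpler stand-in for that learn-from-feedback loop, the toy below uses a bandit-style value update over a hand-made "food direction" state. The environment, rewards, and parameters are invented for illustration and do not reproduce the bot's actual model.

    # Toy reward-feedback loop: learn which turn to make given where the food is.
    import random

    ACTIONS = ["left", "straight", "right"]
    values = {}  # (state, action) -> running value estimate

    def choose(state, eps=0.1):
        """Epsilon-greedy choice over the current value estimates."""
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: values.get((state, a), 0.0))

    def update(state, action, reward, alpha=0.5):
        """Move the value estimate toward the observed reward."""
        old = values.get((state, action), 0.0)
        values[(state, action)] = old + alpha * (reward - old)

    # Hypothetical environment: +1 for turning toward the food, -1 otherwise.
    for _ in range(500):
        food_direction = random.choice(["left", "right"])
        action = choose(food_direction)
        reward = 1.0 if action == food_direction else -1.0
        update(food_direction, action, reward)

    print({k: round(v, 2) for k, v in values.items()})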

    PROVOKE : Toxicity trigger detection in conversations from the top 100 subreddits

    Promoting healthy discourse on community-based online platforms like Reddit can be challenging, especially when conversations show ominous signs of toxicity. In this study, we therefore find the turning points (i.e., toxicity triggers) that make conversations toxic. Before finding toxicity triggers, we built and evaluated various machine learning models to detect toxicity in Reddit comments. We then used our best-performing model, a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model that achieved an area under the receiver operating characteristic curve (AUC) score of 0.983, to detect toxicity. Next, we constructed conversation threads and used the toxicity prediction results to build a training set for detecting toxicity triggers. This entailed using our large-scale dataset to refine the definition of toxicity triggers and to build a trigger detection dataset from 991,806 conversation threads drawn from the top 100 communities on Reddit. We then extracted a set of sentiment-shift, topical-shift, and context-based features from the trigger detection dataset and used them to build a dual-embedding biLSTM neural network that achieved an AUC score of 0.789. Our analysis of the trigger detection dataset showed that some triggering keywords, such as ‘racist’ and ‘women’, are common across all communities, whereas others are specific to certain communities, such as ‘overwatch’ in r/Games. The implication is that toxicity trigger detection algorithms can leverage generic approaches but must also tailor detection to specific communities.
    © 2022 Wuhan University. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
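
    As a minimal illustration of the trigger-detection step, the sketch below walks a thread with per-comment toxicity scores (standing in for the fine-tuned BERT output) and returns the comment preceding the first toxic reply. The threshold and example scores are assumptions; the paper's feature extraction and biLSTM model are not reproduced.

    # Toy trigger localisation over a scored conversation thread.
    def find_trigger(thread, scores, threshold=0.5):
        """Return (trigger_comment, toxic_reply) for the first toxic turn, else None.

        scores[i] is the toxicity probability of thread[i]; the comment that
        immediately precedes the first toxic reply is the trigger candidate.
        """
        for i, score in enumerate(scores):
            if score >= threshold and i > 0:
                return thread[i - 1], thread[i]
        return None

    # Hypothetical thread with made-up scores standing in for the BERT classifier.
    thread = ["Patch notes are out.", "Overwatch balance is a joke.", "You are an idiot."]
    scores = [0.02, 0.31, 0.94]
    print(find_trigger(thread, scores))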

    Customisable chatbot as a research instrument

    Abstract. Chatbots are proliferating rapidly online for a variety of purposes. This thesis presents a customisable chatbot that was designed and developed as a research instrument for online customer interaction research. The developed system supports the creation of different bot personas and provides data management tools and a fully functional online chat user interface. Customer-facing bots in the system are rule-based, with basic input processing and best-match text response selection. The system uses its own database to store user-chatbot dialogue history. Further, bots can be assigned unique dialogue scripts, and their profiles can be customised with a name, description, and profile image. In the presented validation study, participants first completed a task by taking part in a conversation with different bots, hosted by the system and invoked through distinct URL parameters. Second, the participants filled in a questionnaire on their experience with the bot, designed to reveal differences in how the bots were perceived. Our results suggest that the chatbot's personality affected how customers experienced the interactions. The developed system can therefore facilitate research scenarios that investigate participant responses to different chatbot personas. Future work will target a wider range of applications and enhanced response control.
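
    A best-match, rule-based selection like the one described can be sketched in a few lines: the example below picks the scripted reply whose trigger phrase is closest to the user input. The script entries are hypothetical, and the thesis system's persona handling and database are omitted.

    # Toy best-match response selection against a persona's dialogue script.
    import difflib

    SCRIPT = {
        "hello": "Hi! How can I help you today?",
        "what are your opening hours": "Our online service is available around the clock.",
        "i want to return a product": "Sure, I can start a return for you.",
    }

    def respond(user_input):
        """Pick the scripted reply whose trigger phrase best matches the input."""
        match = difflib.get_close_matches(user_input.lower(), SCRIPT.keys(), n=1, cutoff=0.3)
        return SCRIPT[match[0]] if match else "Sorry, I don't have an answer for that."

    print(respond("What are your opening hours?"))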

    Demystifying Social Bots: On the Intelligence of Automated Social Media Actors

    Recently, social bots, (semi-)automated accounts in social media, have gained global attention in the context of public opinion manipulation. Dystopian scenarios like the malicious amplification of topics, the spreading of disinformation, and the manipulation of elections through “opinion machines” have created headlines around the globe. As a consequence, much research effort has been put into the classification and detection of social bots. Yet it is still unclear how easily an average online media user can purchase social bots, which platforms they target, where they originate from, and how sophisticated these bots are. This work provides a much-needed new perspective on these questions. By providing insights into the markets for social bots on the clearnet and darknet, as well as an exhaustive analysis of freely available software tools for automation from the last decade, we shed light on the availability and capabilities of automated profiles on social media platforms. Our results confirm the increasing importance of social bot technology but also uncover an as-yet-unknown discrepancy between the theoretical and the practically achieved artificial intelligence of social bots: while the literature reports a high degree of intelligence for chatbots and assumes the same for social bots, the degree of intelligence observed in social bot implementations is limited. In fact, the overwhelming majority of available services and software are of a supportive nature and merely provide automation modules rather than fully fledged “intelligent” social bots.