
    Prevention vs detection in online game cheating

    Abstract. Cheating is a major problem in online games, but solving it typically requires a complicated architecture design, costly third-party anti-cheat software, or both. This paper explores the differences between preventive and detective solutions against online game cheating. Specifically, it examines solutions to software-based cheating, the kinds of cheats that exist, and the solutions that have been proposed and implemented. The paper uses a literature review as its methodology, drawing on relevant papers from databases such as ResearchGate, ACM, and IEEE. It concludes that a good prevention strategy during the game development phase is adequate to mitigate and prevent cheating, but that appropriate anti-cheat software is still required to maintain fairness over the lifetime of the game. The importance of an online game's network architecture in preventing cheating became apparent after comparing the benefits of each type side by side. The results show that, because a peer-to-peer architecture has no trusted centralized authority, such a game must rely more heavily on anti-cheat software to prevent and detect cheating. The paper could not conclude what appropriate anti-cheat software is, because that topic is outside its scope and lacks public data; still, it raises the question of whether a more aggressive anti-cheat strategy is suitable for a given game.
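    As a purely illustrative sketch (not taken from the paper), the snippet below shows the kind of preventive, authority-based validation that a client-server architecture enables and that a peer-to-peer design lacks: the server, acting as the trusted authority the abstract refers to, rejects physically impossible position updates. The function name validate_move and the MAX_SPEED threshold are assumptions made for illustration only.

        # Illustrative only: a minimal server-authoritative movement check, assuming a
        # client-server architecture in which the server is the trusted authority.
        # All names and thresholds here are hypothetical, not taken from the paper.
        import math
        import time

        MAX_SPEED = 7.0  # assumed maximum legal movement speed, in units per second


        def validate_move(last_pos, new_pos, last_time, now=None):
            """Reject position updates that imply impossible speed (e.g. teleport or speed hacks)."""
            now = time.monotonic() if now is None else now
            dt = max(now - last_time, 1e-3)        # avoid division by zero
            dist = math.dist(last_pos, new_pos)    # Euclidean distance moved
            if dist / dt > MAX_SPEED:
                return False                       # prevention: the authoritative server refuses the move
            return True


        # Example: a 50-unit jump in 0.1 s is rejected, a small step is accepted.
        print(validate_move((0.0, 0.0), (50.0, 0.0), last_time=0.0, now=0.1))  # False
        print(validate_move((0.0, 0.0), (0.5, 0.0), last_time=0.0, now=0.1))   # True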

    Comparison between Famous Game Engines and Eminent Games

    Nowadays, game engines are essential for building 3D applications and games, because they considerably reduce the resources needed to implement necessary but intricate functionality. This paper explains what a game engine is and what its main components are, and surveys popular games developed with well-known engines. It presents a number of contemporary engine-built games in terms of their features and development process, and compares their specifications side by side.

    An assessment of deep learning models and word embeddings for toxicity detection within online textual comments

    Today, increasing numbers of people interact online, and a large volume of textual comments is produced by this explosion of online communication. A paramount inconvenience of online environments, however, is that comments shared on digital platforms can hide hazards such as fake news, insults, harassment and, more generally, comments that may hurt someone’s feelings. In this scenario, detecting this kind of toxicity plays an important role in moderating online communication. Deep learning technologies have recently delivered impressive performance in Natural Language Processing applications, including Sentiment Analysis and emotion detection, across numerous datasets. Such models do not need pre-defined, hand-picked features; they learn sophisticated features from the input datasets by themselves. In this domain, word embeddings have been widely and effectively used to represent words in Sentiment Analysis tasks. Therefore, in this paper we investigate the use of deep learning and word embeddings to detect six different types of toxicity in online comments, evaluating the most suitable deep learning layers and state-of-the-art word embeddings for identifying toxicity. The results suggest that Long Short-Term Memory layers in combination with mimicked word embeddings are a good choice for this task.
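    As a rough sketch of the kind of architecture the abstract describes (an LSTM layer over word embeddings with one sigmoid output per toxicity type), the following Keras snippet is illustrative only: the vocabulary size, sequence length, and layer sizes are assumptions, and the paper's mimicked embeddings would be supplied to the Embedding layer as pre-trained weights rather than trained from scratch as here.

        # Illustrative sketch of an LSTM-based multi-label toxicity classifier.
        # Hyperparameters (vocab size, sequence length, dimensions) are assumptions.
        import tensorflow as tf

        VOCAB_SIZE = 50_000    # assumed vocabulary size
        SEQ_LEN = 200          # assumed maximum comment length in tokens
        EMB_DIM = 300          # typical dimensionality of pre-trained word embeddings
        NUM_LABELS = 6         # six toxicity types, one sigmoid output each

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(SEQ_LEN,)),
            # In the paper's setting, pre-trained ("mimicked") embeddings would be
            # loaded into this layer; here it is simply trained from scratch.
            tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM),
            tf.keras.layers.LSTM(128),
            tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),
        ])

        # Multi-label setup: binary cross-entropy, one independent probability per label.
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
        model.summary()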

    Passage à l'échelle pour les mondes virtuels (Scalability for virtual worlds)

    Virtual worlds attract millions of users, and these popular applications, supported by gigantic data centers with myriads of processors, are routinely accessed. Surprisingly, however, virtual worlds are still unable to host more than a few hundred simultaneous users in the same contiguous space. The main contribution of this thesis is Kiwano, a distributed system enabling an unlimited number of avatars to simultaneously evolve and interact in a contiguous virtual space. In Kiwano we employ the Delaunay triangulation to provide each avatar with a constant number of neighbors independently of their density or distribution. The avatar-to-avatar interactions and related computations are then bounded, allowing the system to scale. The load is constantly balanced among Kiwano's nodes, which adapt and take charge of sets of avatars according to their geographic proximity. The optimal number of avatars per CPU and the performance of our system have been evaluated by simulating tens of thousands of avatars connecting to a Kiwano instance running across several data centers, with results well beyond the current state of the art. We also propose Kwery, a distributed spatial index capable of scaling to the dynamic objects of virtual worlds. Kwery performs efficient reverse geolocation queries on large numbers of moving objects that update their position at arbitrarily high frequencies. We use a distributed spatial index on top of a self-adaptive tree structure. Each node of the system hosts and answers queries on a group of objects in a zone, which is the minimal axis-aligned rectangle enclosing them; zones are chosen based on object proximity and the load of the node. Spatial queries are then answered only by the nodes with meaningful zones, that is, those whose zone intersects the query zone. Kiwano has been successfully implemented for HybridEarth, a mixed-reality world, and Manycraft, our scalable multiplayer Minecraft map, and discussed for OneSim, a distributed Second Life architecture. By handling avatars separately, we show interoperability between these virtual worlds. With Kiwano and Kwery we provide the first massively distributed and self-adaptive solutions for virtual worlds suitable to run in the cloud. The results, in terms of number of avatars per CPU, exceed the performance of current state-of-the-art implementations by orders of magnitude, indicating that Kiwano is a cost-effective solution for the industry. The open API for our first implementation is available at http://kiwano.li.
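    As an illustrative aid (not Kiwano's actual code), the sketch below uses SciPy to compute a Delaunay triangulation over avatar positions and derive each avatar's neighbor set, the core idea the thesis relies on to keep per-avatar interaction cost bounded. The positions, counts, and variable names are assumptions for illustration.

        # Illustrative sketch (not Kiwano's implementation): using a Delaunay
        # triangulation to derive each avatar's neighbor set, as the thesis describes.
        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        positions = rng.uniform(0, 1000, size=(10_000, 2))  # assumed 2-D avatar positions

        tri = Delaunay(positions)

        # Build the avatar-to-avatar adjacency from the triangulation's edges.
        neighbors = [set() for _ in range(len(positions))]
        for a, b, c in tri.simplices:
            for u, v in ((a, b), (b, c), (a, c)):
                neighbors[u].add(v)
                neighbors[v].add(u)

        # In a planar Delaunay triangulation the average degree is bounded (below 6),
        # so per-avatar interaction cost stays roughly constant regardless of density.
        avg_degree = sum(len(n) for n in neighbors) / len(neighbors)
        print(f"average number of neighbors per avatar: {avg_degree:.2f}")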

    A Systematic Review of Multimedia Tools for Cybersecurity Awareness and Education

    © Leah Zhang-Kennedy, Sonia Chiasson | ACM 2021. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Computing Surveys, https://doi.org/10.1145/3427920. We conduct a comprehensive review covering academic publications and industry products relating to tools for cybersecurity awareness and education aimed at non-expert end users developed in the past 20 years. Through our search criteria, we identified 119 tools that we cataloged into five broad media categories. We explore current trends, assess their use of relevant instructional design principles, and review empirical evidence of the tools’ effectiveness. From our review, we provide an evaluation checklist and suggest that a more systematic approach to the design and evaluation of cybersecurity educational tools would be beneficial.

    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs, e.g., World of Warcraft), virtual worlds (VWs, e.g., Second Life), and social networks (e.g., Facebook) strongly demand more autonomous security and trust mechanisms, similar to those humans use in real life. This is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is hard to quantify or measure. These societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Moreover, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, users are not provided with a personal trust tool to sustain their decision making while interacting with other users in the virtual or game world. To solve this problem, as well as those mentioned above, we propose a formal representation of these personal trust relationships, based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is “Can I trust him/her or not?”. Clearly this requires the user to have access to a representation of trust about others but, unless an open-source VW/MMOG is used, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, some users may refuse to share information about their friends, acquaintances, or others. Putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust assessment of the current trustee. For the trust assessment method used in this thesis, we use subjective logic operators and graph search algorithms to perform this trust inference about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, together with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
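    As an illustrative aid (not the thesis's code), the sketch below shows two standard subjective logic operators commonly used for this kind of path-based trust inference: discounting (trust transitivity along a recommendation path) and cumulative fusion (consensus of independent paths). The opinion representation and formulas follow the usual subjective logic definitions; the class and variable names are assumptions.

        # Illustrative sketch of two subjective logic operators used for path-based
        # trust inference; not the thesis's actual implementation.
        from dataclasses import dataclass


        @dataclass
        class Opinion:
            """A subjective logic opinion: belief + disbelief + uncertainty == 1."""
            belief: float
            disbelief: float
            uncertainty: float


        def discount(trust_in_advisor: Opinion, advisor_opinion: Opinion) -> Opinion:
            """Trust transitivity: weight an advisor's opinion by our trust in the advisor."""
            b, d, u = trust_in_advisor.belief, trust_in_advisor.disbelief, trust_in_advisor.uncertainty
            return Opinion(
                belief=b * advisor_opinion.belief,
                disbelief=b * advisor_opinion.disbelief,
                uncertainty=d + u + b * advisor_opinion.uncertainty,
            )


        def fuse(o1: Opinion, o2: Opinion) -> Opinion:
            """Cumulative fusion (consensus) of two independent opinions about the same trustee."""
            k = o1.uncertainty + o2.uncertainty - o1.uncertainty * o2.uncertainty
            return Opinion(
                belief=(o1.belief * o2.uncertainty + o2.belief * o1.uncertainty) / k,
                disbelief=(o1.disbelief * o2.uncertainty + o2.disbelief * o1.uncertainty) / k,
                uncertainty=(o1.uncertainty * o2.uncertainty) / k,
            )


        # Example: combine two recommendation paths about the same unknown avatar.
        path1 = discount(Opinion(0.8, 0.1, 0.1), Opinion(0.7, 0.1, 0.2))
        path2 = discount(Opinion(0.6, 0.2, 0.2), Opinion(0.9, 0.0, 0.1))
        print(fuse(path1, path2))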

    Model-driven Personalisation of Human-Computer Interaction across Ubiquitous Computing Applications

    Personalisation is essential to Ubiquitous Computing (Ubicomp), which follows a human-centred paradigm aiming to provide each of its users with adaptive content, services, and interfaces, according to the context of the application scenario. However, providing that appropriate personalised interaction is a true challenge, for reasons such as diverse user interests, heterogeneous environments and devices, dynamic user behaviour, and data capture. This dissertation focuses on a model-driven personalisation solution whose main goal is to facilitate the implementation of personalised human-computer interaction across different Ubicomp scenarios and applications. The research reported here investigates how a generic and interoperable model for personalisation can be used, shared, and processed by different applications, among diverse devices and across different scenarios, and studies how it can enrich human-computer interaction. The research started with the definition of a consistent user model, integrated context into it, and ended with a pervasive model for the definition of personalisations across different applications. Besides the model itself, the other key contributions of the solution are the modelling framework, which encapsulates the model and integrates the user profiling module, and a cloud-based platform that pervasively supports developers in the implementation of personalisation across different applications and scenarios. This platform provides tools to put end users in control of their data and supports developers through web-service operations implemented on top of a personalisation API, which can also be used independently of the platform, for testing purposes for instance. Several Ubicomp application prototypes were designed and used to evaluate, at different phases, both the solution as a whole and each of its components. Some were created specifically to evaluate particular research questions of this work. Others were being developed with a purpose other than personalisation evaluation but ended up as personalised prototypes to better address their initial goals; applying the personalisation model to the design of the latter also serves as a proof of concept on the developer side. On the one hand, developers were asked to implement personalised applications using the proposed solution, or part of it, to assess how it works and how it can help them. The usage of our solution by developers was also important to assess how the model and the platform respond to developers' needs. On the other hand, some prototypes that implement our model-driven personalisation solution were selected for end-user evaluation. User testing was usually conducted at two different stages of development, using (1) a non-personalised version and (2) the final personalised version. This procedure allowed us to assess whether personalisation improved human-computer interaction. The first stage was also important for learning who the end users were and for gathering interaction data to produce personalisation proposals for each prototype. Globally, the results of both the developer and end-user tests were very positive. Finally, this dissertation proposes further work, which is already ongoing, related to the study of a methodology for the implementation and evaluation of personalised applications, supported by the development of three mobile health applications for rehabilitation.
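    Purely as a hypothetical illustration of what a generic, interoperable personalisation model of this kind might look like (the class and field names below are invented, not the dissertation's schema), a minimal user-and-context model with simple adaptation rules could be sketched as follows.

        # Hypothetical sketch of a generic personalisation model: a user profile,
        # a context snapshot, and rules mapping both to adaptations of the UI/content.
        # Names and fields are illustrative assumptions, not the dissertation's schema.
        from dataclasses import dataclass, field
        from typing import Callable, Dict, List


        @dataclass
        class UserProfile:
            user_id: str
            interests: List[str] = field(default_factory=list)
            preferences: Dict[str, str] = field(default_factory=dict)  # e.g. {"font_size": "large"}


        @dataclass
        class Context:
            device: str          # e.g. "phone", "tablet", "kiosk"
            location: str        # e.g. "home", "clinic"
            time_of_day: str     # e.g. "morning"


        @dataclass
        class PersonalisationRule:
            name: str
            applies: Callable[[UserProfile, Context], bool]
            adaptation: Dict[str, str]   # settings applied to the application when the rule fires


        def personalise(profile: UserProfile, context: Context,
                        rules: List[PersonalisationRule]) -> Dict[str, str]:
            """Merge the adaptations of every rule that matches this user and context."""
            result: Dict[str, str] = {}
            for rule in rules:
                if rule.applies(profile, context):
                    result.update(rule.adaptation)
            return result


        # Example: enlarge text for a phone user who prefers large fonts.
        rules = [PersonalisationRule(
            name="large-text-on-phone",
            applies=lambda p, c: c.device == "phone" and p.preferences.get("font_size") == "large",
            adaptation={"ui.font_size": "18pt"},
        )]
        print(personalise(UserProfile("u1", preferences={"font_size": "large"}),
                          Context("phone", "home", "evening"), rules))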

    Customizable teaching on mobile devices in higher education

    Every teacher struggles with students' attention when giving a lecture. It is not easy to meet every single student at his or her knowledge level in a seminar group, and it becomes nearly impossible in huge university lectures with hundreds of students. With the spread of mobile devices among students, Audience Response Systems (ARS) have proved to be an easy and cheap solution to activate the audience and to compare the students' actual knowledge base with the lecturer's estimation. Today, lecturers can choose between a huge variety of different ARSs. But although every lecturer has a very individual teaching style, he or she is not yet able to create or customize an individual audience response teaching scenario within a single system: the available systems are quite similar to each other and mostly support only a handful of different scenarios. This work therefore identified the abstract core elements of ARSs and developed a model for creating individual, customizable scenarios for the students' mobile devices. Teachers become able to build their individual application, define its appearance on the students' phones in a scenario construction kit, and even determine the scenario's behavior logic. Two ARS applications were implemented and used to evaluate the model in real lectures over the last four years. A first ARS was integrated into the university's learning management system ILIAS and provided lecturers with basic question functionality, whereas a second, more advanced stand-alone version enabled lecturers to use personal scenarios in a variety of lecture settings. Hence, scenarios like quizzes, message boards, teacher feedback, and live experiments became possible. The approaches were evaluated from a technical, student, and lecturer perspective in various courses of different areas and sizes. The new model showed great results and potential for customization, but the implementation reached its limits, as it lacked performance scalability for complex scenarios with a large number of students.
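    As a purely hypothetical illustration of what "abstract core elements" of an ARS scenario with configurable appearance and behavior logic could look like, the sketch below composes a scenario from generic elements and a transition table; none of the names come from the system described in the abstract.

        # Hypothetical sketch of a composable ARS scenario model: a scenario is a list
        # of elements (question, message board, ...) plus simple behavior logic that
        # decides what each student's device shows next. Names are illustrative only.
        from dataclasses import dataclass, field
        from typing import Dict, List, Optional


        @dataclass
        class Element:
            """An abstract building block shown on the students' phones."""
            element_id: str
            kind: str                      # e.g. "single_choice", "message_board", "feedback"
            prompt: str
            options: List[str] = field(default_factory=list)


        @dataclass
        class Scenario:
            title: str
            elements: List[Element]
            # Behavior logic: which element follows which one.
            transitions: Dict[str, str] = field(default_factory=dict)

            def next_element(self, current_id: Optional[str]) -> Optional[Element]:
                if current_id is None:                      # start of the scenario
                    return self.elements[0] if self.elements else None
                next_id = self.transitions.get(current_id)
                return next((e for e in self.elements if e.element_id == next_id), None)


        # Example: a one-question quiz followed by an open message board.
        quiz = Scenario(
            title="Intro quiz",
            elements=[
                Element("q1", "single_choice", "Is TCP connection-oriented?", ["yes", "no"]),
                Element("board", "message_board", "Post remaining questions here"),
            ],
            transitions={"q1": "board"},
        )
        step = quiz.next_element(None)
        print(step.prompt)                               # "Is TCP connection-oriented?"
        print(quiz.next_element(step.element_id).kind)   # "message_board"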

    Actas da 10ª Conferência sobre Redes de Computadores (Proceedings of the 10th Conference on Computer Networks)

    Universidade do Minho; CCTC; Centro Algoritmi; Cisco Systems; IEEE Portugal Section