4 research outputs found

    Cheating Prevention in Peer-to-Peer-based Massively Multiuser Virtual Environments

    Massively multiuser virtual environments (MMVEs) have become an increasingly popular Internet application in recent years. To date, they have all been based on client/server technology. Due to the inherent lack of scalability of that approach, realizing MMVEs on top of peer-to-peer technology has received a lot of interest. From the operator's perspective, using peer-to-peer technology raises additional challenges: the lack of trust in peers and their unreliability. The simulation of the virtual environment is governed by rules specified by the operator. These rules state which actions users can take in the virtual environment and how the state of the environment changes based on these actions. Since MMVEs are very often competitive environments, some people will cheat and try to break the rules to gain an unfair advantage over others. With a central server performing the simulation of the virtual environment, the operator can ensure that only allowed actions are performed and that the state of the environment evolves according to the rules. In a peer-to-peer setting, the operator has no control over the peers, so they might not behave as specified by the operator. Furthermore, a central server is inherently more reliable than a peer, which can fail at any time, so data might be lost. This thesis presents the design of a storage system that performs a distributed simulation of a virtual environment. It uses a deterministic event-based simulation to calculate the state of the virtual environment based solely on the actions of its users. Multiple replicated simulations combined with a voting mechanism overcome the influence of malicious peers trying to tamper with the state of the environment, as long as the number of malicious peers stays below a critical threshold. Replication of data also ensures that data is not lost when peers fail. The storage system is based on a peer-to-peer overlay that allows peers to exchange messages to store and retrieve data. It creates a Delaunay graph structure matching the way the data in the virtual environment is distributed among the peers. A self-stabilizing algorithm keeps the structure intact as peers join and leave the network. Additional routing tables allow peers to retrieve stored replicas independently on short, disjoint paths, reducing the influence of malicious peers on the retrieval of data. A redundant filling algorithm prevents malicious peers from tampering with these routing tables to get more messages routed their way.
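    The core defense described in this abstract is replicated deterministic simulation combined with voting over the replicas' results. As a rough Python illustration only, not the thesis's actual protocol, the sketch below shows how a peer might accept the state reported by a quorum of replicas; the function name decide_by_vote and the example values are hypothetical.

        from collections import Counter
        from typing import Iterable, Optional

        def decide_by_vote(state_hashes: Iterable[str], quorum: int) -> Optional[str]:
            """Return the state hash reported by at least `quorum` replicas.

            All honest replicas run the same deterministic event-based
            simulation, so they report identical hashes; a malicious
            minority cannot change the accepted result.
            """
            counts = Counter(state_hashes)
            if not counts:
                return None
            winner, votes = counts.most_common(1)[0]
            return winner if votes >= quorum else None

        # Hypothetical usage: 5 replicas, at most 2 assumed malicious.
        replies = ["a3f1", "a3f1", "a3f1", "ffff", "a3f1"]
        print(decide_by_vote(replies, quorum=3))  # -> a3f1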

    Update propagation for peer-to-peer-based massively multi-user virtual environments

    Over the last decade, Massively Multi-user Virtual Environments (MMVEs) have become an integral part of modern culture and business. Applications for these large-scale virtual environments range from gaming to business and scientific research. Some MMVEs reach a user base in the tens of millions, and the total number of users is estimated to be in the billions. Despite this success, launching an MMVE is still a risky proposition, in large part due to the high cost of setting up and maintaining the necessary server infrastructure. One way of reducing the costs of operating MMVEs is to switch their system architecture from the current client/server-based model to one based on peer-to-peer (P2P) technologies. This has the potential to significantly reduce the infrastructure costs of MMVEs, as users bring their own resources into the P2P system and servers are no longer required, thus decreasing expenses and market entry barriers. This thesis describes a scalable and low-latency update propagation system for P2P-based MMVEs. Update propagation refers to the exchange of information about changes in the virtual environment between users and is one of the key components of MMVEs. The described system thus represents a key step towards operating MMVEs as fully distributed peer-to-peer systems.
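    The abstract defines update propagation but does not spell out the mechanism. As a generic illustration only, not the thesis's design, the Python sketch below filters updates by each peer's area of interest, a common way to keep per-peer bandwidth bounded as the population grows; all names (Peer, Update, recipients, interest_radius) are hypothetical.

        import math
        from dataclasses import dataclass

        @dataclass
        class Peer:
            peer_id: str
            x: float
            y: float
            interest_radius: float  # how far this peer "sees" into the world

        @dataclass
        class Update:
            x: float
            y: float
            payload: bytes

        def recipients(update: Update, peers: list[Peer]) -> list[Peer]:
            """Return only the peers whose area of interest contains the update.

            Delivering an update only to interested peers keeps the traffic
            per peer bounded as the total user population grows.
            """
            return [
                p for p in peers
                if math.hypot(p.x - update.x, p.y - update.y) <= p.interest_radius
            ]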

    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VW; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more autonomic security and trust mechanisms, similar to those humans use in real life. This is a difficult matter because trust in humans and organizations depends on each individual's perception and experience, which are hard to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Moreover, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VW/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, although they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to support his/her decision-making while interacting with other users in the virtual or game world. To solve this problem, as well as those mentioned above, we propose herein a formal representation of these personal trust relationships, which are based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the social network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is “Can I trust him/her or not?”. Clearly, this requires the user to have access to a representation of trust about others but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. By putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust result about the current trustee. As the trust assessment method in this thesis, we use subjective logic operators and graph search algorithms to perform this trust inference about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, together with trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search), on an individual basis rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
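    The trust assessment above relies on subjective logic operators combined with graph search over the trust network. As a generic Python illustration only, not the thesis's exact formulation, the sketch below implements the standard subjective-logic opinion representation with the usual discounting (transitivity) and cumulative fusion operators; the names (Opinion, discount, fuse) and the example values are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Opinion:
            """Subjective-logic opinion: belief + disbelief + uncertainty = 1."""
            b: float        # belief
            d: float        # disbelief
            u: float        # uncertainty
            a: float = 0.5  # base rate (prior)

            def expectation(self) -> float:
                return self.b + self.a * self.u

        def discount(trust_in_advisor: Opinion, advice: Opinion) -> Opinion:
            """Transitivity: adopt the advisor's opinion weighted by our trust in them."""
            t = trust_in_advisor
            return Opinion(
                b=t.b * advice.b,
                d=t.b * advice.d,
                u=t.d + t.u + t.b * advice.u,
                a=advice.a,
            )

        def fuse(x: Opinion, y: Opinion) -> Opinion:
            """Cumulative fusion of two independent opinions about the same trustee."""
            k = x.u + y.u - x.u * y.u
            if k == 0:  # both opinions are dogmatic (u == 0); fall back to a simple average
                return Opinion((x.b + y.b) / 2, (x.d + y.d) / 2, 0.0, x.a)
            return Opinion(
                b=(x.b * y.u + y.b * x.u) / k,
                d=(x.d * y.u + y.d * x.u) / k,
                u=(x.u * y.u) / k,
                a=x.a,
            )

        # Hypothetical usage: Alice trusts Bob somewhat; Bob has observed Carol.
        alice_bob = Opinion(b=0.7, d=0.1, u=0.2)
        bob_carol = Opinion(b=0.6, d=0.2, u=0.2)
        print(discount(alice_bob, bob_carol).expectation())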