
    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more autonomous security and trust mechanisms, similar to those humans use in real life. As is well known, this is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Besides, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision making while interacting with other users in the virtual or game world. To solve this problem, as well as those mentioned above, we propose herein a formal representation of these personal trust relationships, based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is “Can I trust him/her or not?”. Clearly, this requires the user to have access to a representation of trust about others, but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. Putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust result about the current trustee. As the trust assessment method of this thesis, we use subjective logic for representing trust, together with subjective logic operators and graph search algorithms, to carry out such trust inferences about a trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, together with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
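
    The path-based assessment described in the abstract can be made concrete with subjective logic. Below is a minimal Python sketch, assuming JĂžsang-style opinions (belief, disbelief, uncertainty, base rate) and the standard discounting (transitivity) and cumulative fusion (consensus) operators; the two advisor paths and all numeric values are invented for illustration, and the thesis's graph search over the societal network is reduced here to two hard-coded paths.

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        """Subjective logic opinion: belief + disbelief + uncertainty = 1."""
        b: float        # belief
        d: float        # disbelief
        u: float        # uncertainty
        a: float = 0.5  # base rate (prior)

        def expectation(self):
            return self.b + self.a * self.u   # probability expectation E = b + a*u

    def discount(ab, bc):
        """Transitivity (discounting): A's derived opinion about C via advisor B."""
        return Opinion(b=ab.b * bc.b,
                       d=ab.b * bc.d,
                       u=ab.d + ab.u + ab.b * bc.u,
                       a=bc.a)

    def fuse(x, y):
        """Cumulative fusion (consensus) of two independent opinions."""
        k = x.u + y.u - x.u * y.u
        return Opinion(b=(x.b * y.u + y.b * x.u) / k,
                       d=(x.d * y.u + y.d * x.u) / k,
                       u=(x.u * y.u) / k,
                       a=x.a)

    # Two independent trust paths from avatar A to an unknown trustee T:
    #   A -> B -> T and A -> C -> T (all numbers are illustrative).
    a_b = Opinion(0.9, 0.0, 0.1)   # A's trust in advisor B
    b_t = Opinion(0.7, 0.1, 0.2)   # B's opinion about T
    a_c = Opinion(0.6, 0.2, 0.2)   # A's trust in advisor C
    c_t = Opinion(0.8, 0.0, 0.2)   # C's opinion about T

    verdict = fuse(discount(a_b, b_t), discount(a_c, c_t))
    print(f"E(trust in T) = {verdict.expectation():.3f}")   # ~0.82

    In a full system, a bounded graph search (e.g., a depth-limited breadth-first search) would enumerate such independent advisor paths before fusing them; here the enumeration step is omitted.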

    Performance-Engineered Network Overlays for High Quality Interaction in Virtual Worlds

    Overlay hosting systems such as PlanetLab, and cloud computing environments such as Amazon’s EC2, provide shared infrastructures within which new applications can be developed and deployed on a global scale. This paper explores how systems of this sort can be used to enable advanced network services and sophisticated applications that use those services to enhance performance and provide a high quality user experience. Specifically, we investigate how advanced overlay hosting environments can be used to provide network services that enable scalable virtual world applications and other large-scale distributed applications requiring consistent, real-time performance. We propose a novel network architecture called Forest built around per-session tree-structured communication channels that we call comtrees. Comtrees are provisioned and support both unicast and multicast packet delivery. The multicast mechanism is designed to be highly scalable and lightweight enough to support the rapid changes to multicast subscriptions needed for efficient support of state updates within virtual worlds. We evaluate performance using a combination of analysis and experimental measurement of a partial system prototype that supports fully functional distributed game sessions. Our results provide the data needed to enable accurate projections of performance for a variety of session and system configurations.
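
    Forest's wire protocol and router internals are not given in the abstract, but the core idea of a comtree, a per-session tree whose multicast subscriptions are cheap to change and whose forwarding follows only tree edges leading to current subscribers, can be sketched. All class and method names below are illustrative assumptions, not Forest's actual API.

    from collections import defaultdict

    class Comtree:
        """Per-session tree-structured channel with per-group multicast subscriptions."""

        def __init__(self, parent):
            self.parent = parent          # node -> parent node; the root maps to None
            self.subs = defaultdict(set)  # multicast group id -> set of subscribed nodes

        def subscribe(self, node, group):
            self.subs[group].add(node)    # a cheap set update, so subscriptions can churn fast

        def unsubscribe(self, node, group):
            self.subs[group].discard(node)

        def _path_to_root(self, node):
            path = []
            while node is not None:
                path.append(node)
                node = self.parent[node]
            return path

        def multicast_edges(self, src, group):
            """Tree edges a state update from src must traverse to reach every
            current subscriber (the union of the unique tree paths src -> dst)."""
            src_path = self._path_to_root(src)
            depth = {n: i for i, n in enumerate(src_path)}
            edges = set()
            for dst in self.subs[group] - {src}:
                up, n = [], dst
                while n not in depth:     # climb from dst until meeting src's root path
                    up.append(n)
                    n = self.parent[n]
                path = up + list(reversed(src_path[:depth[n] + 1]))  # dst .. junction .. src
                edges.update(frozenset(e) for e in zip(path, path[1:]))
            return edges

    # A toy session: root router r, access routers a1/a2, game clients g1/g2.
    tree = Comtree({"r": None, "a1": "r", "a2": "r", "g1": "a1", "g2": "a2"})
    tree.subscribe("g2", group=7)
    print(tree.multicast_edges("g1", group=7))   # four edges: g1-a1, a1-r, r-a2, a2-g2

    The point of the sketch is that subscribing is a constant-time set update, while a state update touches only the tree edges on paths to current subscribers, which is what makes rapid subscription churn affordable.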

    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999] and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.
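
    The classification space can be made concrete as a small data model, with the three axes as fields and the two compared approaches as example entries. The attribute values below are our reading of the abstract, not a table reproduced from the paper.

    from dataclasses import dataclass

    @dataclass
    class VEEvaluationMethod:
        """One point in the three-axis classification space described above."""
        name: str
        involves_users: bool   # are representative users involved?
        context: str           # "generic" vs. "application-specific" evaluation context
        results: str           # "quantitative", "qualitative", or "both"

    # Illustrative coding of the two compared approaches.
    testbed = VEEvaluationMethod("testbed evaluation", True, "generic", "quantitative")
    sequential = VEEvaluationMethod("sequential evaluation", True, "application-specific", "both")

    def differing_axes(a, b):
        """Report the classification axes on which two methods differ."""
        return [axis for axis in ("involves_users", "context", "results")
                if getattr(a, axis) != getattr(b, axis)]

    print(differing_axes(testbed, sequential))   # ['context', 'results']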

    New Concepts for Virtual Testbeds: Data Mining Algorithms for Blackbox Optimization based on Wait-Free Concurrency and Generative Simulation

    Virtual testbeds have emerged as a key technology for improving and streamlining complex engineering processes by delivering long-term simulation and assessment of complex designs in virtual environments. In contrast to existing simulation technology, virtual testbeds focus on long-term, physically-based simulation of the overall design in its (virtual) environment, instead of focusing only on isolated, specific parts for short periods of time. This technology has the major advantage that costly testing, prototyping, and assessment in real-life environments are replaced by cost-efficient simulation in virtual worlds, allowing comprehensive and long-term analysis of designs. For this purpose, engineering models and their requirements are abstracted into software simulation models and objectives, which are executed in virtual assessments. Simulation models are used to predict complex, real systems, which can further be subject to random influences. These predictions are used to examine the effects of individual configuration alternatives without actually realizing them and causing possible negative effects on the real system. Virtual testbeds further offer engineers the opportunity to immersively and naturally interact with their simulation model in these virtual assessments. This enables engineers to gain a greater and more comprehensive understanding of possible design flaws early in the design process, because they can directly assess their design in the virtual environment, based on the simulation objectives. The fact that virtual testbeds enable these realtime interactive virtual assessments makes their underlying software infrastructure very complex. One major challenge is to minimize the development time of virtual testbeds in order to efficiently integrate them into the overall engineering process. Usually, this can be achieved by minimizing the underlying concurrency of the testbed and by simplifying its software architecture. However, this may degrade the highly concurrent and asynchronous behavior that is usually required for immersive and natural virtual interaction. A major goal of virtual testbeds in the engineering process is to find a set of optimal configurations of the simulation model which maximizes all simulation objectives for the specified virtual assessments. Once such a set has been computed, engineers can interactively explore it in the virtual environment. The main challenge is that sophisticated simulation models and their configuration are subject to a multiobjective optimization problem, which usually cannot be solved manually by engineers or simulation analysts in feasible time. This is further aggravated because the relationships between simulation model configurations and simulation objectives are mostly unknown, leading to what is known as blackbox simulations. In this thesis, I propose novel data mining algorithms for computing Pareto optimal simulation model configurations, based on an approximation of the feasible design space, for deterministic and stochastic blackbox simulations in virtual testbeds. These novel data mining algorithms lead to an automatic knowledge discovery process that does not need any supervision of its data analysis and assessment for multiobjective optimization problems of simulation model configurations. This achieves the previously stated goal of computing optimal configurations of simulation models for long-term simulations and assessments.
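
    The optimization target can be stated compactly: a simulation model configuration is Pareto optimal among the sampled ones if no other sample is at least as good in every objective and strictly better in one. Below is a minimal deterministic-case sketch in Python, with an invented two-objective blackbox standing in for a testbed run; the thesis's algorithms additionally approximate the feasible design space and handle stochastic simulations, which this sketch does not.

    import random

    def dominates(y1, y2):
        """y1 Pareto-dominates y2 (maximizing): at least as good everywhere,
        strictly better somewhere."""
        return all(a >= b for a, b in zip(y1, y2)) and any(a > b for a, b in zip(y1, y2))

    def pareto_front(samples):
        """Filter (configuration, objectives) pairs down to the non-dominated set."""
        return [(x, y) for x, y in samples
                if not any(dominates(y2, y) for _, y2 in samples)]

    def blackbox_simulation(cfg):
        """Stand-in for one virtual-testbed run: maps a configuration to two
        conflicting objectives (purely illustrative, not a real simulation)."""
        speed, payload = cfg
        return (speed * payload, 10.0 - speed ** 2)

    random.seed(1)
    samples = [((s, p), blackbox_simulation((s, p)))
               for s, p in ((random.uniform(0, 3), random.uniform(0, 1))
                            for _ in range(200))]

    front = pareto_front(samples)
    print(f"{len(front)} non-dominated configurations out of {len(samples)}")

    Engineers would then explore only the non-dominated configurations interactively in the virtual environment, rather than the full sampled design space.
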
    Furthermore, I propose two complementary solutions for efficiently integrating massively-parallel virtual testbeds into engineering processes. First, I propose a novel multiversion wait-free data and concurrency management based on hash maps. These wait-free hash maps do not require any standard locking mechanisms and enable low-latency data generation, management, and distribution for massively-parallel applications. Second, I propose novel concepts for efficiently generating the code of the above wait-free data and concurrency management for arbitrary massively-parallel simulation applications of virtual testbeds. My generative simulation concept combines a state-of-the-art realtime interactive system design pattern for high maintainability with template code generation based on domain-specific modelling. This concept is able to generate massively-parallel simulations and, at the same time, model-check their internal dataflow for possible interface errors. This generative concept overcomes the challenge of efficiently integrating virtual testbeds into engineering processes. Together, these contributions enable for the first time a powerful collaboration between simulation, optimization, visualization, and data analysis for novel virtual testbed applications, while meeting the challenges and goals presented above.
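
    The generative idea lends itself to a compact illustration: a toy generator that model-checks a domain-specific dataflow model for interface errors and, if the model is clean, emits a simulation skeleton from a template. The model format, port types, and emitted code are all invented for illustration and are not the thesis's toolchain.

    # Toy domain-specific model: simulation modules with typed ports,
    # plus dataflow connections between them.
    MODEL = {
        "modules": {
            "Physics":  {"in": {},                      "out": {"pose": "Vec3"}},
            "Sensor":   {"in": {"pose": "Vec3"},        "out": {"scan": "PointCloud"}},
            "Renderer": {"in": {"scan": "PointCloud"},  "out": {}},
        },
        "flows": [("Physics.pose", "Sensor.pose"), ("Sensor.scan", "Renderer.scan")],
    }

    def check_dataflow(model):
        """Model-check every connection: source and sink ports must exist and
        carry the same type (the 'interface errors' mentioned above)."""
        errors = []
        for src, dst in model["flows"]:
            (sm, sp), (dm, dp) = src.split("."), dst.split(".")
            t_out = model["modules"][sm].get("out", {}).get(sp)
            t_in = model["modules"][dm].get("in", {}).get(dp)
            if t_out is None or t_in is None:
                errors.append(f"unknown port in {src} -> {dst}")
            elif t_out != t_in:
                errors.append(f"type mismatch {src}:{t_out} -> {dst}:{t_in}")
        return errors

    def generate(model):
        """Emit a skeleton simulation loop from the model (template-based)."""
        lines = ["def simulation_step(state):"]
        for name in model["modules"]:
            lines.append(f"    state = step_{name.lower()}(state)  # generated call")
        lines.append("    return state")
        return "\n".join(lines)

    # Changing Renderer's 'scan' type to "Vec3" would make check_dataflow
    # report a type mismatch instead of generating code.
    if errs := check_dataflow(MODEL):
        print("model check failed:", *errs, sep="\n  ")
    else:
        print(generate(MODEL))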

    Analysis of Visualisation and Interaction Tools

    This document provides an in-depth analysis of visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the base of the web-based services and tools to support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, whatever the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document is also based on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and another on visualization technologies, are also provided in this document.

    Building the Hyperconnected Society: Internet of Things Research and Innovation Value Chains, Ecosystems and Markets

    This book aims to provide a broad overview of various topics of the Internet of Things (IoT), ranging from research, innovation, and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability, and industrial applications. All this is happening in a global context, building towards intelligent, interconnected decision making as an essential driver for new growth and co-competition across a wider set of markets. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC (Internet of Things European Research Cluster), from research to technological innovation, validation, and deployment. The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research and Innovation Agenda, and presents global views and state-of-the-art results on the challenges facing the research, innovation, development, and deployment of IoT in the coming years. The concept of IoT could disrupt consumer and industrial product markets, generating new revenues and serving as a growth driver for semiconductor, networking equipment, and service provider end-markets globally. This will create new application and product end-markets, change the value chains of companies that create IoT technology and deploy it in various end sectors, and impact the business models of semiconductor, software, device, communication, and service provider stakeholders. The proliferation of intelligent devices at the edge of the network, with the introduction of embedded software and app-driven hardware into manufactured devices, and the ability, through embedded software/hardware developments, to monetize those device functions and features by offering novel solutions, could generate completely new types of revenue streams. Intelligent and IoT devices leverage software, software licensing, entitlement management, and Internet connectivity in ways that address many of the societal challenges that we will face in the next decade.

    How to accommodate grief in your life

    This artists’ text examines the relationship between photographic images and Massively Multiplayer Online (MMO) environments. We note that such scripted image worlds necessitate a fundamental reconsideration of the capacities of the image, its formation, reproduction, storage, and circulation. As an archaeologist would document an excavation, extending conventional methods through 3D visualization technology to work in new ways with the archaeological record, we chose to document a world built and razed digitally by a now dormant group of anonymous gamers called the Yung Cum Bois (YCBs). We turn to some definitions of griefer as a subcultural phenomenon within online culture to contextualize our involvement further, thinking through the forms of image-gathering that grief play has generated, such as scripted object attacks where image-objects spawn and self-replicate, continually spurting out copies of themselves, lagging the region, slowing down frame rates, consuming land resources. Here we witness images blockading network logistics. This was active fieldwork. We got involved. We applied visualization technology learnt from archaeological computing research to the avatars, temporary structures, and abandoned ruins of an online world, Second Life (SL). We patched together a kind of virtual photogrammetry, enabling the monumentalization of avatars, objects, and scenarios, recompiling these into new configurations and uploading them freely to be reused, detourned, and weaponized by our virtual friends. We situate this endeavour within a cobbled history of imaging technology, the networked self and its pathologies, riffling through our own image dump.

    Internet Predictions

    More than a dozen leading experts give their opinions on where the Internet is headed and where it will be in the next decade in terms of technology, policy, and applications. They cover topics ranging from the Internet of Things to climate change to the digital storage of the future. A summary of the articles is available in the Web extras section.


    Constraint-based graphical layout of multimodal presentations

    When developing advanced multimodal interfaces that combine the characteristics of different modalities such as natural language, graphics, animation, virtual realities, etc., the question of automatically designing the graphical layout of such presentations in an appropriate format becomes increasingly important. Thus, to communicate information to the user in an expressive and effective way, a knowledge-based layout component has to be integrated into the architecture of an intelligent presentation system. In order to achieve coherent output, it must be able to reflect certain semantic and pragmatic relations specified by a presentation planner in arranging the visual appearance of a mixture of textual and graphic fragments delivered by mode-specific generators. In this paper we illustrate, using the example of LayLab, the layout manager of the multimodal presentation system WIP, how the complex positioning problem for multimodal information can be treated as a constraint satisfaction problem. The design of an aesthetically pleasing layout is characterized as a combination of a general search problem in a finite discrete search space and an optimization problem. Therefore, we have integrated two dedicated constraint solvers, an incremental hierarchy solver and a finite domain solver, in a layered constraint solver model, CLAY, which is triggered from a common metalevel by rules and defaults. The underlying constraint language is able to encode graphical design knowledge expressed by semantic/pragmatic, geometrical/topological, and temporal relations. Furthermore, this mechanism allows one to prioritize the constraints as well as to handle constraint solving over finite domains. As graphical constraints frequently have only local effects, they are incrementally generated by the system on the fly. Finally, we illustrate the functionality of LayLab with some snapshots of an example run.
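
    A flavor of the finite-domain side of such a solver: a tiny enumeration that places text and graphics fragments on a coarse grid, with hard constraints (non-overlap, a caption placed directly under its image) and a lower-priority soft preference handled as an optimization objective. This is a minimal illustration of layout as constraint satisfaction, not WIP/LayLab's CLAY solver; the grid, fragments, and constraints are invented for illustration.

    from itertools import product

    # Fragments to place on a coarse 3x3 grid: (name, width, height) in cells.
    FRAGMENTS = [("image", 2, 2), ("caption", 2, 1), ("text", 1, 2)]
    GRID_W, GRID_H = 3, 3

    def cells(x, y, w, h):
        return {(x + i, y + j) for i in range(w) for j in range(h)}

    def solve():
        """Enumerate the finite domains of positions, keep layouts satisfying the
        hard constraints, and pick the best one under a lower-priority preference."""
        domains = [[(x, y) for x in range(GRID_W - w + 1) for y in range(GRID_H - h + 1)]
                   for _, w, h in FRAGMENTS]
        best, best_score = None, None
        for positions in product(*domains):
            occupied = [cells(x, y, w, h)
                        for (_, w, h), (x, y) in zip(FRAGMENTS, positions)]
            # Hard constraint: fragments must not overlap.
            if any(a & b for i, a in enumerate(occupied) for b in occupied[i + 1:]):
                continue
            (ix, iy), (cx, cy), (_, ty) = positions
            # Hard semantic constraint: the caption sits directly below its image
            # (the image is 2 cells tall, hence iy + 2).
            if not (cx == ix and cy == iy + 2):
                continue
            # Soft, lower-priority preference: running text close to the top.
            score = -ty
            if best is None or score > best_score:
                best, best_score = positions, score
        return best

    layout = solve()
    print({name: pos for (name, _, _), pos in zip(FRAGMENTS, layout)})

    A real layout manager would add incremental constraint generation and a constraint hierarchy over many more relations, but the search-then-optimize structure is the same.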