
    I-Light Symposium 2005 Proceedings

    I-Light was made possible by a special appropriation by the State of Indiana. The research described at the I-Light Symposium has been supported by numerous grants from several sources. Any opinions, findings and conclusions, or recommendations expressed in the 2005 I-Light Symposium Proceedings are those of the researchers and authors and do not necessarily reflect the views of the granting agencies. Indiana University Office of the Vice President for Research and Information Technology, Purdue University Office of the Vice President for Information Technology and CI

    08231 Abstracts Collection -- Virtual Realities

    From 1st to 6th June 2008, the Dagstuhl Seminar 08231 "Virtual Realities" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Virtual Reality (VR) is a multidisciplinary area of research aimed at interactive human-computer mediated simulations of artificial environments. Typical applications include simulation, training, scientific visualization, and entertainment. An important aspect of VR-based systems is the stimulation of the human senses -- typically sight, sound, and touch -- such that a user feels a sense of presence (or immersion) in the virtual environment. Different applications require different levels of presence, with corresponding levels of realism, sensory immersion, and spatiotemporal interactive fidelity. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. Links to extended abstracts or full papers are provided, if available.

    U-DiVE: Design and evaluation of a distributed photorealistic virtual reality environment

    This dissertation presents a framework that allows low-cost devices to visualize and interact with photorealistic scenes. To accomplish this, the framework uses Unity's high-definition rendering pipeline, which provides a proprietary ray-tracing algorithm, and Unity's streaming package, which allows an application to be streamed from within the editor. The framework allows the composition of a realistic scene using ray tracing, together with a virtual reality camera that applies barrel shaders to correct the lens distortion required by an inexpensive cardboard viewer. It also includes a method to collect the mobile device's spatial orientation through a web browser to control the user's view, delivered via WebRTC. The proposed framework can produce low-latency, realistic, and immersive environments accessed through low-cost HMDs and mobile devices. To evaluate the framework, this work verifies the frame rate achieved by the server and the mobile device, which should exceed 30 FPS for a smooth experience. It also discusses whether the overall quality of experience is acceptable by evaluating the delay of image delivery from the server to the mobile device in the face of user movement. Our tests showed that the framework reaches a mean latency of around 177 ms with household Wi-Fi equipment and a maximum latency variation of 77.9 ms across the 8 scenes tested.
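    The barrel correction mentioned above is the standard radial pre-distortion used for cardboard-style viewers. Below is a minimal Python/NumPy sketch of that general idea; the framework itself implements it as Unity shaders, and the coefficients k1 and k2 here are illustrative placeholders, not values from the dissertation.

```python
import numpy as np

def barrel_predistort(image: np.ndarray, k1: float = 0.22, k2: float = 0.24) -> np.ndarray:
    """Apply a radial (barrel) pre-distortion so that, after the HMD lens adds
    pincushion distortion, the user perceives an undistorted image.

    For each output pixel we compute its source coordinate with the usual
    polynomial model r_src = r * (1 + k1*r^2 + k2*r^4) and sample the input
    with nearest-neighbor lookup. k1/k2 are illustrative, not the U-DiVE values.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized coordinates in [-1, 1], centered on the lens axis.
    nx = (xs - w / 2) / (w / 2)
    ny = (ys - h / 2) / (h / 2)
    r2 = nx * nx + ny * ny
    scale = 1 + k1 * r2 + k2 * r2 * r2
    # Source coordinates back in pixel space, clamped to the image bounds.
    sx = np.clip(nx * scale * (w / 2) + w / 2, 0, w - 1).astype(int)
    sy = np.clip(ny * scale * (h / 2) + h / 2, 0, h - 1).astype(int)
    return image[sy, sx]
```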

    Content creation for seamless augmented experiences with projection mapping

    This dissertation explores systems and methods for creating projection mapping content that seamlessly merges virtual and physical. Most virtual reality and augmented reality technologies rely on screens for display and interaction, where a mobile device or head-mounted display mediates the user's experience. In contrast, projection mapping uses off-the-shelf video projectors to augment the appearance of physical objects, and with projection mapping there is no screen to mediate the experience. The physical world simply becomes the display. Projection mapping can provide users with a seamless augmented experience, where virtual and physical become indistinguishable in an apparently unmediated way. Projection mapping is an old concept dating to Disney's 1969 Haunted Mansion. The core technical foundations were laid in 1999 with UNC's Office of the Future and Shader Lamps projects. Since then, projectors have become brighter and higher resolution while drastically decreasing in price. Yet projection mapping has not crossed the chasm into mainstream use. The largest remaining challenge for projection mapping is that content creation is very difficult and time consuming. Content for projection mapping is still created via a tedious manual process of warping a 2D video file onto a 3D physical object using existing tools (e.g., Adobe Photoshop) which are not made for defining animated interactive effects on 3D object surfaces. With existing tools, content must be created for each specific display object, and cannot be re-used across experiences. For each object the artist wants to animate, the artist must manually create a custom texture for that specific object and warp the texture to the physical object. This limits projection mapped experiences to controlled environments and static scenes. If the artist wants to project onto an object different from the original, they must start from scratch creating custom content for that object. This manual content creation process is time consuming, expensive, and doesn't scale. This thesis explores new methods for creating projection mapping content. Our goal is to make projection mapping easier, cheaper, and more scalable. We explore methods for adaptive projection mapping, which enables artists to create content once, with the content adapting to the color and geometry of the display surface. Content can be created once and re-used on any surface. This thesis is composed of three proof-of-concept prototypes exploring new methods for content creation for projection mapping. IllumiRoom expands video game content beyond the television screen and into the physical world, using a standard video projector to surround a television with projected light. IllumiRoom works in any living room; the projected content dynamically adapts to the color and geometry of the room. RoomAlive expands on this idea, using multiple projectors to cover an entire living room in input/output pixels and dynamically adapting gaming experiences to fill the room. Finally, Projectibles focuses on the physical aspect of projection mapping. Projectibles optimizes the display surface color to increase the contrast and resolution of the overall experience, enabling artists to design the physical object along with the virtual content. The proof-of-concept prototypes presented in this thesis are aimed at the not-too-distant future. The projects in this thesis are not theoretical concepts, but fully working prototype systems that demonstrate the practicality of projection mapping to create immersive experiences. It is the sincere hope of the author that these experiences quickly move out of the lab and into the real world.
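    The adaptation to surface color described above is commonly achieved with per-pixel radiometric compensation. The Python/NumPy sketch below illustrates that general idea under a simple linear observed = ambient + albedo * projected model; it is a generic formulation for illustration, not the actual IllumiRoom, RoomAlive, or Projectibles pipeline.

```python
import numpy as np

def radiometric_compensation(target: np.ndarray, surface_albedo: np.ndarray,
                             ambient: float = 0.05) -> np.ndarray:
    """Choose projector output so that light modulated by the surface
    reflectance approximates the target appearance.

    Assumes a simple linear model: observed = ambient + albedo * projected.
    We solve for `projected` and clip to the projector's [0, 1] range; pixels
    clipped at 1.0 mark surface regions too dark or too saturated to
    compensate fully, which is the contrast limit that motivates also
    designing the physical surface (as in Projectibles).
    """
    target = target.astype(np.float32)
    albedo = np.clip(surface_albedo.astype(np.float32), 1e-3, 1.0)
    projected = (target - ambient) / albedo
    return np.clip(projected, 0.0, 1.0)
```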

    Algorithms for a multi-projector CAVE system

    With regard to facilitating development of VR applications, the main purpose of ALIVE is to reduce the amount of attention that the application developer has to dedicate to the issues described previously. In this project we aim to abstract the user from dealing with: input devices; display number and layout; definition of the virtual cameras; and synchronization issues between cluster nodes. Notably missing from the list are 3D sound rendering and synchronization for non-deterministic algorithms. These problems are out of the scope of this project and will be addressed in the future. Summarizing, the objectives of this project are to provide an abstraction API that facilitates development and deployment of VR applications, and to create a polygon renderer application based on the proposed API.
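    One of the items abstracted away above, the definition of the virtual cameras, is conventionally handled in CAVE systems with a generalized (off-axis) perspective projection computed per wall from the tracked eye position. The Python sketch below shows that standard construction; coordinate conventions and the near-plane value are illustrative and not taken from the ALIVE project.

```python
import numpy as np

def offaxis_frustum(eye, screen_ll, screen_lr, screen_ul, near=0.1):
    """Asymmetric view frustum for one CAVE wall (generalized perspective
    projection). screen_ll/lr/ul are the wall's lower-left, lower-right and
    upper-left corners in tracking coordinates; eye is the tracked head
    position. Returns (left, right, bottom, top) at the near plane, suitable
    for a glFrustum-style projection matrix."""
    eye, ll, lr, ul = map(np.asarray, (eye, screen_ll, screen_lr, screen_ul))
    vr = lr - ll; vr = vr / np.linalg.norm(vr)            # screen right axis
    vu = ul - ll; vu = vu / np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn)   # screen normal toward eye
    d = -np.dot(vn, ll - eye)                             # eye-to-screen distance
    left   = np.dot(vr, ll - eye) * near / d
    right  = np.dot(vr, lr - eye) * near / d
    bottom = np.dot(vu, ll - eye) * near / d
    top    = np.dot(vu, ul - eye) * near / d
    return left, right, bottom, top
```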

    Optimization of Display-Wall Aware Applications on Cluster Based Systems

    Nowadays, information and communication systems that work with high volumes of data require infrastructures that allow an understandable representation of the data from the user's point of view. This thesis analyzes Cluster Display Wall platforms, used to visualize massive amounts of data, and specifically studies the Liquid Galaxy platform, developed by Google. Using the Liquid Galaxy platform, a performance study of representative visualization applications was performed, identifying the most relevant aspects of performance and possible bottlenecks. In greater depth, we study a representative visualization application, Google Earth. The system behavior while running Google Earth was analyzed through different kinds of tests with real users. For this, a new performance metric was defined, based on the visualization ratio, and the usability of the system was assessed through the traditional attributes of effectiveness, efficiency, and satisfaction. Additionally, the system performance was analytically modeled and the accuracy of the model was tested by comparing it with actual results.

    Design of a Scenario-Based Immersive Experience Room

    Klopfenstein, Cuno Lorenz

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier. AR complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, but it is not free of human-factor and other restrictions. AR applications also demand less time and effort to build, because it is not necessary to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in the medical and biological domains and on the human body. The third and final section contains a number of new and useful applications in daily living and learning.