    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning or enhancing museum visits, has received far less attention. The state of the art in serious games technology is identical to that in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality, and artificial intelligence. The main strengths of serious gaming applications, meanwhile, lie in communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we focus on the state of the art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of the literature relevant to the domain, discuss the strengths and weaknesses of the described methods, and point out unsolved problems and challenges. In addition, several case studies illustrating the application of these methods and technologies to cultural heritage are presented.

    Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps

    Visual robot navigation within large-scale, semi-structured environments faces challenges such as computation-intensive path planning algorithms and insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches operate only locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with the uncertainties present in real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing the robot with a map tailored for path planning. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which become the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real-world datasets demonstrate that we achieve performance similar to RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to demonstrate its advantages.
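    The abstract's two-step construction lends itself to a compact illustration. Below is a minimal Python sketch of planning over a topological map whose vertices are convex free-space clusters; the cluster layout, the bounding-sphere adjacency test, and all names are illustrative assumptions on our part, not Topomap's implementation.

        import heapq
        from itertools import combinations

        # Toy stand-in for Topomap-style planning: vertices are convex free-space
        # clusters (reduced here to centroid + bounding-sphere radius), edges connect
        # clusters whose free space overlaps, and global planning is a shortest-path
        # search on that graph. Layout and thresholds are illustrative, not Topomap's.
        clusters = {
            "A": ((0.0, 0.0, 0.0), 2.0),
            "B": ((3.0, 0.0, 0.0), 2.0),
            "C": ((3.0, 3.0, 0.0), 2.0),
            "D": ((8.0, 8.0, 0.0), 2.0),   # isolated cluster, unreachable
        }

        def dist(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

        # Connect clusters whose bounding spheres intersect (free space overlaps).
        graph = {cid: [] for cid in clusters}
        for (i, (ci, ri)), (j, (cj, rj)) in combinations(clusters.items(), 2):
            d = dist(ci, cj)
            if d <= ri + rj:
                graph[i].append((j, d))
                graph[j].append((i, d))

        def shortest_path(start, goal):
            """Dijkstra over the topological graph; returns (cost, cluster id path)."""
            pq, seen = [(0.0, start, [start])], set()
            while pq:
                cost, node, path = heapq.heappop(pq)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, w in graph[node]:
                    if nxt not in seen:
                        heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
            return float("inf"), []

        print(shortest_path("A", "C"))   # -> (6.0, ['A', 'B', 'C'])

    Searching over a handful of cluster vertices instead of a dense grid or a sampling tree is what gives such a representation its speed advantage over planners like RRT*.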

    The Effects of Object Shape, Fidelity, Color, and Luminance on Depth Perception in Handheld Mobile Augmented Reality

    Depth perception of objects can greatly affect a user's experience of an augmented reality (AR) application. Many AR applications require depth matching of real and virtual objects and can be influenced by depth cues. Color and luminance are depth cues that have traditionally been studied with two-dimensional (2D) objects. However, little research investigates how the properties of three-dimensional (3D) virtual objects interact with color and luminance to affect depth perception, despite the substantial use of 3D objects in visual applications. In this paper, we present the results of a paired-comparison experiment that investigates the effects of object shape, fidelity, color, and luminance on depth perception of 3D objects in handheld mobile AR. The results of our study indicate that bright colors are perceived as nearer than dark colors for a high-fidelity, simple 3D object, regardless of hue. Additionally, bright red is perceived as nearer than any other color. These effects were not observed for a low-fidelity version of the simple object or for a more complex 3D object. High-fidelity objects showed more perceptual differences than low-fidelity objects, indicating that fidelity interacts with color and luminance to affect depth perception. These findings reveal how the properties of 3D models influence the effects of color and luminance on depth perception in handheld mobile AR and can help developers select colors for their applications. (9 pages; in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, ISMAR)
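    For readers unfamiliar with the method, a paired-comparison design can be summarized in a few lines of Python: each trial records which of two stimuli was judged nearer, and the per-stimulus "win" fraction yields a perceived-depth ordering. The stimuli and judgments below are invented for illustration and are not the study's data or results.

        from collections import Counter

        # Toy aggregation of paired-comparison judgments ("which object looked nearer?").
        # Each pair is (stimulus judged nearer, stimulus judged farther); values invented.
        judgments = [
            ("bright_red", "dark_blue"), ("bright_red", "bright_green"),
            ("bright_green", "dark_blue"), ("bright_red", "dark_green"),
            ("bright_green", "dark_green"), ("dark_blue", "dark_green"),
        ]

        wins, appearances = Counter(), Counter()
        for nearer, farther in judgments:
            wins[nearer] += 1
            appearances[nearer] += 1
            appearances[farther] += 1

        # Rank stimuli by the fraction of comparisons in which they were judged nearer.
        ranking = sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)
        print([(s, round(wins[s] / appearances[s], 2)) for s in ranking])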

    U-DiVE: Design and evaluation of a distributed photorealistic virtual reality environment

    This dissertation presents a framework that allows low-cost devices to visualize and interact with photorealistic scenes. To accomplish this, the framework uses Unity's High Definition Render Pipeline, which provides a proprietary ray-tracing algorithm, and Unity's streaming package, which allows an application to be streamed from within its editor. The framework allows the composition of a realistic scene using ray tracing, together with a virtual reality camera using barrel shaders to correct the lens distortion required by an inexpensive cardboard viewer. It also includes a method to collect the mobile device's spatial orientation through a web browser to control the user's view, delivered via WebRTC. The proposed framework can produce low-latency, realistic and immersive environments accessible through low-cost HMDs and mobile devices. To evaluate the framework, this work verifies the frame rate achieved by the server and the mobile device, which should exceed 30 FPS for a smooth experience. In addition, it discusses whether the overall quality of experience is acceptable by evaluating the delay of image delivery from the server to the mobile device in the face of user movement. Our tests showed that the framework reaches a mean latency of around 177 ms with household Wi-Fi equipment and a maximum latency variation of 77.9 ms across the 8 scenes tested.
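    As a rough illustration of what the "barrel shaders" mentioned above do, the sketch below applies the usual radial-polynomial barrel pre-warp to normalized screen coordinates so that a cardboard viewer's pincushion lens distortion cancels out. The coefficients k1 and k2 are placeholder values, not the ones used in U-DiVE, and the real implementation runs on the GPU rather than in NumPy.

        import numpy as np

        # Minimal sketch of a Cardboard-style barrel pre-warp. Coefficients are
        # illustrative placeholders, not values taken from the U-DiVE framework.
        def barrel_warp(uv, k1=0.22, k2=0.24):
            """Warp normalized screen coordinates uv in [-1, 1]^2 radially outward."""
            uv = np.asarray(uv, dtype=float)
            r2 = np.sum(uv ** 2, axis=-1, keepdims=True)   # squared radial distance
            factor = 1.0 + k1 * r2 + k2 * r2 ** 2          # radial distortion polynomial
            return uv * factor

        # A pixel near the edge is pushed further outward than one near the center.
        print(barrel_warp([[0.1, 0.0], [0.9, 0.0]]))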

    Differentiable Radio Frequency Ray Tracing for Millimeter-Wave Sensing

    Millimeter-wave (mmWave) sensing is an emerging technology with applications in 3D object characterization and environment mapping. However, realizing precise 3D reconstruction from sparse mmWave signals remains challenging. Existing methods rely on data-driven learning and are constrained by dataset availability and limited generalization. We propose DiffSBR, a differentiable framework for mmWave-based 3D reconstruction. DiffSBR incorporates a differentiable ray-tracing engine to simulate radar point clouds from virtual 3D models. A gradient-based optimizer refines the model parameters to minimize the discrepancy between simulated and real point clouds. Experiments using various radar hardware validate DiffSBR's capability for fine-grained 3D reconstruction, even for novel objects not previously seen by the radar. By integrating physics-based simulation with gradient optimization, DiffSBR transcends the limitations of data-driven approaches and pioneers a new paradigm for mmWave sensing.
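    The core loop (simulate with differentiable code, then fit parameters by gradient descent) can be shown with a toy analogue. The sketch below replaces the differentiable RF ray tracer with a trivial point "renderer" for a sphere of unknown center and fits that center to an observed point cloud via a Chamfer loss; it illustrates the optimization pattern only and is not DiffSBR's implementation.

        import torch

        # Toy analogue of the DiffSBR pattern: a differentiable "simulator" produces
        # points from shape parameters, and a gradient-based optimizer adjusts those
        # parameters to minimize the mismatch with observed points. The real system
        # uses a differentiable RF ray tracer and full 3D models; here the shape is
        # just a unit sphere with an unknown center.
        torch.manual_seed(0)
        directions = torch.nn.functional.normalize(torch.randn(256, 3), dim=1)
        true_center = torch.tensor([0.5, -0.3, 1.2])
        observed = true_center + directions            # "measured" point cloud

        center = torch.zeros(3, requires_grad=True)    # unknown shape parameter
        opt = torch.optim.Adam([center], lr=0.05)

        def chamfer(a, b):
            d = torch.cdist(a, b)                      # pairwise point distances
            return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

        for step in range(200):
            simulated = center + directions            # differentiable "render"
            loss = chamfer(simulated, observed)
            opt.zero_grad()
            loss.backward()
            opt.step()

        print(center.detach())    # should approach tensor([0.5, -0.3, 1.2])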

    Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays

    An augmented reality head-mounted display with full environmental awareness could present data in new ways and provide a new type of experience, allowing seamless transitions between real life and virtual content. However, creating a lightweight, optical see-through display that provides both focus support and a wide field of view remains a challenge. This dissertation describes a new dynamic optical element, the deformable beamsplitter, and its applications for wide-field-of-view, varifocal, augmented reality displays. Deformable beamsplitters combine a traditional deformable membrane mirror and a beamsplitter into a single element, allowing reflected light to be manipulated by the deforming membrane mirror while transmitted light passes through unchanged. This enables a single-element optical design with correct focus while maintaining a wide field of view, as demonstrated by the description and analysis of two prototype hardware display systems incorporating deformable beamsplitters. As a user changes the depth of their gaze when looking through these displays, the focus of virtual content can quickly be altered to match the real world simply by modulating air pressure in a chamber behind the deformable beamsplitter, thus ameliorating the vergence–accommodation conflict. Two user studies verify the prototypes' capabilities and show the display's potential to enhance human performance in quickly perceiving visual stimuli. This work shows that near-eye displays built with deformable beamsplitters allow simple optical designs that enable a wide field of view and comfortable viewing experiences, with the potential to enhance user perception.
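    The varifocal behaviour described above can be made concrete with the standard mirror equation, 1/f = 1/d_o + 1/d_i (with a negative image distance for a virtual image). The sketch below computes the membrane focal length needed to place virtual content at a given gaze depth; the display-to-membrane distance is an illustrative assumption, and the mapping from focal length to chamber air pressure is device-specific and not modeled.

        # Back-of-the-envelope varifocal calculation using the mirror equation
        # 1/f = 1/d_o + 1/d_i, with d_i negative for a virtual image. All numbers
        # are illustrative, not measurements from the dissertation's prototypes;
        # the focal-length-to-pressure mapping is device-specific and omitted.
        def required_focal_length(d_object_m, d_gaze_m):
            """Focal length (m) that images a display at d_object_m out to d_gaze_m."""
            d_image = -d_gaze_m                        # virtual image: negative distance
            return 1.0 / (1.0 / d_object_m + 1.0 / d_image)

        for gaze in (0.25, 0.5, 1.0, 4.0):             # gaze depths in meters
            f = required_focal_length(0.05, gaze)      # display 5 cm from the membrane
            print(f"gaze {gaze:4.2f} m -> focal length {f * 100:5.2f} cm")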

    08231 Abstracts Collection -- Virtual Realities

    From 1st to 6th June 2008, the Dagstuhl Seminar 08231 ``Virtual Realities'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Virtual Reality (VR) is a multidisciplinary area of research aimed at interactive, human-computer-mediated simulations of artificial environments. Typical applications include simulation, training, scientific visualization, and entertainment. An important aspect of VR-based systems is the stimulation of the human senses -- typically sight, sound, and touch -- such that a user feels a sense of presence (or immersion) in the virtual environment. Different applications require different levels of presence, with corresponding levels of realism, sensory immersion, and spatiotemporal interactive fidelity. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. Links to extended abstracts or full papers are provided where available.