    Serious Games in Cultural Heritage

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    Deep Neural Network and Data Augmentation Methodology for off-axis iris segmentation in wearable headsets

    A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and train a low-complexity deep neural network. Although of low complexity, the resulting network achieves a high level of accuracy in iris region segmentation for challenging off-axis eye patches. Interestingly, this network is also shown to achieve high levels of performance for regular, frontal segmentation of iris regions, comparing favorably with state-of-the-art techniques of significantly higher complexity. Due to its lower complexity, this network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.
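    As a rough illustration of this kind of augmentation pipeline, the sketch below warps a frontal eye patch and its iris mask with a random perspective transform to mimic an off-axis view. The function name, transform family and parameters are illustrative assumptions, not the paper's exact methodology.

        import cv2
        import numpy as np

        def augment_off_axis(eye_patch, iris_mask, max_shift=0.25, rng=None):
            """Simulate an off-axis view of a frontal eye patch with a random
            perspective warp; the same warp is applied to the iris mask so the
            segmentation label stays aligned with the image (illustrative only)."""
            rng = rng if rng is not None else np.random.default_rng()
            h, w = eye_patch.shape[:2]
            src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
            # Randomly displace each corner by up to max_shift of the patch size.
            jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * np.array([w, h])
            dst = (src + jitter).astype(np.float32)
            homography = cv2.getPerspectiveTransform(src, dst)
            warped_patch = cv2.warpPerspective(eye_patch, homography, (w, h), flags=cv2.INTER_LINEAR)
            warped_mask = cv2.warpPerspective(iris_mask, homography, (w, h), flags=cv2.INTER_NEAREST)
            return warped_patch, warped_mask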

    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the process is mostly accomplished offline. In our approach, the illumination information extracted from the physical scene is used to interactively render the virtual objects, producing more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that run concurrently in real time. The first is the estimation of direct illumination (incident light) from the physical scene using computer vision techniques applied to a 360° live camera feed connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto the virtual objects) using region capture of a 2D texture from the AR camera view. The third is rendering the virtual objects with appropriate lighting and shadowing characteristics using shader programs over multiple passes. Finally, we tested our work under multiple lighting conditions, evaluating accuracy by checking that the shadows cast by the virtual objects remain consistent with the shadows cast by the real objects, at a reduced performance cost.
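    As a rough sketch of the first step (direct-illumination estimation from a 360° feed), the code below picks the brightest region of an equirectangular frame and converts its pixel coordinates into a world-space light direction. The function and its assumptions (a single dominant light source, equirectangular input) are illustrative and not taken from the paper.

        import numpy as np

        def dominant_light_direction(equirect_frame):
            """Estimate the dominant incident-light direction from an equirectangular
            360° frame (H x W x 3 array of pixel values); an illustrative stand-in
            for a direct-illumination estimation step."""
            luminance = equirect_frame.astype(np.float32).mean(axis=2)
            y, x = np.unravel_index(np.argmax(luminance), luminance.shape)
            h, w = luminance.shape
            azimuth = (x / w) * 2.0 * np.pi - np.pi   # longitude of the brightest pixel
            elevation = (0.5 - y / h) * np.pi         # latitude, +pi/2 at the top row
            direction = np.array([np.cos(elevation) * np.sin(azimuth),
                                  np.sin(elevation),
                                  np.cos(elevation) * np.cos(azimuth)])
            return direction / np.linalg.norm(direction)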

    Assessing mobile mixed reality affordances as a comparative visualization pedagogy for design communication

    Spatial visualisation skills and interpretation are critical in the design professions but are difficult for novice designers. There is growing evidence that mixed reality visualisation improves learner outcomes, but these studies often focus on a single media representation rather than on a comparison between media and the underpinning learning outcomes. Results from recent studies highlight the use of comparative visualisation pedagogy in design through learner reflective blogs and pilot studies with experts, but these studies are limited by expense and by designs already familiar to the learner. With increasing interest in mobile pedagogy, more assessment is required to understand learner interpretation of comparative mobile mixed reality pedagogy. The aim of this study is to address this gap by evaluating insights from a first-year architectural design classroom, studying the impact and use of a range of mobile comparative visualisation technologies. Using a design-based research methodology and a usability framework for assessing comparative visualisation, this paper examines the complexities of spatial design in the built environment. Outcomes from the study highlight the strengths of the approach but also the improvements required in the delivery of the visualisations to address the visibility issues and visual errors caused by limited mobile processing power.

    Real-Time Estimation of Illumination Direction for Augmented Reality with Low-Cost Sensors

    In recent years, Augmented Reality has become a very popular topic, both as a research and a commercial field. This trend originated with the use of mobile devices as both computational core and display. The appearance of virtual objects and their interaction with the real world is a key element in the success of Augmented Reality software. A common issue in this type of software is the visual inconsistency between virtual and real objects due to incorrect illumination. Although illumination is a common research topic in Computer Graphics, few studies have addressed real-time estimation of illumination direction. In this work we present a low-cost approach to detecting the direction of the environment illumination, allowing virtual objects to be lit according to the real ambient light and improving the integration of the scene. Our solution is open source, based on Arduino hardware, and the presented system was developed on Android. XIV Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
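    A minimal sketch of how such a low-cost estimate could work, assuming a few photoresistors mounted at known orientations on the Arduino rig (the paper's actual sensor layout is not given here): the readings can be combined into a weighted direction toward the dominant light source.

        import numpy as np

        # Hypothetical layout: the facing direction of each photoresistor on the rig
        # (the actual sensor arrangement used in the paper may differ).
        SENSOR_DIRECTIONS = np.array([
            [ 1.0, 0.0,  0.0],   # right
            [-1.0, 0.0,  0.0],   # left
            [ 0.0, 1.0,  0.0],   # up
            [ 0.0, 0.0,  1.0],   # front
            [ 0.0, 0.0, -1.0],   # back
        ])

        def estimate_light_direction(readings):
            """Combine raw photoresistor readings (one per sensor, larger = brighter)
            into a single unit vector pointing toward the dominant light source."""
            readings = np.asarray(readings, dtype=float)
            weighted_sum = (readings[:, None] * SENSOR_DIRECTIONS).sum(axis=0)
            norm = np.linalg.norm(weighted_sum)
            return weighted_sum / norm if norm > 0 else np.array([0.0, 1.0, 0.0])

        # Example with readings that could be streamed from the Arduino over serial:
        print(estimate_light_direction([812, 130, 540, 300, 220]))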

    LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    Shadows play an important role in creating a 3D impression of a scene and in achieving realistic Augmented Reality (AR). Casting virtual shadows on both real and virtual objects is an active research topic in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to cast virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, which is merged into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is demonstrated, and the findings are assessed using qualitative and quantitative methods, with comparisons to previous AR phantom-generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
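    As an illustration of the phantom idea, the sketch below uses a per-pixel depth comparison between a Kinect depth map and a rendered virtual object so that real surfaces correctly occlude virtual ones. The compositing function is a simplified assumption, not the paper's implementation, which also handles shadow casting and camera tracking.

        import numpy as np

        def composite_with_phantom(rgb_frame, real_depth, virtual_rgba, virtual_depth):
            """Composite a rendered virtual object over the camera frame using a
            'phantom' of the real scene built from a Kinect depth map: wherever the
            real surface is closer than the virtual one, the camera pixel wins, so
            real objects occlude virtual ones (shadow casting onto the phantom is
            omitted here and would be handled by the renderer)."""
            alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
            visible = (virtual_depth < real_depth)[..., None] & (alpha > 0)
            background = rgb_frame.astype(np.float32)
            blended = (1.0 - alpha) * background + alpha * virtual_rgba[..., :3]
            return np.where(visible, blended, background).astype(np.uint8)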