11 research outputs found

    Ray-on, an On-Site Photometric Augmented Reality Device

    Get PDF
    This paper describes work done to improve the visitor experience at the historical site of Cluny Abbey in Burgundy, France. The abbey's church, founded in 910 and the largest church in Christendom until the 15th century, was almost completely destroyed after the French Revolution and is therefore hard for visitors to envision. Virtual reality is one way to improve understanding of the site's architecture.

    In-Situ Visualization for Cultural Heritage Sites using Novel Augmented Reality Technologies

    Full text link
    Mobile Augmented Reality is an ideal technology for presenting information in an attractive, comprehensive and personalized way to visitors of cultural heritage sites. One of the pioneering projects in this area was the European project ArcheoGuide (IST-1999-11306), which developed and evaluated Augmented Reality (AR) at a very early stage. Much progress has been made since then, and novel devices and algorithms offer new possibilities and functionalities. In this paper we present current research work and discuss different approaches to Mobile AR for cultural heritage. Since this area is very broad, we focus on the visual aspects of such technologies, namely tracking and computer vision, as well as visualization. The work discussed in this article was supported by the European Union IST framework (IST 1999-11306) project ArcheoGuide and is continued in the current project iTACITUS (IST 2.5.10 – 034520). Stricker, D.; Pagani, A.; Zoellner, M. (2010). In-Situ Visualization for Cultural Heritage Sites using Novel Augmented Reality Technologies. Virtual Archaeology Review. 1(2):37-41. https://doi.org/10.4995/var.2010.4682

    Trends and perspectives in augmented reality training

    Get PDF


    Constructivist Digital Design Studio with Extended Reality for Effective Design Pedagogy

    Get PDF
    It is evident from previous research that learner preference, cognitive load and effective learning are interconnected. Designers' individual characteristics and their preferred modality of information delivery in the design studio bear directly on how effectively the delivered information is used. This study evaluates and discusses the possibilities of using XR (Extended Reality) technology within the framework of a constructivist learning approach in the interior design studio, measuring its effectiveness as a pedagogical tool. The nature of the design studio and its pedagogy have remained nearly unchanged throughout the past century (Bashier, 2014; Koch, 2006). The exponential advancement of information and communication technologies, and Generation Z's orientation toward a device-centred lifestyle, are two major challenges that today's design studios have yet to address for effective design education. With an overview of contemporary design pedagogy and the potential use of XR for a constructivist learning environment, this study explores students' learning styles and identifies how these learning preferences affect their learning outcomes in traditional and Extended Reality based learning environments.

    Designing and implementing interactive and realistic augmented reality experiences

    Get PDF
    In this paper, we propose an approach for supporting the design and implementation of interactive and realistic Augmented Reality (AR). Despite advances in AR technology, most software applications still fail to support AR experiences in which virtual objects appear merged into the real setting. To alleviate this situation, we propose combining model-based AR techniques with the advantages of current game engines to develop AR scenes in which the virtual objects collide, are occluded, cast shadows and, in general, are integrated into the augmented environment more realistically. To evaluate the feasibility of the proposed approach, we extended an existing game platform named GREP with AR capabilities. The realism of the AR experiences produced with the software was assessed in an event in which more than 100 people played two AR games simultaneously. This work is supported by the projects CREAx and PACE, funded by the Spanish Ministry of Economy, Industry and Competitiveness (TIN2014-56534-R and TIN2016-77690-R).
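    The occlusion effect described above reduces, at its core, to a per-pixel depth test between the reconstructed real scene and the rendered virtual objects: a virtual pixel is shown only where the virtual surface is closer to the camera than the real geometry. A minimal NumPy sketch of that compositing step (array names are illustrative, not taken from the GREP platform):

    ```python
    import numpy as np

    def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
        """Keep a virtual pixel only where the virtual surface is closer
        to the camera than the reconstructed real geometry."""
        in_front = virt_depth < real_depth   # boolean occlusion mask
        out = real_rgb.copy()
        out[in_front] = virt_rgb[in_front]   # overwrite unoccluded pixels
        return out
    ```

    In a game-engine setting the same test happens implicitly: the reconstructed model is rendered into the depth buffer only, so the engine's ordinary z-test occludes the virtual objects.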

    Stylisation d'objets éclairés par des cartes d'environnement HDR

    Get PDF
    In this article, we introduce a rendering pipeline for interactively stylizing objects lit by high-dynamic-range (HDR) environment maps. Using HDR images improves the quality of several processing steps, such as segmentation and detail extraction. Moreover, this architecture makes it easy to combine 2D stylizations (on the environment map and on images) with 3D stylizations (on the objects). The new styles we present are built on this pipeline and illustrate the flexibility of our approach.

    Content creation for seamless augmented experiences with projection mapping

    Get PDF
    This dissertation explores systems and methods for creating projection mapping content that seamlessly merges virtual and physical. Most virtual reality and augmented reality technologies rely on screens for display and interaction, where a mobile device or head mounted display mediates the user's experience. In contrast, projection mapping uses off-the-shelf video projectors to augment the appearance of physical objects, and with projection mapping there is no screen to mediate the experience. The physical world simply becomes the display. Projection mapping can provide users with a seamless augmented experience, where virtual and physical become indistinguishable in an apparently unmediated way. Projection mapping is an old concept dating to Disney's 1969 Haunted Mansion. The core technical foundations were laid back in 1999 with UNC's Office of the Future and Shader Lamps projects. Since then, projectors have gotten brighter, higher resolution, and drastically decreased in price. Yet projection mapping has not crossed the chasm into mainstream use. The largest remaining challenge for projection mapping is that content creation is very difficult and time consuming. Content for projection mapping is still created via a tedious manual process by warping a 2D video file onto a 3D physical object using existing tools (e.g. Adobe Photoshop) which are not made for defining animated interactive effects on 3D object surfaces. With existing tools, content must be created for each specific display object, and cannot be re-used across experiences. For each object the artist wants to animate, the artist must manually create a custom texture for that specific object, and warp the texture to the physical object. This limits projection mapped experiences to controlled environments and static scenes. If the artist wants to project onto a different object from the original, they must start from scratch creating custom content for that object. 
This manual content creation process is time consuming, expensive and doesn't scale. This thesis explores new methods for creating projection mapping content. Our goal is to make projection mapping easier, cheaper and more scalable. We explore methods for adaptive projection mapping, which enable artists to create content once; that content then adapts based on the color and geometry of the display surface, so it can be re-used on any surface. This thesis is composed of three proof-of-concept prototypes exploring new methods of content creation for projection mapping. IllumiRoom expands video game content beyond the television screen and into the physical world, using a standard video projector to surround a television with projected light. IllumiRoom works in any living room; the projected content dynamically adapts based on the color and geometry of the room. RoomAlive expands on this idea, using multiple projectors to cover an entire living room in input/output pixels and dynamically adapting gaming experiences to fill the room. Finally, Projectibles focuses on the physical aspect of projection mapping: it optimizes the display surface color to increase the contrast and resolution of the overall experience, enabling artists to design the physical object along with the virtual content. The proof-of-concept prototypes presented in this thesis are aimed at the not-too-distant future. The projects in this thesis are not theoretical concepts, but fully working prototype systems that demonstrate the practicality of projection mapping for creating immersive experiences. It is the sincere hope of the author that these experiences quickly move out of the lab and into the real world.
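    The surface-adaptive idea above can be illustrated with a basic radiometric compensation step: given the surface albedo and ambient light at each pixel, solve for the projector image that makes the surface appear a desired target colour. A hedged sketch using a simple linear image-formation model (not the dissertation's actual optimization):

    ```python
    import numpy as np

    def compensate(target, albedo, ambient=0.0):
        """Invert the per-pixel model observed = albedo * projected + ambient,
        clipping the result to the projector's displayable range [0, 1]."""
        projected = (target - ambient) / np.maximum(albedo, 1e-3)
        return np.clip(projected, 0.0, 1.0)
    ```

    The clip step is where surface design matters: a dark or saturated surface (low albedo) forces clipping and loses contrast, which is exactly the constraint Projectibles addresses by optimizing the printed surface colours themselves.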

    Colour videos with depth: acquisition, processing and evaluation

    Get PDF
    The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalent in computer vision and computer graphics are geometric models, which provide a wealth of information about represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I hence investigate a combination of videos and geometric models: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display. I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid – a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension which incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps – particularly in the presence of image noise. The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise. I show that these videos with depth empower a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information, like a proposed video relighting technique which requires high-quality surface normals to produce plausible results. 
In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically. These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model using a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies.
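    The fusion step that denoises low-resolution depth using the high-resolution colour stream is commonly built on joint (cross) bilateral filtering: depth values are averaged with weights taken from the colour guide image, so depth edges are preserved wherever colour edges exist. A minimal single-channel sketch of that idea (parameter values are illustrative, not the dissertation's):

    ```python
    import numpy as np

    def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
        """Smooth a noisy depth map while preserving edges present in a
        grayscale colour guide of the same resolution."""
        h, w = depth.shape
        out = np.zeros_like(depth)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                # spatial Gaussian on pixel offsets within the window
                dy, dx = np.mgrid[y0 - y:y1 - y, x0 - x:x1 - x]
                spatial = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))
                # range Gaussian on the *guide* image, not the depth
                range_w = np.exp(-(guide[y0:y1, x0:x1] - guide[y, x])**2
                                 / (2 * sigma_r**2))
                wgt = spatial * range_w
                out[y, x] = np.sum(wgt * depth[y0:y1, x0:x1]) / np.sum(wgt)
        return out
    ```

    Real-time variants (like the dual-cross-bilateral grid mentioned above) avoid this brute-force loop by splatting the data into a coarse grid and filtering there; the weighting principle is the same.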

    Creación de experiencias de realidad aumentada realistas por usuarios finales

    Get PDF
    Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Thesis committee: President, Juan Manuel Dodero Beardo; Secretary, Andrea Bellucci; Member, Camino Fernández Llama