1,322 research outputs found

    Augmented Reality Markerless Multi-Image Outdoor Tracking System for the Historical Buildings on Parliament Hill

    [EN] Augmented Reality (AR) applications have experienced extraordinary growth recently, evolving into a well-established method for the dissemination and communication of content related to cultural heritage, including education. AR applications have been used in museums and gallery exhibitions and in virtual reconstructions of historic interiors. However, the circumstances of an outdoor environment can be problematic. This paper presents a methodology to develop immersive AR applications based on the recognition of outdoor buildings. To demonstrate this methodology, a case study focused on the Parliament Buildings National Historic Site in Ottawa, Canada has been conducted. The site is currently undergoing a multiyear rehabilitation program that will make parts of this national monument inaccessible to the public. AR experiences, including simulated photo merging of historic and present content, are proposed as one tool that can enrich the Parliament Hill visit during the rehabilitation. Outdoor AR experiences are limited by factors such as variable lighting (and shadow) conditions, caused by changes in the environment (object height and orientation, obstructions, occlusions), the weather, and the time of day. This paper proposes a workflow to solve some of these issues through a multi-image tracking approach.

    This work has been developed under the framework of the New Paradigms/New Tools for Heritage Conservation in Canada, a project funded through the Social Sciences and Humanities Research Council of Canada (SSHRC).

    Blanco-Pons, S.; Carrión-Ruiz, B.; Duong, M.; Chartrand, J.; Fai, S.; Lerma, JL. (2019). Augmented Reality Markerless Multi-Image Outdoor Tracking System for the Historical Buildings on Parliament Hill. Sustainability. 11(16):1-15. https://doi.org/10.3390/su11164268
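    The multi-image approach described above keeps several reference images of the same façade, each captured under different lighting, and matches the live view against whichever reference is closest before tracking proceeds. The selection step can be sketched as follows; the function names and the coarse histogram metric are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of multi-image reference selection: each reference image
# of a building (captured under different lighting) is summarized by a
# coarse grayscale histogram; the live frame is matched to the closest
# reference. All names and the metric are illustrative assumptions.

def histogram(image, bins=8):
    """Coarse brightness histogram of a grayscale image (values 0-255)."""
    counts = [0] * bins
    for row in image:
        for value in row:
            counts[min(value * bins // 256, bins - 1)] += 1
    total = sum(counts)
    return [c / total for c in counts]

def best_reference(frame, references):
    """Pick the reference image whose histogram is closest to the frame's."""
    target = histogram(frame)
    def distance(ref):
        return sum(abs(a - b) for a, b in zip(histogram(ref), target))
    return min(references, key=distance)

# Two tiny "images": one dark (morning shade), one bright (midday sun).
dark = [[10, 20], [30, 40]]
bright = [[200, 210], [220, 230]]
frame = [[15, 25], [35, 45]]          # a dim live frame
assert best_reference(frame, [bright, dark]) == dark
```

A real system would use feature descriptors (e.g. ORB or SIFT) rather than global histograms, but the selection logic is the same.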

    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects would be rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the procedure is mostly accomplished offline. In our approach, the illumination information extracted from the physical scene is used to render the virtual objects interactively, which results in more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto virtual objects) using region capture of a 2D texture from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using shader language through multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on whether the shadows cast by the virtual objects are consistent with the shadows cast by the real objects, at a reduced performance cost.
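    The first step, recovering the incident light direction from a 360° feed, can be pictured as locating the brightest region of an equirectangular frame and converting its pixel coordinates to a 3D direction. This is a simplified stand-in for the paper's computer-vision technique; the single-light assumption and all names are illustrative:

```python
import math

# Sketch of direct-illumination estimation from an equirectangular 360°
# frame: find the brightest pixel and map its (row, column) position to a
# 3D light direction. Assumes one dominant light; names are illustrative.

def light_direction(frame):
    """frame: 2D list of brightness values from the 360° camera."""
    rows, cols = len(frame), len(frame[0])
    # Brightest pixel = assumed position of the incident light on the sphere.
    r, c = max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: frame[rc[0]][rc[1]])
    # Equirectangular mapping: column -> azimuth, row -> elevation.
    azimuth = (c + 0.5) / cols * 2 * math.pi - math.pi
    elevation = math.pi / 2 - (r + 0.5) / rows * math.pi
    return (math.cos(elevation) * math.sin(azimuth),   # x
            math.sin(elevation),                        # y (up)
            math.cos(elevation) * math.cos(azimuth))    # z

# A 4x8 frame with a hot spot in the top row: the light comes from above.
frame = [[0] * 8 for _ in range(4)]
frame[0][4] = 255
x, y, z = light_direction(frame)
assert y > 0.7   # the estimated direction points markedly upward
```

The resulting direction vector would then drive the shadow-casting light in the shader passes the abstract mentions.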

    HOLOGRAPHICS: Combining Holograms with Interactive Computer Graphics

    Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, which is an important property that is still not provided efficiently by competing technologies – such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography or computer-generated holograms (CGH) facilitate the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed either by rendering multiple perspective images and combining them into a stereogram [4], or by simulating the optical interference and calculating the interference pattern [5]. Once computed, such a system dynamically visualizes the fringes with a holographic display. Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today's computer technology still sets the limits for electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, low-resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may reveal acceleration possibilities that can overcome these limitations [6]. In parallel to the development of computer graphics, and despite their non-interactivity, optical and digital holography have created new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Display holography in particular has conquered several application domains.
    Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues (perspective, binocular disparity, motion parallax, convergence, and accommodation) and theoretically can be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings for teaching, training, experimentation, and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural-color holographic silver halide emulsion with grain sizes of 8 nm is today's state of the art [14]. Today, computer graphics and raster displays offer a megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide a terapixel resolution and are able to present information content in the range of terabytes in real time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next few years – even if Moore's law proves to hold in the future. Obviously, one has to make a decision between interactivity and quality when choosing a display technology for a particular application.
    While some applications require high visual realism and real-time presentation (which cannot both be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Up until today, however, these tools have been applied separately. The intention of the project summarized in this chapter is to combine both technologies to create a powerful tool for science, industry, and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer-generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e., extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e., interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.). The results of these investigations are presented in this chapter.
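    The second fringe-computation route mentioned above, simulating optical interference, can be illustrated for the simplest case of a single on-axis point source interfering with a plane reference wave: the intensity at each film position follows the path-length difference, producing the familiar Fresnel zone-plate fringes. A toy sketch, with arbitrary illustrative wavelength and geometry values:

```python
import math

# Toy interference-pattern computation for one on-axis point source at
# depth z with a plane reference wave. The fringe intensity follows
# cos(k * (sqrt(x^2 + y^2 + z^2) - z)): the Fresnel zone-plate pattern.
# Wavelength and grid values are arbitrary illustrative choices.

wavelength = 0.5e-6            # 500 nm, green light
k = 2 * math.pi / wavelength   # wavenumber
z = 0.1                        # point source 10 cm behind the film plane

def fringe(x, y):
    path_difference = math.sqrt(x * x + y * y + z * z) - z
    return math.cos(k * path_difference)

# On the optical axis the path difference vanishes: a constructive peak.
assert abs(fringe(0.0, 0.0) - 1.0) < 1e-9

# Sample a small strip of film; values oscillate between -1 and 1.
strip = [fringe(i * 1e-4, 0.0) for i in range(50)]
assert all(-1.0 <= v <= 1.0 for v in strip)
```

Evaluating this over a full film plane at hologram resolution is exactly the data volume the abstract identifies as the bottleneck for interactive electroholography.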

    Augmented Reality in Industry 4.0 and Future Innovation Programs

    Augmented Reality (AR) is recognized worldwide as one of the leading technologies of the 21st century and one of the pillars of the new industrial revolution envisaged by the Industry 4.0 international program. Several papers describe, in detail, specific applications of Augmented Reality developed to test its potential in a variety of fields. However, there is a lack of sources detailing the current limits of this technology in the event of its introduction into a real working environment, where everyday tasks could be carried out by operators using an AR-based approach. A literature analysis to detect AR strengths and weaknesses has been carried out, and a set of case studies has been implemented by the authors to find the limits of current AR technologies in industrial applications outside the laboratory-protected environment. The outcome of this paper is that, even though Augmented Reality is a well-consolidated computer graphics technique in research applications, several improvements from both a software and a hardware point of view should be made before it is introduced into industrial operations. The originality of this paper lies in the detection of guidelines to improve Augmented Reality's potential in factories and industries.

    Santi, GM; Ceruti, A; Liverani, A; Osti, F

    Spatial integration in computer-augmented realities

    In contrast to virtual reality, which immerses the user in a wholly computer-generated perceptual environment, augmented reality systems superimpose virtual entities on the user's view of the real world. This concept promises to fulfil new applications in a wide range of fields, but there are some challenging issues to be resolved. One issue relates to achieving accurate registration of virtual and real worlds. Accurate spatial registration is required not only with respect to lateral positioning, but also in depth. A limiting problem with existing optical see-through displays, typically used for augmenting reality, is that they are incapable of displaying a full range of depth cues. Most significantly, they are unable to occlude the real background and hence cannot produce interposition depth cueing. Neither are they able to modify the real-world view in the ways required to produce convincing common-illumination effects, such as virtual shadows across real surfaces. Also, at present, there are no wholly satisfactory ways of determining suitable common illumination models with which to determine the real-virtual light interactions necessary for producing such depth cues. This thesis establishes that interposition is essential for appropriate estimation of depth in augmented realities, and that the presence of shadows provides an important refining cue. It also extends the concept of a transparency alpha channel to allow optical see-through systems to display appropriate depth cues. The generalised theory of the approach is described mathematically, and algorithms are developed to automate generation of display-surface images. Three practical physical display strategies are presented: using a transmissive mask, selective lighting using digital projection, and selective reflection using digital micromirror devices. With respect to obtaining a common illumination model, all current approaches require either prior knowledge of the light sources illuminating the real scene, or the insertion of some kind of probe into the scene with which to determine real light source position, shape, and intensity. This thesis presents an alternative approach that infers a plausible illumination from a limited view of the scene.
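    The extended alpha-channel idea can be pictured as a per-pixel composite in which a mask element attenuates the real-world light wherever an opaque virtual object (or a virtual shadow) must appear in front of it. A minimal numeric sketch; the linear compositing model and all values are assumptions for illustration:

```python
# Sketch of occlusion support in an optical see-through display: a mask
# attenuates real-world light by (1 - alpha) per pixel before the virtual
# image is optically added on top. Linear model; values are illustrative.

def composite(real, virtual, alpha):
    """Per-pixel: observed = real * (1 - alpha) + virtual * alpha."""
    return [[real[r][c] * (1 - alpha[r][c]) + virtual[r][c] * alpha[r][c]
             for c in range(len(real[0]))]
            for r in range(len(real))]

real    = [[0.9, 0.9], [0.9, 0.9]]   # bright real background
virtual = [[0.2, 0.0], [0.0, 0.0]]   # one dark virtual pixel, top-left
alpha   = [[1.0, 0.0], [0.0, 0.5]]   # opaque, transparent, half shadow

out = composite(real, virtual, alpha)
assert out[0][0] == 0.2    # virtual object fully occludes the background
assert out[0][1] == 0.9    # the real world passes through unchanged
assert out[1][1] == 0.45   # half-alpha darkening: a virtual shadow
```

The three display strategies in the thesis (transmissive mask, selective projection, micromirror reflection) are different physical realizations of the same per-pixel attenuation term.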

    Optimization of Computer generated holography rendering and optical design for a compact and large eyebox Augmented Reality glass

    Thesis (Master of Science in Informatics)--University of Tsukuba, no. 41288, 2019.3.2

    A comprehensive taxonomy for three-dimensional displays

    Even though three-dimensional (3D) displays are a relatively recent development in display technology, they have undergone a rapid evolution, to the point that a plethora of equipment able to reproduce dynamic three-dimensional scenes in real time is now becoming commonplace in the consumer market. This paper's main contributions are (1) a clear definition of a 3D display, based on the visual depth cues supported, and (2) a hierarchical taxonomy of classes and subclasses of 3D displays, based on a set of properties that allows an unambiguous and systematic classification scheme for three-dimensional displays. Five main types of 3D displays are thus defined – two of them new – aiming to provide a taxonomy that is largely backwards-compatible, but that also clarifies prior inconsistencies in the literature. This well-defined outline should also enable exploration of the 3D display space and the devising of new 3D display systems.

    Fundação para a Ciência e Tecnologia
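    A cue-based definition of this kind can be sketched as a small classifier over the set of depth cues a device supports. The cue vocabulary and the qualifying rule below are illustrative assumptions, not the paper's actual criteria:

```python
# Illustrative sketch of a depth-cue-based test for "3D display": here a
# device qualifies if, beyond pictorial cues, it supports binocular
# disparity. The cue vocabulary and the rule are assumptions only.

CUES = {"perspective", "binocular_disparity", "motion_parallax",
        "convergence", "accommodation"}

def is_3d_display(supported):
    """Return True if the supported-cue set qualifies as a 3D display."""
    unknown = set(supported) - CUES
    if unknown:
        raise ValueError(f"unrecognized cues: {unknown}")
    return "binocular_disparity" in supported

assert not is_3d_display({"perspective"})                     # flat monitor
assert is_3d_display({"perspective", "binocular_disparity"})  # stereo HMD
```

A full taxonomy would refine this into subclasses (e.g. whether motion parallax or accommodation are also supported), which is what the paper's hierarchical property set formalizes.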

    On Inter-referential Awareness in Collaborative Augmented Reality

    For successful collaboration to occur, a workspace must support inter-referential awareness: the ability of one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents us with new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that the re-integration of physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges of several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad of factors found in collaborative AR, we present a generic, theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study which examines the behaviors of participants and how they generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using both physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. By implementing user feedback from this study, a follow-up study explores how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration.
    A third study was conducted in order to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest the need for participants to be parallel with the arrow vector (strengthening the argument for shared viewpoints), as well as the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.
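    The finding that a virtual pointer reads best when the observer's view is parallel to the arrow vector suggests a simple alignment check a collaborative AR system could apply before offering extra disambiguation aids (shared viewpoints or rendered shadows). The threshold and function names here are illustrative assumptions:

```python
import math

# Sketch of an alignment test between the observer's view direction and a
# virtual pointer's arrow vector: a small angle between them suggests the
# reference will be easy to interpret. The threshold is an assumed value.

def angle_between(u, v):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def pointer_is_readable(view_dir, arrow_dir, threshold_deg=30.0):
    """True if the viewer is roughly parallel with the arrow vector."""
    return angle_between(view_dir, arrow_dir) <= threshold_deg

assert pointer_is_readable((0, 0, 1), (0, 0, 1))       # same direction
assert not pointer_is_readable((1, 0, 0), (0, 0, 1))   # orthogonal view
```

When the check fails, a system might fall back to the aids the studies found effective: switching to a shared viewpoint or adding a shadow beneath the arrow.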