
    Outdoor Dynamic 3-D Scene Reconstruction

    Existing systems for 3D reconstruction from multiple view video use controlled indoor environments with uniform illumination and backgrounds to allow accurate segmentation of dynamic foreground objects. In this paper we present a portable system for 3D reconstruction of dynamic outdoor scenes, which require relatively large capture volumes with complex backgrounds and non-uniform illumination. This is motivated by the demand for 3D reconstruction of natural outdoor scenes to support film and broadcast production. Limitations of existing multiple view 3D reconstruction techniques for use in outdoor scenes are identified. Outdoor 3D scene reconstruction is performed in three stages: (1) 3D background scene modelling using spherical stereo image capture; (2) multiple view segmentation of dynamic foreground objects by simultaneous video matting across multiple views; and (3) robust 3D foreground reconstruction and multiple view segmentation refinement in the presence of segmentation and calibration errors. Evaluation is performed on several outdoor productions with complex dynamic scenes including people and animals. Results demonstrate that the proposed approach overcomes limitations of previous indoor multiple view reconstruction approaches, enabling high-quality free-viewpoint rendering and 3D reference models for production.
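
    A minimal sketch of the idea behind stage (2), assuming a per-camera background plate rendered from the stage (1) background model is available: difference the live frame against the plate to obtain an initial foreground trimap that a matting or refinement step could consume. This is not the paper's simultaneous multi-view video matting; the OpenCV-based differencing, thresholds, and file names below are illustrative assumptions.

    import cv2
    import numpy as np

    def foreground_trimap(frame_bgr, background_bgr, diff_thresh=30, band=9):
        """Difference a frame against the rendered background plate and build a
        trimap: 0 = background, 255 = foreground, 128 = unknown boundary band."""
        diff = cv2.absdiff(frame_bgr, background_bgr)
        dist = np.linalg.norm(diff.astype(np.float32), axis=2)
        fg = (dist > diff_thresh).astype(np.uint8) * 255
        # Remove small speckles, then mark a band around the boundary as unknown.
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        dilated = cv2.dilate(fg, np.ones((band, band), np.uint8))
        eroded = cv2.erode(fg, np.ones((band, band), np.uint8))
        trimap = np.full_like(fg, 128)
        trimap[dilated == 0] = 0
        trimap[eroded == 255] = 255
        return trimap

    if __name__ == "__main__":
        frame = cv2.imread("camera0_frame.png")       # placeholder input frame
        plate = cv2.imread("camera0_background.png")  # plate rendered from the background model
        cv2.imwrite("camera0_trimap.png", foreground_trimap(frame, plate))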

    Enabling Neural Radiance Fields (NeRF) for Large-scale Aerial Images -- A Multi-tiling Approach and the Geometry Assessment of NeRF

    Neural Radiance Fields (NeRF) offer the potential to benefit 3D reconstruction tasks, including aerial photogrammetry. However, the scalability and accuracy of the inferred geometry are not well documented for large-scale aerial assets, since such datasets usually result in very high memory consumption and slow convergence. In this paper, we aim to scale NeRF to large-scale aerial datasets and provide a thorough geometry assessment of NeRF. Specifically, we introduce a location-specific sampling technique as well as a multi-camera tiling (MCT) strategy to reduce memory consumption during image loading (RAM) and representation training (GPU memory), and to increase the convergence rate within tiles. MCT decomposes a large-frame image into multiple tiled images with different camera models, allowing these small-frame images to be fed into the training process as needed for specific locations without a loss of accuracy. We implement our method on a representative approach, Mip-NeRF, and compare its geometry performance with three photogrammetric MVS pipelines on two typical aerial datasets against LiDAR reference data. Both qualitative and quantitative results suggest that the proposed NeRF approach produces better completeness and object details than traditional approaches, although as of now it still falls short in terms of accuracy.
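
    The tiling idea is concrete enough to sketch: cropping a large frame only shifts the pinhole principal point, so each tile can be treated as its own small camera with the original pose. A hedged illustration of that geometry follows; the tile size, the synthetic intrinsics, and the function name are assumptions, not the authors' implementation.

    import numpy as np

    def tile_camera(K, image, tile_h=1024, tile_w=1024):
        """Yield (tile_image, K_tile) pairs. K is the 3x3 intrinsic matrix of the
        full frame; cropping at (x0, y0) shifts the principal point by (-x0, -y0)
        while the focal length and the camera pose stay unchanged."""
        H, W = image.shape[:2]
        for y0 in range(0, H, tile_h):
            for x0 in range(0, W, tile_w):
                tile = image[y0:y0 + tile_h, x0:x0 + tile_w]
                K_tile = K.copy()
                K_tile[0, 2] -= x0  # cx' = cx - x0
                K_tile[1, 2] -= y0  # cy' = cy - y0
                yield tile, K_tile

    if __name__ == "__main__":
        # Synthetic 4000 x 6000 "aerial frame" with made-up intrinsics, for illustration only.
        frame = np.zeros((4000, 6000, 3), dtype=np.uint8)
        K = np.array([[7000.0, 0.0, 3000.0],
                      [0.0, 7000.0, 2000.0],
                      [0.0, 0.0, 1.0]])
        tiles = list(tile_camera(K, frame))
        print(len(tiles), "tiles; first tile principal point:", tiles[0][1][:2, 2])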

    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. State-of-the-art algorithms and modeling methods for intelligent vehicles are presented, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.

    Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality

    Realistically inserting virtual objects into the physical environment in real time is a desirable feature of augmented reality (AR) applications and mixed reality (MR) in general. This problem is considered a vital research area in computer graphics, a field that is experiencing ongoing discovery. Algorithms and methods for dynamic, real-time illumination measurement, estimation, and rendering of augmented reality scenes are used in many applications to achieve a realistic perception by humans. The continuous development of computer vision and machine learning techniques, together with established computer graphics and image processing methods, has produced a significant range of novel AR/MR techniques. These techniques include methods for light source acquisition through image-based lighting or sampling, the registration and estimation of lighting conditions, and the composition of global illumination. In this review, we discuss the pipeline stages in detail and elaborate on the methods and techniques that contribute to photo-realistic rendering, visual coherence, and interactive real-time illumination in AR/MR.
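
    One stage the review covers, light source estimation from a captured environment map, can be illustrated with a toy computation: a luminance-weighted, solid-angle-corrected average direction over an equirectangular map. This is only a hedged sketch of the image-based-lighting idea, not any specific method from the surveyed literature; the coordinate convention and the synthetic map are assumptions.

    import numpy as np

    def dominant_light_direction(env_map):
        """env_map: (H, W, 3) equirectangular radiance map. Returns a unit vector
        toward the luminance-weighted centroid of the environment (y is up)."""
        H, W, _ = env_map.shape
        lum = env_map @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance
        theta = (np.arange(H) + 0.5) / H * np.pi             # polar angle per row
        phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi         # azimuth per column
        weight = lum * np.sin(theta)[:, None]                # solid-angle correction
        dirs = np.stack([np.sin(theta)[:, None] * np.cos(phi)[None, :],
                         np.repeat(np.cos(theta)[:, None], W, axis=1),
                         np.sin(theta)[:, None] * np.sin(phi)[None, :]], axis=-1)
        d = (weight[..., None] * dirs).sum(axis=(0, 1))
        return d / np.linalg.norm(d)

    if __name__ == "__main__":
        # Synthetic map with a bright patch near the zenith standing in for the sun.
        env = np.full((64, 128, 3), 0.05)
        env[4:8, 60:68] = 50.0
        print(dominant_light_direction(env))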

    Augmented Reality Markerless Multi-Image Outdoor Tracking System for the Historical Buildings on Parliament Hill

    Augmented Reality (AR) applications have experienced extraordinary growth recently, evolving into a well-established method for the dissemination and communication of content related to cultural heritage, including education. AR applications have been used in museums, gallery exhibitions, and virtual reconstructions of historic interiors. However, the circumstances of an outdoor environment can be problematic. This paper presents a methodology to develop immersive AR applications based on the recognition of outdoor buildings. To demonstrate this methodology, a case study focused on the Parliament Buildings National Historic Site in Ottawa, Canada, has been conducted. The site is currently undergoing a multiyear rehabilitation program that will make parts of this national monument inaccessible to the public. AR experiences, including simulated photo merging of historic and present content, are proposed as one tool that can enrich the Parliament Hill visit during the rehabilitation. Outdoor AR experiences are limited by factors such as variable lighting and shadow conditions caused by changes in the environment (object height and orientation, obstructions, occlusions), the weather, and the time of day. This paper proposes a workflow to address some of these issues through a multi-image tracking approach. This work has been developed under the framework of New Paradigms/New Tools for Heritage Conservation in Canada, a project funded through the Social Sciences and Humanities Research Council of Canada (SSHRC).
    Blanco-Pons, S.; Carrión-Ruiz, B.; Duong, M.; Chartrand, J.; Fai, S.; Lerma, J. L. (2019). Augmented Reality Markerless Multi-Image Outdoor Tracking System for the Historical Buildings on Parliament Hill. Sustainability, 11(16), 1-15. https://doi.org/10.3390/su11164268
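
    As a rough sketch of what markerless multi-image recognition can look like in practice, the snippet below matches a live frame against several reference photographs of a facade using ORB features and a RANSAC homography. This is a generic OpenCV approach shown only to illustrate the concept; the feature choice, thresholds, and function name are assumptions and not the workflow described in the paper.

    import cv2
    import numpy as np

    def match_to_references(frame_gray, reference_grays, min_inliers=25):
        """Return (best_reference_index, homography); (None, None) if no reference
        image of the building matches the current frame well enough."""
        orb = cv2.ORB_create(nfeatures=2000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        kf, df = orb.detectAndCompute(frame_gray, None)
        best_idx, best_H, best_inliers = None, None, 0
        for i, ref in enumerate(reference_grays):
            kr, dr = orb.detectAndCompute(ref, None)
            if df is None or dr is None:
                continue
            matches = matcher.match(dr, df)   # query: reference, train: frame
            if len(matches) < 4:
                continue
            src = np.float32([kr[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kf[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            inliers = int(mask.sum()) if mask is not None else 0
            if inliers >= min_inliers and inliers > best_inliers:
                best_idx, best_H, best_inliers = i, H, inliers
        return best_idx, best_H

    A returned homography maps reference-image coordinates into the current frame, which is what an AR layer would need to anchor overlaid historical content to the recognized view.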

    Imaginary spaces

    Current three-dimensional computer graphics technology has given artists and designers a new set of tools for producing amazingly life-like computer generated images. Imaginary Spaces is a series of images which visually depict two unique and imaginative digitally produced environments. By utilizing modern computer graphics technology, these artificial spaces have been brought to life in stunning realism and detail. Imaginary Spaces consists of seven total images which showcase each environment from alternating vantage points in virtual space.