450 research outputs found

    Formation of Morphable 3D Model of Large Scale Natural Sites by Using Image Based Modeling and Rendering Techniques

    No global 3D model of the environment needs to be assembled, a process which can be extremely cumbersome and error-prone for large-scale scenes: for example, the global registration of multiple local models can accumulate a great amount of error, and it also presumes a very accurate extraction of the underlying geometry. On the contrary, our framework requires neither such an accurate geometric reconstruction of the individual local 3D models nor a very precise registration between them in order to produce satisfactory results. This paper presents an application of LP-based MRF optimization techniques, and also turns to a different research topic: the proposal of novel image-based modeling and rendering methods capable of automatically reproducing faithful (i.e. photorealistic) digital copies of complex 3D virtual environments, while also allowing the virtual exploration of these environments at interactive frame rates.

    A multi-camera approach to image-based rendering and 3-D/Multiview display of ancient chinese artifacts


    Flexible Stereoscopic 3D Content Creation of Real World Scenes

    We propose an alternative to current approaches to stereoscopic 3D video content creation, based on free-viewpoint video. Acquisition and editing are greatly simplified, and our method is suitable for arbitrary real-world scenes. From unsynchronized multi-view video footage, our approach renders high-quality stereo sequences without the need to explicitly reconstruct any scene depth or geometry. By allowing free editing of viewpoint, slow motion, freeze-rotate shots, depth of field, and many other effects, the presented approach extends the possibilities of stereoscopic 3D movie creation.

    Design of Participatory Virtual Reality System for visualizing an intelligent adaptive cyberspace

    The concept of 'Virtual Intelligence' is proposed as an intelligent adaptive interaction between the simulated 3-D dynamic environment and the 3-D dynamic virtual image of the participant in the cyberspace created by a virtual reality system. A system design for such interaction is realised utilising only a stereoscopic optical head-mounted LCD display with an ultrasonic head tracker, a pair of gesture-controlled fibre optic gloves, and a speech recognition and synthesiser device, all connected to a Pentium computer. A 3-D dynamic environment is created by physically-based modelling and rendering in real time, and by modification of existing object description files by a fractals-based Morph software. It is supported by an extensive library of audio and video functions, and functions characterising the dynamics of various objects. The multimedia database files so created are retrieved or manipulated by intelligent hypermedia navigation and intelligent integration with existing information. Speech commands control the dynamics of the environment and the corresponding multimedia databases. The concept of a virtual camera developed by Zeltzer as well as Thalmann and Thalmann, as automated by Noma and Okada, can be applied for dynamically relating the orientation and actions of the virtual image of the participant with respect to the simulated environment. Utilising the fibre optic gloves, gesture-based commands are given by the participant for controlling his 3-D virtual image using a gesture language. Optimal estimation methods and dataflow techniques enable synchronisation between the commands of the participant expressed through the gesture language and his 3-D dynamic virtual image. Utilising a framework, developed earlier by the author, for adaptive computational control of distributed multimedia systems, the data access required for the environment as well as the virtual image of the participant can be endowed with adaptive capability.

    The development of a hybrid virtual reality/video view-morphing display system for teleoperation and teleconferencing

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2000. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 84-89). The goal of this study is to extend the desktop panoramic static image viewer concept (e.g., Apple QuickTime VR; IPIX) to support immersive real-time viewing, so that an observer wearing a head-mounted display can make free head movements while viewing dynamic scenes rendered in real-time stereo using video data obtained from a set of fixed cameras. Computational experiments by Seitz and others have demonstrated the feasibility of morphing image pairs to render stereo scenes from novel, virtual viewpoints. The user can interact both with morphed real-world video images and with supplementary artificial virtual objects ("Augmented Reality"). The inherent congruence of the real and artificial coordinate frames of this system reduces registration errors commonly found in Augmented Reality applications. In addition, the user's eyepoint is computed locally, so that any scene lag resulting from head movement will be less than that of alternative technologies using remotely controlled ground cameras. For space applications, this can significantly reduce the apparent lag due to satellite communication delay. This hybrid VR/view-morphing display ("Virtual Video") has many important NASA applications, including remote teleoperation, crew onboard training, private family and medical teleconferencing, and telemedicine. The technical objective of this study was to develop a proof-of-concept system, on a 3D graphics PC workstation, of one of the component technologies of Virtual Video: Immersive Omnidirectional Video. The management goal was to identify a system process for planning, managing, and tracking the integration, test, and validation of this phased, 3-year multi-university research and development program. By William E. Hutchison, S.M.
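    The abstract above renders novel stereo viewpoints from a set of fixed cameras driven by the user's tracked head position. One small piece of such a pipeline is mapping the tracked eyepoint to the pair of fixed cameras that bracket it, plus a blend (morph) parameter between them. The sketch below is illustrative only and not taken from the thesis; the function name, the 1-D camera layout, and the linear weighting are all our own simplifying assumptions.

    ```python
    import numpy as np

    def morph_weight(eye_x, cam_xs):
        """Map a tracked eyepoint to a bracketing camera pair and blend weight.

        eye_x:  horizontal eye position from the head tracker.
        cam_xs: sorted horizontal positions of the fixed cameras.
        Returns (i, s): morph between camera i and camera i+1 with weight s.
        """
        i = int(np.searchsorted(cam_xs, eye_x)) - 1
        i = max(0, min(i, len(cam_xs) - 2))          # clamp to a valid pair
        s = (eye_x - cam_xs[i]) / (cam_xs[i + 1] - cam_xs[i])
        return i, float(np.clip(s, 0.0, 1.0))        # clamp weight to [0, 1]
    ```

    A real system would do this per eye (left and right eyepoints offset by the interocular distance) and in 3-D, but the pair-selection-plus-weight pattern is the same.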

    Data augmentation for NeRF: a geometric consistent solution based on view morphing

    NeRF aims to learn a continuous neural scene representation by using a finite set of input images taken from different viewpoints. The fewer the viewpoints, the higher the likelihood of overfitting on them. This paper mitigates this limitation by presenting a novel data augmentation approach that generates geometrically consistent image transitions between viewpoints using view morphing. View morphing is a highly versatile technique that does not require any prior knowledge about the 3D scene, because it is based on general principles of projective geometry. A key novelty of our method is to use the very same depths predicted by NeRF to generate the image transitions that are then added to NeRF training. We experimentally show that this procedure enables NeRF to improve the quality of its synthesised novel views on datasets with few training viewpoints. We improve PSNR by up to 1.8 dB and 10.5 dB when eight and four views are used for training, respectively. To the best of our knowledge, this is the first data augmentation strategy for NeRF that explicitly synthesises additional new input images to improve the model generalisation.
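    The core interpolation step of view morphing (after Seitz and Dyer's prewarp, which rectifies the two input images into parallel views) can be sketched as below. This is a minimal illustrative sketch, not code from the paper: it assumes grayscale rectified views and a dense horizontal disparity map (which, in the paper's setting, would be derived from NeRF-predicted depths), and uses a naive forward warp with a cross-dissolve; `morph_views` and its signature are our own.

    ```python
    import numpy as np

    def morph_views(img0, img1, disp0, s):
        """Linearly interpolate between two rectified (prewarped) views.

        img0, img1: HxW grayscale arrays from parallel cameras.
        disp0:      per-pixel horizontal disparity mapping img0 -> img1.
        s:          morph parameter in [0, 1]; 0 reproduces view 0's geometry.
        """
        h, w = img0.shape
        out = np.zeros_like(img0, dtype=float)
        for y in range(h):
            for x in range(w):
                x1 = x - disp0[y, x]                     # matching column in img1
                xs = int(round((1 - s) * x + s * x1))    # interpolated column
                if 0 <= x1 < w and 0 <= xs < w:
                    # cross-dissolve the colors of the corresponding pixels
                    out[y, xs] = (1 - s) * img0[y, x] + s * img1[y, int(x1)]
        return out
    ```

    Production implementations also handle occlusions, hole filling, and the final postwarp back to the desired virtual camera, which this sketch omits.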