33 research outputs found

    Change detection in combination with spatial models and its effectiveness on underwater scenarios

    This thesis proposes a novel change detection approach for underwater scenarios and combines it with several specially developed spatial models, which allows accurate and spatially coherent detection of moving objects with a static camera in arbitrary environments. To deal with the particular problems of underwater imaging, pre-segmentations based on optical flow and other adaptations were added to the change detection algorithm so that it can better handle typical underwater scenarios, such as a scene crowded by an entire fish swarm.
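The abstract does not give the algorithm itself; as a minimal, hypothetical sketch of the general idea, the following intersects a simple running-average background model (standing in for the thesis's statistical change-detection model) with a crude frame-difference motion cue (standing in for the optical-flow pre-segmentation):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Running-average background model: a simple stand-in for a
    # statistical per-pixel change-detection model.
    return (1 - alpha) * bg + alpha * frame

def combined_change_mask(bg, prev, cur, diff_thresh=25, motion_thresh=10):
    # A pixel counts as "changed" only if it deviates from the
    # background model AND shows frame-to-frame motion; the second
    # cue stands in for an optical-flow pre-segmentation.
    changed = np.abs(cur - bg) > diff_thresh
    moving = np.abs(cur - prev) > motion_thresh
    return changed & moving

# Synthetic demo: a bright square moving over a static background.
bg = np.full((64, 64), 30.0)
prev = bg.copy()
prev[20:30, 0:10] = 200.0   # square at its old position
cur = bg.copy()
cur[20:30, 4:14] = 200.0    # square shifted 4 pixels to the right

model = update_background(bg, prev)
mask = combined_change_mask(model, prev, cur)
```

Intersecting the two cues suppresses pixels supported by only one of them, which is the spirit of combining a change model with a flow-based pre-segmentation: here only the leading edge of the square, where the pixel both differs from the background and just moved, survives in `mask`.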

    Actors in VR storytelling

    Virtual Reality (VR) storytelling enhances the immersion of users into virtual environments (VE). Its use in virtual cultural heritage presentations helps revive the genius loci (the spirit of the place) of cultural monuments. This paper aims to show that the use of actors in VR storytelling adds to the quality of the user experience and improves the edutainment value of virtual cultural heritage applications. We describe the Baiae dry-visit application, which takes us on a time travel to the city considered by the Roman elite as "Little Rome" (Pusilla Roma) and which is presently only partially preserved under the sea.

    Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments

    Indiana University-Purdue University Indianapolis (IUPUI)
    Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, these systems suffer from a lack of correct direct and indirect illumination modeling, in which virtual objects should be rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly computed offline, as a stand-in for real-time GI. Instead, illumination information can be extracted from the physical scene to interactively render virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes and then uses the extracted illumination information to render the objects added to the scene. The system covers several major components to achieve a more realistic augmented reality outcome. First, incident light (direct illumination) is detected in the physical scene with computer vision techniques based on the topological structural analysis of 2D images, using a live-feed 360-degree camera mounted on an AR device that captures the entire radiance map. In addition, physics-based light polarization eliminates or reduces false-positive lights such as white surfaces, reflections, or glare, which negatively affect the light detection process. Second, reflected light (indirect illumination) that bounces between real-world surfaces is simulated and rendered onto the virtual objects, reflecting their existence in the virtual world. Third, the shading characteristics and properties of the virtual objects are defined to depict the correct lighting with suitable shadow casting.
    Fourth, the geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are developed as methods assumed to work simultaneously in real time for photo-realistic AR. The system is tested under several lighting conditions to evaluate the accuracy of the results, based on the error incurred between the shadows cast by real and virtual objects and their interactions. For system efficiency, the rendering time is compared with previous work. A further evaluation of human perception is conducted through a user study. The overall performance of the system is investigated to reduce its cost to a minimum.
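The system's actual light-detection pipeline is not given in the abstract; as an illustrative, hypothetical sketch of the first component, a dominant light direction can be estimated from an equirectangular radiance map by thresholding its brightest pixels and converting their centroid into a 3D unit vector (contour-based structural analysis would do this more robustly):

```python
import numpy as np

def detect_light_direction(radiance_map, thresh=0.9):
    """Estimate a dominant light direction from an equirectangular
    radiance map: threshold the brightest pixels, take their centroid,
    and map that pixel position to a 3D unit vector."""
    h, w = radiance_map.shape
    ys, xs = np.nonzero(radiance_map >= thresh * radiance_map.max())
    cy, cx = ys.mean(), xs.mean()
    # Equirectangular mapping: x -> azimuth phi, y -> polar angle theta.
    phi = 2 * np.pi * (cx / w) - np.pi
    theta = np.pi * (cy / h)
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.cos(theta),                  # y is "up"
                  np.sin(theta) * np.sin(phi)])
    return d / np.linalg.norm(d)

# Synthetic 360-degree map with a bright patch near the top,
# i.e. an overhead light source.
env = np.zeros((64, 128))
env[2:6, 60:68] = 1.0
direction = detect_light_direction(env)
```

For the overhead patch the recovered direction points almost straight up; the direction can then drive the placement of a virtual light used to shade and shadow the augmented objects.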

    3D Recording and Interpretation for Maritime Archaeology

    This open access, peer-reviewed volume was inspired by the UNESCO UNITWIN Network for Underwater Archaeology International Workshop held at Flinders University, Adelaide, Australia in November 2016. Content is based on, but not limited to, the work presented at the workshop, which was dedicated to 3D recording and interpretation for maritime archaeology. The volume consists of contributions from leading international experts as well as up-and-coming early career researchers from around the globe. The content of the book includes the recording and analysis of maritime archaeology through emerging technologies, with both practical and theoretical contributions. Topics include photogrammetric recording, laser scanning, marine geophysical 3D survey techniques, virtual reality, 3D modelling and reconstruction, data integration and Geographic Information Systems. The principal incentive for this publication is the rapid shift in the methodologies of maritime archaeology in recent years and a marked increase in the use of 3D and digital approaches. This convergence of digital technologies such as underwater photography and photogrammetry, 3D sonar, 3D virtual reality, and 3D printing has highlighted a pressing need for these new methodologies to be considered together, both in terms of defining the state of the art and in considering future directions. As a scholarly publication, the audience for the book includes students and researchers, as well as professionals working in various aspects of archaeology, heritage management, education, museums, and public policy. It will be of special interest to those working in coastal cultural resource management and underwater archaeology, but will also be of broader interest to anyone interested in archaeology and to those in other disciplines who are now engaging with 3D recording and visualization.

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images become ever more realistic, the need for modeling computer graphics content grows stronger. To achieve photo-realism, detailed scenes have to be modeled, often with a significant amount of manual labour. Interdisciplinary research combining the fields of Computer Graphics, Computer Vision and Scientific Computing has led to the development of (semi-)automatic modeling tools that free the user from labour-intensive modeling tasks. The modeling of animated content is especially challenging, since realistic motion is necessary to convince the audience of computer games, movies with mixed reality content and augmented reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke and fluid flows.

    Towards Predictive Rendering in Virtual Reality

    The quest for generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials.
    The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
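The thesis's actual BTF compression scheme is not detailed in this abstract; a common baseline for this kind of data is a truncated SVD over the texel-by-(view, light) matrix, sketched here on synthetic low-rank data:

```python
import numpy as np

# A BTF stores one reflectance value per (texel, view, light) sample.
# Flattened into a matrix with texels as rows and view/light pairs as
# columns, real BTF data is highly redundant and compresses well.
rng = np.random.default_rng(0)
texels, view_light, rank = 1024, 256, 8
btf = (rng.standard_normal((texels, rank))
       @ rng.standard_normal((rank, view_light)))  # synthetic low-rank BTF

U, S, Vt = np.linalg.svd(btf, full_matrices=False)
k = 8                                   # number of retained components
factors = (U[:, :k], S[:k], Vt[:k])     # what would be stored on the GPU
btf_approx = U[:, :k] * S[:k] @ Vt[:k]  # reconstruction at render time

rel_err = np.linalg.norm(btf - btf_approx) / np.linalg.norm(btf)
ratio = (texels * k + k + k * view_light) / (texels * view_light)
```

Since the synthetic data has rank 8 exactly, the rank-8 reconstruction is exact up to floating-point error while storing only a few percent of the original entries; real BTF data is only approximately low-rank, so `k` trades memory against fidelity.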

    Computational Light Transport for Forward and Inverse Problems.

    Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering and architecture, including the generation of validated data for image-based computational techniques. However, simulating light transport accurately is a costly process. As a consequence, a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light propagation, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light transport simulation, aimed both at improving the efficiency of its computation and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the flow of light that arrives directly from luminaires in a Monte Carlo image-generation system, significantly reducing the variance of the resulting images for the same execution time. We also introduce a density-estimation technique in the transient state that allows better reuse of temporal samples in a participating medium. In the application domain, we introduce two new uses of light transport: a model for simulating a special kind of goniochromatic pigments that exhibit pearlescent appearance, with the goal of providing an intuitive editing workflow for manufacturing, and a non-line-of-sight imaging technique using time-of-flight information of light, built on a wave-based model of light propagation.
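The thesis's direct-light method itself is not specified in this abstract; as a generic, hypothetical illustration of why sampling the luminaire directly reduces variance at equal sample counts, compare uniform hemisphere sampling against sampling only the solid angle subtended by a small emitter:

```python
import numpy as np

rng = np.random.default_rng(1)
half_angle = 0.1                            # small emissive cap around the normal
cos_a = np.cos(half_angle)
exact = np.pi * np.sin(half_angle) ** 2     # analytic irradiance of the cap

n = 20000
# Naive: uniform directions over the hemisphere (pdf = 1/2pi),
# most of which miss the emitter entirely.
cos_t = rng.random(n)                       # cos(theta) uniform -> uniform solid angle
naive = np.where(cos_t > cos_a, cos_t, 0.0) * 2 * np.pi

# Light sampling: draw directions inside the cap only
# (pdf = 1 / (2pi * (1 - cos_a))) and reweight accordingly.
cos_t2 = 1 - rng.random(n) * (1 - cos_a)
nee = cos_t2 * 2 * np.pi * (1 - cos_a)

err_nee = abs(nee.mean() - exact)           # both estimators are unbiased,
std_ratio = nee.std() / naive.std()         # but their variances differ hugely
```

Both estimators converge to the same analytic irradiance, but the naive estimator wastes almost every sample on directions that see no light, while the light-sampling estimator concentrates all samples on the emitter, shrinking the variance by orders of magnitude for the same sample budget.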

    Digital paleoart: reconstruction and restoration from laser-scanned fossils

    For over two centuries, the science of paleontology has used imagery to help convey its ideas. This “paleoart” struggled at first to be accepted as a legitimate tool in science due to fears over the scientific accuracy of reconstructions and restorations. At first, many restorations were made on the basis of poor fossil evidence, a result of the vastly incomplete nature of the fossil record compounded by the fact that too many artistic and scientific speculations became integrated into the artwork. The 20th century brought the general acceptance of paleoart, and throughout the century it co-evolved with paleontology as new discoveries were made and new ideas took hold. In the digital age, new technologies are being used not only for reconstructions and restorations, but for research purposes themselves. Digital technologies, including three-dimensional digitizers and computer animation, can now be used to create more accurate restorations than ever before, based directly on extremely detailed three-dimensional digitized fossils. To capitalize on this technology, and to push the limits of what is possible in 21st-century paleoart, a restoration of the extinct crocodylian Thoracosaurus neocesariensis was created, starting from a relatively complete digitized fossil source. The rest of the animal’s skeleton, its muscles, and its outer skin were then restored using techniques of computer animation, guided by the scientific literature and principles of comparative anatomy. Motion was then used to depict the behavior and habits of the animal, informed by the biology of the restoration and observations of the animal’s closest living relatives. The goal was to restore an animal with an unprecedented degree of scientific accuracy by blending rigorous scientific research with state-of-the-art computer-generated imagery. M.S., Digital Media -- Drexel University, 201