
    Methods for transform, analysis and rendering of complete light representations

    Recent advances in digital holography, optical engineering and computer graphics have opened up the possibility of full-parallax, three-dimensional displays. The premises of these rendering systems are, however, somewhat different from those of traditional imaging and video systems: instead of rendering an image of the scene, the complete light distribution must be computed. In this thesis we discuss several methods for processing and rendering two well-known complete light representations: the light field and the hologram. A light field transform approach, based on matrix optics operators, is introduced. Thereafter we discuss the relationship between the light field and hologram representations. The final part of the thesis is concerned with hologram and wave field synthesis, for which we present two different methods. First, a GPU-accelerated approach to rendering point-based models is introduced. Thereafter, we develop a Fourier rendering approach capable of generating the angular spectra of triangular mesh models.
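    The abstract above mentions a light field transform approach based on matrix optics operators. As a rough, hypothetical illustration of that idea (not the thesis's actual algorithm), the sketch below resamples a two-dimensional position/angle slice of a light field under paraxial free-space propagation, whose ABCD ray-transfer matrix M = [[1, d], [0, 1]] shears the (x, θ) plane; the function name, sampling scheme and interpolation choice are assumptions made for the example.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def propagate_light_field(L, x, theta, d):
        """Resample a 2D (position x angle) light field slice under paraxial
        free-space propagation by distance d, i.e. under the ray-transfer
        matrix M = [[1, d], [0, 1]] applied to each ray (x, theta).

        L     : 2D array, L[i, j] = radiance at position x[i], angle theta[j]
        x     : 1D array of sample positions (same units as d)
        theta : 1D array of sample angles (radians, paraxial)
        """
        # Inverse mapping: a ray observed at (x', theta') after propagation
        # originated at (x' - d*theta', theta') on the input plane.
        Xp, Tp = np.meshgrid(x, theta, indexing="ij")
        x_src = Xp - d * Tp          # inverse of x' = x + d*theta
        t_src = Tp                   # the angle is unchanged by free space

        # Convert source coordinates to fractional sample indices.
        ix = (x_src - x[0]) / (x[1] - x[0])
        it = (t_src - theta[0]) / (theta[1] - theta[0])

        # Bilinear interpolation; rays leaving the sampled aperture read as 0.
        return map_coordinates(L, [ix, it], order=1, mode="constant", cval=0.0)
    ```

    A thin lens of focal length f would be handled the same way, using the inverse of M = [[1, 0], [-1/f, 1]], i.e. a shear along the angular axis instead of the spatial one.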

    Deep imaging inside scattering media through virtual spatiotemporal wavefront shaping

    The multiple scattering of light makes materials opaque and obstructs imaging. Optimized wavefronts can overcome scattering to focus light, but they typically require restrictive guidestars and only work within an isoplanatic patch. Focusing by lenses and wavefront shaping by spatial light modulators also limit the imaging volume and update speed. Here, we introduce scattering matrix tomography (SMT), which uses the measured scattering matrix of the sample to construct a volumetric image by scanning a confocal spatiotemporal focus, with input and output wavefront correction for every isoplanatic patch, dispersion compensation, and index-mismatch correction, all performed digitally during post-processing without a physical guidestar. The digital focusing offers a large depth of field unconstrained by the focal plane's Rayleigh range, and the digital wavefront correction enables image optimization with fast updates unrestricted by the speed of the hardware. We demonstrate SMT with sub-micron diffraction-limited lateral resolution and one-micron bandwidth-limited axial resolution at one millimeter beneath ex vivo mouse brain tissue and inside a dense colloid, where conventional imaging methods fail due to the overwhelming multiple scattering. SMT translates deep-tissue imaging into a computational reconstruction and optimization problem. It is noninvasive and label-free, with prospective applications in medical diagnosis, biological science, colloidal physics, and device inspection
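    The reconstruction described above can be caricatured as a double projection of the measured scattering matrix onto input and output wavefronts that focus on each voxel, summed coherently over frequency to form a spatiotemporal focus. The sketch below illustrates only that core idea under simplifying assumptions (plane-wave bases; no aberration, dispersion or index-mismatch correction); all names and array layouts are invented for the example, and this is not the authors' implementation.

    ```python
    import numpy as np

    def smt_confocal_image(S, k_in, k_out, omegas, points):
        """Toy sketch of a 'digital confocal focus': for each voxel r, phase
        the inputs and outputs so that all frequencies focus at r and at time
        t = 0, then sum the scattering-matrix elements coherently.

        S      : complex array (n_omega, n_out, n_in), measured scattering matrices
        k_in   : (n_omega, n_in, 3)  input wavevectors per frequency
        k_out  : (n_omega, n_out, 3) output wavevectors per frequency
        omegas : (n_omega,) angular frequencies (sets the loop length here)
        points : (n_points, 3) voxel positions to reconstruct
        """
        image = np.zeros(len(points))
        for p, r in enumerate(points):
            psi = 0.0 + 0.0j
            for w in range(len(omegas)):
                # Input wavefront that converges onto r (conjugate plane-wave phases).
                e_in = np.exp(-1j * k_in[w] @ r)       # (n_in,)
                # Output projection onto a wave diverging from r.
                e_out = np.exp(-1j * k_out[w] @ r)     # (n_out,)
                # Confocal double projection, summed coherently over frequency
                # (spatiotemporal focus at r, t = 0).
                psi += e_out @ S[w] @ e_in
            image[p] = np.abs(psi) ** 2
        return image
    ```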

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment, where the naturally three-dimensional environment has been abstracted to a two-dimensional representation populated with simple geometrical shapes and symbols. Such an abstract representation, however, requires a map reading ability. Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited to mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed by developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can run on current mobile phones, and rendering rates are excellent on devices with 3D graphics hardware. Such 3D maps can also be transferred and rendered on the fly, sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on the interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits usability
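    Since the abstract names visibility culling and level-of-detail management as the core of its rendering solution, a toy sketch of how those two steps are commonly combined may help. The data layout and function below are assumptions made for illustration, not the dissertation's renderer.

    ```python
    import numpy as np

    def select_visible_blocks(blocks, cam_pos, frustum_planes, lod_ranges):
        """Combine view-frustum culling with distance-based level-of-detail
        selection for static city-model blocks (a generic sketch).

        blocks         : list of dicts with 'center' (3,) and 'radius' (bounding sphere)
        cam_pos        : (3,) camera position
        frustum_planes : (6, 4) planes as (nx, ny, nz, d), normals pointing inward
        lod_ranges     : ascending distance thresholds; the index of the first
                         threshold >= distance gives the LOD to draw (0 = finest)
        """
        draw_list = []
        for i, b in enumerate(blocks):
            c, r = np.asarray(b["center"]), b["radius"]
            # Cull the block if its bounding sphere lies fully outside any plane.
            outside = any(np.dot(p[:3], c) + p[3] < -r for p in frustum_planes)
            if outside:
                continue
            # Pick the level of detail from the camera distance.
            dist = np.linalg.norm(c - cam_pos)
            lod = int(np.searchsorted(lod_ranges, dist))
            draw_list.append((i, lod))
        return draw_list
    ```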

    Light field image processing: an overview

    Light field imaging has emerged as a technology that captures richer visual information about our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in terms of data capture, data compression, content editing, and display. Taken together, these two aspects have made research in light field image processing increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data
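    Post-capture refocusing, mentioned above as one of the capabilities a light field enables, is usually illustrated with the shift-and-add algorithm: every sub-aperture view is translated in proportion to its angular offset and the views are averaged. The sketch below assumes a (U, V, H, W) sub-aperture layout and monochrome images; it is a minimal illustration, not code from the survey.

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(lightfield, alpha):
        """Post-capture refocusing by shift-and-add.

        lightfield : array (U, V, H, W), sub-aperture images indexed by the
                     angular coordinates (u, v); monochrome for simplicity
        alpha      : refocusing parameter (ratio of the new focal-plane depth
                     to the original one); alpha = 1 reproduces the captured focus
        """
        U, V, H, W = lightfield.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Translate each view in proportion to its angular offset;
                # the factor (1 - 1/alpha) follows the standard plane-parameterized
                # refocusing formula.
                dy = (u - uc) * (1 - 1.0 / alpha)
                dx = (v - vc) * (1 - 1.0 / alpha)
                out += nd_shift(lightfield[u, v], (dy, dx), order=1, mode="nearest")
        return out / (U * V)
    ```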

    Roadmap on holography

    From its inception holography has proven an extremely productive and attractive area of research. While specific technical applications give rise to 'hot topics', and three-dimensional (3D) visualisation comes in and out of fashion, the core principles involved continue to lead to exciting innovations in a wide range of areas. We humbly submit that it is impossible, in any journal document of this type, to fully reflect current and potential activity; however, our valiant contributors have produced a series of documents that go no small way to neatly capture progress across a wide range of core activities. As editors we have attempted to spread our net wide in order to illustrate the breadth of international activity. In relation to this we believe we have been at least partially successful. This work was supported by Ministerio de Economía, Industria y Competitividad (Spain) under projects FIS2017-82919-R (MINECO/AEI/FEDER, UE) and FIS2015-66570-P (MINECO/FEDER), and by Generalitat Valenciana (Spain) under project PROMETEO II/2015/015

