    Quasi light fields: extending the light field to coherent radiation

    Imaging technologies such as dynamic viewpoint generation are engineered for incoherent radiation using the traditional light field, and for coherent radiation using electromagnetic field theory. We present a model of coherent image formation that strikes a balance between the utility of the light field and the comprehensive predictive power of Maxwell's equations. We synthesize research in optics and signal processing to formulate, capture, and form images from quasi light fields, which extend the light field from incoherent to coherent radiation. Our coherent cameras generalize the classic beamforming algorithm in sensor array processing, and invite further research on alternative notions of image formation.
    Comment: This paper was published in JOSA A. The final version is available on the OSA website: http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-26-9-205
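    The classic beamforming algorithm that these coherent cameras generalize is delay-and-sum: phase-align the sensor outputs for a hypothesized source direction and sum them coherently. As a point of reference, here is a minimal narrowband sketch in Python; the function name and toy array geometry are illustrative, not taken from the paper.

```python
import numpy as np

def delay_and_sum(samples, positions, direction, wavelength):
    """Classic narrowband delay-and-sum beamformer (beam power for one
    hypothesized direction).

    samples    : complex baseband snapshot, one value per sensor
    positions  : (N, 3) sensor coordinates in meters
    direction  : unit vector pointing toward the hypothesized source
    wavelength : carrier wavelength in meters
    """
    k = 2 * np.pi / wavelength
    # Expected phase of the incoming plane wave at each sensor.
    steering = np.exp(1j * k * positions @ direction)
    # Conjugate-align and sum coherently; magnitude squared is beam power.
    return np.abs(np.vdot(steering, samples)) ** 2

# Toy usage: an 8-element linear array at half-wavelength spacing,
# steered broadside toward a plane wave arriving from that direction.
pos = np.stack([np.arange(8) * 0.5, np.zeros(8), np.zeros(8)], axis=1)
snapshot = np.ones(8, dtype=complex)
print(delay_and_sum(snapshot, pos, np.array([0.0, 0.0, 1.0]), 1.0))
```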

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
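    As a concrete illustration of one capability mentioned above, post-capture refocusing is commonly implemented as a shift-and-sum over the angular dimensions of the 4D light field L(u, v, s, t): each sub-aperture view is translated in proportion to its angular offset, then the views are averaged. A minimal sketch under that standard formulation, with illustrative names and a synthetic light field rather than any particular dataset:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(lf, alpha):
    """Shift-and-sum synthetic refocus of a 4D light field.

    lf    : array of shape (U, V, S, T), angular dimensions first
    alpha : ratio of the new focal plane depth to the captured one
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Translate each sub-aperture view in proportion to its
            # offset from the central view, then accumulate.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += subpixel_shift(lf[u, v], (du, dv), order=1)
    return out / (U * V)

# Toy usage: a synthetic 5x5-view light field, refocused at alpha = 1.2.
lf = np.random.rand(5, 5, 64, 64)
image = refocus(lf, 1.2)
```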

    Methods for transform, analysis and rendering of complete light representations

    Recent advances in digital holography, optical engineering, and computer graphics have opened up the possibility of full-parallax, three-dimensional displays. The premises of these rendering systems are, however, somewhat different from those of traditional imaging and video systems: instead of rendering an image of the scene, the complete light distribution must be computed. In this thesis we discuss several methods for processing and rendering two well-known full light representations: the light field and the hologram. A light field transform approach, based on matrix optics operators, is introduced. Thereafter we discuss the relationship between the light field and hologram representations. The final part of the thesis is concerned with hologram and wave field synthesis, where we present two different methods. First, a GPU-accelerated approach to rendering point-based models is introduced. Thereafter, we develop a Fourier rendering approach capable of generating angular spectra of triangular mesh models.
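    To make the matrix optics idea concrete: in the paraxial regime, each ray is a position-angle pair (x, θ) that an optical element maps linearly via a 2x2 ray-transfer (ABCD) matrix, so applying the matrix per ray re-parameterizes the whole light field; free-space propagation, for instance, shears it. A minimal sketch of this operator view, with illustrative names and constants, not the thesis's actual implementation:

```python
import numpy as np

def transform_rays(x, theta, abcd):
    """Apply a paraxial ray-transfer (ABCD) matrix to a bundle of rays.

    x, theta : arrays of ray positions and angles (one light field slice)
    abcd     : 2x2 ray-transfer matrix of the optical element
    """
    rays = np.stack([x, theta])   # shape (2, N)
    out = abcd @ rays
    return out[0], out[1]

# Free-space propagation over distance d shears the light field:
# x' = x + d * theta, theta' = theta.
d = 0.1
propagate = np.array([[1.0, d],
                      [0.0, 1.0]])

# A thin lens of focal length f tilts rays without displacing them.
f = 0.05
lens = np.array([[1.0, 0.0],
                 [-1.0 / f, 1.0]])

# Operators compose by matrix product (the rightmost acts first).
x, theta = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-0.1, 0.1, 8))
x2, theta2 = transform_rays(x.ravel(), theta.ravel(), propagate @ lens)
```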

    Towards Effective Displays for Virtual and Augmented Reality

    Virtual and augmented reality (VR and AR) are becoming increasingly accessible and useful. This dissertation focuses on several aspects of designing effective displays for VR and AR.

    Compared to conventional desktop displays, VR and AR displays can better engage the human peripheral vision, providing an opportunity for more information to be perceived. To fully leverage the human visual system, we must account for how it perceives things differently in the periphery than in the fovea. By investigating the relationship between perception time and eccentricity, we deduce a scaling function that lets content in the far periphery be perceived as efficiently as in the central vision.

    AR overlays additional information on the real environment, which is useful in a number of fields, including surgery, where time-critical information is key. We present a medical AR system that visualizes the occluded catheter in the external ventricular drainage (EVD) procedure. We develop an accurate and efficient catheter tracking method that requires minimal changes to existing medical equipment. The AR display projects a virtual image of the catheter over the occluded real catheter to depict its real-time position, making the risky EVD procedure much safer.

    Existing VR and AR displays support only a limited number of focal distances, leading to the vergence-accommodation conflict. Holographic displays can address this issue. We explore the design and development of the nanophotonic phased array (NPA) as a special class of holographic display: NPAs are compact and support very high refresh rates, but their use of the thermo-optic effect for phase modulation renders them susceptible to the thermal proximity effect. We study how the proximity effect impacts the images formed on NPAs, then propose several novel algorithms to compensate for it and compare their effectiveness and computational efficiency.

    Computer-generated holography (CGH) has traditionally focused on 2D images and on 3D content in the form of meshes and point clouds, but volumetric data can also benefit from CGH. One of the challenges in using volumetric data sources in CGH is the computational complexity of calculating the holograms. We propose a new method that achieves a significant speedup over existing holographic volume rendering methods.
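    The computational complexity that motivates the last contribution is easy to see in the baseline point-source method for CGH, which accumulates one spherical wavefront per scene point at every hologram pixel. A minimal, unoptimized sketch of that baseline (names and conventions are illustrative, not the dissertation's method):

```python
import numpy as np

def point_source_hologram(points, amplitudes, grid_xy, z_plane, wavelength):
    """Naive point-source CGH: accumulate one spherical wavefront per
    scene point at every hologram pixel, O(N * H * W) overall.

    points     : (N, 3) scene point coordinates
    amplitudes : (N,) complex point amplitudes
    grid_xy    : (H, W, 2) x/y coordinates of the hologram pixels
    z_plane    : z coordinate of the hologram plane
    """
    k = 2 * np.pi / wavelength
    field = np.zeros(grid_xy.shape[:2], dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((grid_xy[..., 0] - px) ** 2 +
                    (grid_xy[..., 1] - py) ** 2 +
                    (z_plane - pz) ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave per point
    return field
```

    For volumetric sources, N grows with the full voxel count, which is exactly where a speedup over per-point accumulation matters.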

    Advanced methods for relightable scene representations in image space

    The realistic reproduction of the visual appearance of real-world objects requires accurate computer graphics models that describe the optical interaction of a scene with its surroundings. Data-driven approaches that model the scene globally as a reflectance field function in eight parameters deliver high quality and work for most material combinations, but are costly to acquire and store. Image-space relighting, which constrains the application to creating photos from a virtual, fixed camera under freely chosen illumination, requires only a 4D data structure to provide full fidelity. This thesis contributes to image-space relighting on four accounts: (1) we investigate the acquisition of 4D reflectance fields in the context of sampling theory, propose a practical setup for pre-filtering reflectance data during recording, and apply it in an adaptive sampling scheme; (2) we introduce a feature-driven image synthesis algorithm for the interpolation of coarsely sampled reflectance data in software to achieve highly realistic images; (3) we propose an implicit reflectance data representation, which uses a Bayesian approach to relight complex scenes from the example of a much simpler reference object; (4) finally, we construct novel, passive devices out of optical components that render reflectance field data in real time, shaping the incident illumination into the desired image.
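    Image-space relighting ultimately rests on the linearity of light transport: a 4D reflectance field sampled as one-light-at-a-time basis images relights the scene by a weighted sum. A minimal sketch of that core operation, with illustrative names and shapes:

```python
import numpy as np

def relight(basis_images, light_weights):
    """Image-space relighting from a sampled 4D reflectance field.

    basis_images  : (L, H, W, 3) images, one per incident light direction
    light_weights : (L,) intensities of the novel illumination expressed
                    in the same directional basis
    """
    # Light transport is linear, so any novel illumination is a weighted
    # sum of the one-light-at-a-time basis images.
    return np.tensordot(light_weights, basis_images, axes=1)

# Toy usage: 16 light directions, novel illumination mixing two of them.
basis = np.random.rand(16, 32, 32, 3)
w = np.zeros(16)
w[2], w[7] = 0.6, 0.4
novel = relight(basis, w)
```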

    Generating Pictures from Waves: Aspects of Image Formation

    Thesis Supervisor: Gregory W. Wornell, Professor of Electrical Engineering and Computer Science

    The research communities, technologies, and tools for image formation are diverse. On the one hand, computer vision and graphics researchers analyze incoherent light using coarse geometric approximations from optics. On the other hand, array signal processing and acoustics researchers analyze coherent sound waves using stochastic estimation theory and diffraction formulas from physics. The ability to inexpensively fabricate analog circuitry and digital logic for millimeter-wave radar and ultrasound creates opportunities for comparing diverse perspectives on image formation, and presents challenges in implementing imaging systems that scale in size. We present algorithms, architectures, and abstractions for image formation that relate the different communities, technologies, and tools. We address practical technical challenges in operating millimeter-wave radar and ultrasound systems in the presence of phase noise and scattering. We model a broad class of physical phenomena with isotropic point sources. We show that the optimal source location estimator for coherent waves reduces to processing an image produced by a conventional camera, provided the sources are well-separated relative to the system resolution, in the limit of small wavelength and globally incoherent light. We introduce quasi light fields to generalize the incoherent image formation process to coherent waves, offering resolution tradeoffs that surpass the traditional Fourier uncertainty principle by leveraging time-frequency distributions. We show that the number of sensors in a coherent imaging array defines a stable operating point relative to the phase noise. We introduce a digital phase tightening algorithm to reduce phase noise. We present a system identification framework for multiple-input multiple-output (MIMO) ultrasound imaging that generalizes existing approaches with time-varying filters. Our theoretical results enable the application of traditional techniques in incoherent imaging to coherent imaging, and vice versa. Our practical results suggest a methodology for designing millimeter-wave imaging systems. Our conclusions reinforce architectural principles governing transmitter and receiver design, the role of analog and digital circuitry, and the tradeoff between data rate and data precision.

    Funding: Microsoft Research, MIT Lincoln Laboratory, and the C2S2 Focus Center, one of six research centers funded under the Focus Center Research Program (FCRP), a Semiconductor Research Corporation entity.
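    The quasi light fields mentioned above are built from time-frequency distributions; the Wigner distribution is the canonical example, pairing position with spatial frequency (direction) for a coherent field. A minimal discrete sketch for a 1D field follows; the function name and sampling are illustrative, and the thesis develops a whole family of such distributions rather than this one alone.

```python
import numpy as np

def wigner(field):
    """Discrete Wigner distribution of a 1D complex field.

    Returns W[x, u]: a quasi-light-field-like map of position x versus
    spatial frequency (direction) u.
    """
    N = len(field)
    W = np.zeros((N, N), dtype=complex)
    for x in range(N):
        # Symmetric autocorrelation at position x over lag s.
        corr = np.zeros(N, dtype=complex)
        for s in range(-(N // 2), N // 2):
            a, b = x + s, x - s
            if 0 <= a < N and 0 <= b < N:
                corr[s % N] = field[a] * np.conj(field[b])
        W[x] = np.fft.fft(corr)   # lag -> direction
    return W.real  # real by construction, though possibly negative

# Toy usage: a coherent Gaussian beam sampled at 64 points.
xs = np.linspace(-4, 4, 64)
qlf = wigner(np.exp(-xs ** 2))
```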

    A Bidirectional Light Field-Hologram Transform

    In this paper, we propose a novel framework to represent visual information. Extending the notion of conventional image-based rendering, our framework makes joint use of both light fields and holograms as complementary representations. We demonstrate how light fields can be transformed into holograms, and vice versa. By exploiting the advantages of either representation, our proposed dual representation and processing pipeline is able to overcome the limitations inherent in light fields and holograms alone. We show various examples, from synthetic and real light fields to digital holograms, demonstrating advantages of either representation, such as speckle-free images, ghosting-free images, aliasing-free recording, natural light recording, aperture-dependent effects, and real-time rendering, all of which can be achieved within the same framework. Capturing holograms under white-light illumination is one promising application for future work.
    Categories and Subject Descriptors (according to ACM CCS): I.3.0 [Computer Graphics]: General
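    One common way to realize the hologram-to-light-field direction of such a transform is a windowed (short-time) Fourier transform: each window's local spatial spectrum approximates the angular distribution of rays at that position. A minimal 1D sketch of this general idea, with illustrative names; it should not be read as this paper's exact pipeline.

```python
import numpy as np

def hologram_to_lightfield(hologram, window=16, hop=8):
    """Windowed Fourier transform of a 1D hologram slice.

    Each window's spatial spectrum approximates the local angular
    distribution of rays, i.e. one spatial sample of a light field.
    Rows index window position; columns index direction (frequency).
    """
    w = np.hanning(window)
    starts = range(0, len(hologram) - window + 1, hop)
    return np.array([
        np.abs(np.fft.fftshift(np.fft.fft(w * hologram[s:s + window]))) ** 2
        for s in starts
    ])

# Toy usage: a synthetic hologram of a single off-axis plane wave;
# its energy concentrates in one direction column at every position.
x = np.arange(256)
holo = np.exp(1j * 2 * np.pi * 0.1 * x)   # linear phase = tilted ray
lf = hologram_to_lightfield(holo)
```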