
    4D Frequency Analysis of Computational Cameras for Depth of Field Extension

    Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography: wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show that they either spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis, we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.
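
    The paper's central premise, that deconvolution quality is governed by the frequency power spectrum of the defocus kernel, is easy to illustrate numerically. Below is a minimal Python sketch (an illustration only, not the paper's lattice-focal analysis) comparing the spectra of conventional disk-shaped defocus kernels for a hypothetical wide and narrow aperture:

    ```python
    import numpy as np

    def disk_kernel(radius_px, size=64):
        """Binary disk PSF (the classical defocus blur of a standard lens), unit sum."""
        y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
        k = (x**2 + y**2 <= radius_px**2).astype(float)
        return k / k.sum()

    def power_spectrum(kernel):
        """Squared magnitude of the kernel's 2D DFT, i.e. its frequency power spectrum."""
        return np.abs(np.fft.fftshift(np.fft.fft2(kernel)))**2

    # Hypothetical defocus radii (in pixels) for a wide and a narrow aperture at one depth.
    for name, radius in [("wide aperture", 8), ("narrow aperture", 2)]:
        ps = power_spectrum(disk_kernel(radius))
        # Frequencies where the spectrum is low are lost to defocus,
        # and deconvolution can only amplify noise there.
        print(name, "median spectral power:", np.median(ps))
    ```

    A kernel whose spectrum stays high over the whole depth range loses less information to defocus, which is the property the lattice-focal design aims to maximize.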

    Filtered Blending and Floating Textures: Projective Texturing with Multiple Images without Ghosting Artifacts

    Whenever approximate 3D geometry is projectively texture-mapped from different directions simultaneously, annoyingly visible aliasing artifacts are the result. To prevent such ghosting in projective texturing and image-based rendering, we propose two different GPU-based rendering strategies: filtered blending and floating textures. Either approach is able to cope with imprecise 3D geometry as well as inexact camera calibration. Ghosting artifacts are effectively eliminated at real-time rendering frame rates on standard graphics hardware. With the proposed rendering techniques, better-quality rendering results are obtained from fewer images, coarser 3D geometry, and less accurately calibrated images.
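
    As a rough illustration of the floating-textures idea, the Python sketch below blends several projectively textured views after warping each one by a per-view correction flow. The flows and blend weights are assumed to be given (in the actual method they are obtained on the GPU, e.g. from optical flow between the projected views and from camera proximity):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(image, flow):
        """Backward-warp an HxWxC image by a per-pixel 2D flow field (HxWx2, in pixels)."""
        h, w = image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
        return np.stack([map_coordinates(image[..., c], coords, order=1, mode='nearest')
                         for c in range(image.shape[2])], axis=-1)

    def floating_blend(projected_views, flows, weights):
        """Warp each projectively textured view by its correction flow, then blend.

        `projected_views` are the input images already projected into the output view,
        `flows` are per-view correction fields, and `weights` are per-view scalars,
        e.g. based on how close each input camera is to the output viewpoint.
        """
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        warped = (warp(img, f) for img, f in zip(projected_views, flows))
        return sum(wi * im for wi, im in zip(w, warped))
    ```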

    Efficient multi-view ray tracing using edge detection and shader reuse

    Stereoscopic rendering and 3D stereo displays are quickly becoming mainstream. The natural extension is autostereoscopic multi-view displays, which, by the use of parallax barriers or lenticular lenses, can accommodate many simultaneous viewers without the need for active or passive glasses. As these displays, for the foreseeable future, will support only a rather limited number of views, there is a need for high-quality interperspective antialiasing. We present a specialized algorithm for efficient multi-view image generation from a camera line using ray tracing, which builds on previous methods for multi-dimensional adaptive sampling and reconstruction of light fields. We introduce multi-view silhouette edges to detect sharp geometrical discontinuities in the radiance function. These are used to significantly improve the quality of the reconstruction. In addition, we exploit shader coherence by computing analytical visibility between shading points and the camera line, and by sharing shading computations over the camera line.
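
    A minimal sketch of the shader-reuse part: each shading result is computed once per hit point and splatted into every camera along the line from which the point is visible. The data layout and the `cameras` projection callbacks here are hypothetical; the paper obtains the point-to-camera-line visibility analytically rather than per discrete view.

    ```python
    import numpy as np

    def reuse_shading(points, colors, visible, cameras):
        """Splat each shading result into every view along the camera line.

        points:  (N, 3) hit points from the ray-tracing pass
        colors:  (N, 3) shading results, computed once per point (assumed view-independent)
        visible: (N, V) boolean visibility of each point from each of the V cameras
        cameras: V callables mapping a 3D point to (x, y) pixel coordinates in that view
        """
        views = [dict() for _ in cameras]          # sparse splat buffers: pixel -> color
        for i, p in enumerate(points):
            for v, project in enumerate(cameras):
                if visible[i, v]:
                    px = tuple(np.round(project(p)).astype(int))
                    views[v][px] = colors[i]       # one shading computation, reused V times
        return views
    ```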

    Experimental Techniques and Image Reconstruction for Magnetic Resonance Imaging with Inhomogeneous Fields

    University of Minnesota Ph.D. dissertation, August 2019. Major: Physics. Advisors: Michael Garwood, Geoffrey Ghose. 1 computer file (PDF); x, 107 pages.
    Magnetic resonance imaging is quite sensitive to experimental imperfections, necessitating extremely expensive electrical infrastructure and design requirements to permit high-quality experiments to be performed. By relaxing the sensitivity to imperfection, the entire system can be made less expensive and more accessible by shrinking the magnet generating the polarizing field. Decreasing the magnet size relative to the bore increases the polarizing field inhomogeneity. Moreover, current progress in MRI at ultra-high field (greater than or equal to 7T) is pushing the limits of conventional MRI methods, as field inhomogeneity increases with field strength. Hence, while many of the methods herein were developed with a small magnet in mind, they also apply at ultra-high field. The appeal of ultra-high field is increased detection sensitivity, such that ever-smaller structures may be imaged in animals and humans. The primary goal of this work is to extend the current ability of magnetic resonance imaging to tolerate a large degree of spatial variation in both the transmit and polarizing fields involved. A novel method of decreasing radiofrequency pulse duration for multidimensional pulses is presented, rendering them more robust to field inhomogeneity. Furthermore, this method is leveraged to accelerate data acquisition. A new imaging sequence for quantitative determination of transverse relaxation rates is presented, which tolerates large variations in both the transmit and polarizing magnetic fields, as is often found when imaging with iron-oxide nanoparticles and/or at ultra-high field. Finally, a computationally efficient approach for spatiotemporally-encoded image reconstruction is presented, which is inherently robust to field inhomogeneity.
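
    As a generic illustration of quantitative transverse relaxation mapping (not the specific sequence proposed in this dissertation), a mono-exponential decay S(TE) = S0 * exp(-R2 * TE) can be fitted to multi-echo magnitudes by log-linear least squares:

    ```python
    import numpy as np

    def fit_r2(echo_times_s, signal):
        """Estimate the transverse relaxation rate R2 (1/s) from multi-echo magnitudes.

        Assumes a mono-exponential decay S(TE) = S0 * exp(-R2 * TE); a real
        reconstruction also has to handle the noise floor and field inhomogeneity,
        which this sketch ignores.
        """
        te = np.asarray(echo_times_s, dtype=float)
        y = np.log(np.asarray(signal, dtype=float))
        A = np.stack([np.ones_like(te), -te], axis=1)       # columns: [ln S0, R2]
        (ln_s0, r2), *_ = np.linalg.lstsq(A, y, rcond=None)
        return r2, np.exp(ln_s0)

    # Synthetic check: decay with R2 = 25 1/s (T2 = 40 ms) sampled at four echo times.
    te = np.array([0.01, 0.02, 0.04, 0.08])
    print(fit_r2(te, 100 * np.exp(-25 * te)))   # ~ (25.0, 100.0)
    ```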

    OmniPhotos: Casual 360° VR Photography

    Virtual reality headsets are becoming increasingly popular, yet it remains difficult for casual users to capture immersive 360° VR panoramas. State-of-the-art approaches require capture times of usually far more than a minute and are often limited in their supported range of head motion. We introduce OmniPhotos, a novel approach for quickly and casually capturing high-quality 360° panoramas with motion parallax. Our approach requires a single sweep with a consumer 360° video camera as input, which takes less than 3 seconds to capture with a rotating selfie stick or 10 seconds handheld. This capture time is an order of magnitude shorter than that of any other VR photography approach supporting motion parallax. We improve the visual rendering quality of our OmniPhotos by alleviating vertical distortion using a novel deformable proxy geometry, which we fit to a sparse 3D reconstruction of captured scenes. In addition, the 360° input views significantly expand the available viewing area, and thus the range of motion, compared to previous approaches. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 66599.
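
    A simplified stand-in for the view-selection step is sketched below: given the capture positions on the circular sweep (parameterized by angle) and a desired viewing direction, pick the two closest captures and derive linear blend weights. The actual renderer additionally uses flow-based blending on the deformable proxy geometry.

    ```python
    import numpy as np

    def pick_views(camera_angles, query_angle):
        """Return the indices of the two captures on the circular sweep closest in
        angle to the desired viewing direction, plus linear blend weights."""
        diff = np.angle(np.exp(1j * (np.asarray(camera_angles) - query_angle)))  # wrap to (-pi, pi]
        order = np.argsort(np.abs(diff))[:2]
        d = np.abs(diff[order])
        if d.sum() == 0:                        # query coincides with a capture position
            return order, np.array([1.0, 0.0])
        return order, 1.0 - d / d.sum()         # the nearer capture gets the larger weight

    # Example: 72 captures evenly spaced on the sweep circle, query direction 0.3 rad.
    print(pick_views(np.linspace(0, 2 * np.pi, 72, endpoint=False), 0.3))
    ```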

    DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes

    Sharp feature lines carry essential information about human-made objects, enabling compact 3D shape representations and high-quality surface reconstruction, and serving as a signal source for mesh processing. While extracting high-quality lines from noisy and undersampled data is challenging for traditional methods, deep-learning-powered algorithms can leverage global and semantic information from the training data to aid in the process. We propose Deep Estimators of Features (DEFs), a learning-based framework for predicting sharp geometric features in sampled 3D shapes. Differently from existing data-driven methods, which reduce this problem to feature classification, we propose to regress a scalar field representing the distance from point samples to the closest feature line on local patches. By fusing the results of individual patches, we can process large 3D models, which existing data-driven methods cannot handle due to their size and complexity. An extensive experimental evaluation of DEFs on synthetic and real-world 3D shape datasets suggests advantages of our image- and point-based estimators over competing methods, as well as improved noise robustness and scalability of our approach.
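
    The fusion step can be sketched as below, under the assumption (not stated in the abstract) that overlapping patch predictions are simply averaged per point; feature points are then selected by thresholding the fused distance field with a hypothetical threshold.

    ```python
    import numpy as np

    def fuse_patches(n_points, patch_indices, patch_distances):
        """Fuse per-patch distance-to-feature predictions into one per-point field.

        patch_indices:   list of index arrays into the full point set, one per patch
        patch_distances: list of predicted distance arrays of matching shapes
        Overlapping predictions are averaged; uncovered points are set to +inf.
        """
        acc, cnt = np.zeros(n_points), np.zeros(n_points)
        for idx, dist in zip(patch_indices, patch_distances):
            np.add.at(acc, idx, dist)
            np.add.at(cnt, idx, 1)
        fused = np.full(n_points, np.inf)
        covered = cnt > 0
        fused[covered] = acc[covered] / cnt[covered]
        return fused

    def near_feature(fused_distance, threshold=0.02):
        """Indices of points lying within a (hypothetical) threshold of a feature line."""
        return np.flatnonzero(fused_distance < threshold)
    ```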

    Design and analysis of a two-dimensional camera array

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 153-158). By Jason Chieh-Sheng Yang.
    I present the design and analysis of a two-dimensional camera array for virtual studio applications. It is possible to substitute conventional cameras and motion control devices with a real-time, light field camera array. I discuss a variety of camera architectures and describe a prototype system based on the "finite-viewpoints" design that allows multiple viewers to navigate virtual cameras in a dynamically changing light field captured in real time. The light field camera consists of 64 commodity video cameras connected to off-the-shelf computers. I employ a distributed rendering algorithm that overcomes the data bandwidth problems inherent in capturing light fields by selectively transmitting only those portions of the video streams that contribute to the desired virtual view. I also quantify the capabilities of a virtual camera rendered from a camera array in terms of the range of motion, range of rotation, and effective resolution. I compare these results to other configurations. From this analysis I provide a method for camera array designers to select and configure cameras to meet desired specifications. I demonstrate the system and the conclusions of the analysis with a number of examples that exploit dynamic light fields.
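
    The bandwidth-saving idea can be sketched as follows: once the renderer has determined, for every pixel of the virtual view, which camera and which source location it samples, only the touched image blocks of each camera need to be transmitted. The block size and data layout here are hypothetical.

    ```python
    import numpy as np
    from collections import defaultdict

    def blocks_to_transmit(src_camera, src_pixel, block=32):
        """Return, per camera, the set of image blocks the virtual view actually samples.

        src_camera: (H, W) array with the index of the camera chosen for each virtual pixel
        src_pixel:  (H, W, 2) array with the (x, y) position sampled in that camera
        Only these blocks need to leave the capture machines, which keeps the
        bandwidth of a 64-camera array manageable.
        """
        needed = defaultdict(set)
        bx = (src_pixel[..., 0] // block).astype(int)
        by = (src_pixel[..., 1] // block).astype(int)
        for cam, x, y in zip(src_camera.ravel(), bx.ravel(), by.ravel()):
            needed[int(cam)].add((int(x), int(y)))
        return needed
    ```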

    LiveNVS: Neural View Synthesis on Live RGB-D Streams

    Existing real-time RGB-D reconstruction approaches, like Kinect Fusion, lack real-time photo-realistic visualization. This is due to noisy, oversmoothed, or incomplete geometry and blurry textures, which are fused from imperfect depth maps and camera poses. Recent neural rendering methods can overcome many of these artifacts but are mostly optimized for offline usage, hindering their integration into a live reconstruction pipeline. In this paper, we present LiveNVS, a system that allows for neural novel view synthesis on a live RGB-D input stream with very low latency and real-time rendering. Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating the features in image space into a target feature map. A generalizable neural network then translates the target feature map into a high-quality RGB image. LiveNVS achieves state-of-the-art neural rendering quality of unknown scenes during capturing, allowing users to virtually explore the scene and assess reconstruction quality in real time. Comment: main paper: 8 pages, total number of pages: 15, 13 figures, to be published in SIGGRAPH Asia 2023 Conference Papers; edits: link was fixed.
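
    The projection step can be read as depth-guided warping: the fused depth map rendered in the target view backprojects each target pixel to 3D, the point is transformed into a source camera, and that view's feature map is sampled there. Below is a simplified single-source sketch with nearest-neighbour sampling; LiveNVS aggregates several such warped feature maps and uses a learned network for the final translation to RGB, so the exact weighting is not reproduced here.

    ```python
    import numpy as np

    def warp_source_features(src_feat, K_src, T_tgt_to_src, K_tgt, tgt_depth):
        """Gather one source view's features for every target pixel via the fused
        target-view depth map: backproject, transform, project, sample.

        src_feat:     (Hs, Ws, C) neural feature map of the source view
        K_src, K_tgt: (3, 3) camera intrinsics
        T_tgt_to_src: (4, 4) rigid transform from target- to source-camera coordinates
        tgt_depth:    (Ht, Wt) fused depth rendered into the target view
        """
        Ht, Wt = tgt_depth.shape
        ys, xs = np.mgrid[0:Ht, 0:Wt]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)   # homogeneous pixels
        rays = pix @ np.linalg.inv(K_tgt).T                                 # unit-depth rays
        pts = np.concatenate([rays * tgt_depth[..., None],
                              np.ones((Ht, Wt, 1))], axis=-1)               # 3D points (homogeneous)
        pts_src = pts @ T_tgt_to_src.T                                      # into source camera frame
        proj = pts_src[..., :3] @ K_src.T
        z = np.maximum(proj[..., 2], 1e-9)                                  # guard against z <= 0
        u = np.round(proj[..., 0] / z).astype(int)
        v = np.round(proj[..., 1] / z).astype(int)
        Hs, Ws = src_feat.shape[:2]
        valid = (proj[..., 2] > 0) & (u >= 0) & (u < Ws) & (v >= 0) & (v < Hs)
        out = np.zeros((Ht, Wt, src_feat.shape[2]))
        out[valid] = src_feat[v[valid], u[valid]]
        return out, valid
    ```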