Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset
Scene motion, multiple reflections, and sensor noise introduce artifacts in
the depth reconstruction performed by time-of-flight cameras. We propose a
two-stage, deep-learning approach to address all of these sources of artifacts
simultaneously. We also introduce FLAT, a synthetic dataset of 2000 ToF
measurements that captures all of these nonidealities and allows simulating
different camera hardware. Using the Kinect 2 camera as a baseline, we show
improved reconstruction errors over state-of-the-art methods, on both simulated
and real data.
Comment: ECCV 201
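The phase-based depth recovery that these artifacts corrupt can be sketched for a continuous-wave ToF camera. The four-bucket decoding below is a generic textbook scheme, and the modulation frequency `F_MOD` is an assumed illustrative value, not a Kinect 2 specification:

```python
import numpy as np

C = 3e8          # speed of light (m/s)
F_MOD = 50e6     # assumed modulation frequency (illustrative, not from the paper)

def tof_depth(a0, a90, a180, a270):
    """Recover depth from four phase-shifted correlation measurements.
    In the ideal, artifact-free setting the phase of the returned
    signal encodes the round-trip distance; motion, multipath, and
    noise break this assumption."""
    phase = np.arctan2(a270 - a90, a0 - a180) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)
```

The unambiguous range of this decoding is c/(2·F_MOD), i.e. 3 m at 50 MHz; depths beyond it wrap around.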
Femto-Photography: Capturing Light in Motion
We present a technique to capture ultrafast movies of light in motion and synthesize physically valid visualizations. The effective exposure time for each frame is under two picoseconds (ps). Capturing a 2D video with this time resolution is highly challenging, given the extremely low SNR associated with a picosecond exposure time, as well as the absence of 2D cameras that can provide such a shutter speed. We re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak tube, and we introduce reconstruction methods to visualize the propagation of light pulses through macroscopic scenes. Capturing two-dimensional movies with picosecond resolution, we observe many interesting and complex light transport effects, including multibounce scattering, delayed mirror reflections, and subsurface scattering. We notice that the time instances recorded by the camera (“camera time”) differ from the times at which the events happen locally at each scene location (“world time”). We introduce a notion of a time warp between the two space-time coordinate systems, and rewarp the space-time movie for a different perspective.
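The camera-time/world-time warp can be illustrated with a minimal sketch: assuming direct propagation in vacuum, an event at a scene point reaches the sensor delayed by the point-to-camera travel time, so rewarping subtracts that delay. The function name and the single-bounce simplification are ours, not the paper's:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def rewarp_to_world_time(t_camera, scene_points, camera_pos):
    """Convert 'camera time' (arrival time at the sensor) to 'world time'
    (when the event happened at each scene point) by subtracting the
    per-point scene-to-camera propagation delay. A simplified sketch;
    the paper rewarps full space-time movies."""
    delay = np.linalg.norm(scene_points - camera_pos, axis=-1) / C
    return t_camera - delay
```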
A Biophysically-Based Model of the Optical Properties of Skin Aging
This paper presents a time-varying, multi-layered, biophysically-based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering (such as spectral absorption and scattering coefficients), or, more intuitively, higher-level parameters such as age, gender, skin care, or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it allows us to automatically simulate different types of skin at different stages of aging, avoiding the need for artistic input or costly capture processes.
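The chromophore-driven part of such a model can be sketched as a weighted sum of per-chromophore extinction curves evaluated at a wavelength. All names and values below are illustrative placeholders, not the paper's tabulated data:

```python
def skin_layer_absorption(chromophores, extinction, wavelength):
    """Spectral absorption coefficient of one skin layer as a weighted
    sum of its chromophore contributions (melanin, hemoglobin, water,
    ...): mu_a(lambda) = sum_i conc_i * eps_i(lambda).
    Illustrative sketch only: concentrations and extinction curves are
    placeholders, not measured values."""
    return sum(conc * extinction[name](wavelength)
               for name, conc in chromophores.items())
```

Aging would then be modeled by making the concentrations (and layer thicknesses) functions of age rather than constants.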
Differentiable Transient Rendering
Recent differentiable rendering techniques have become key tools for tackling many inverse problems in graphics and vision. Existing models, however, assume steady-state light transport, i.e., an infinite speed of light. While this is a safe assumption for many applications, recent advances in ultrafast imaging leverage the wealth of information that can be extracted from the exact time of flight of light. In this context, physically-based transient rendering allows us to efficiently simulate and analyze light transport under the consideration that the speed of light is indeed finite. In this paper, we introduce a novel differentiable transient rendering framework to help bring the potential of differentiable approaches into the transient regime. To differentiate the transient path integral, we need to take into account that scattering events at path vertices are no longer independent; instead, tracking the time of flight of light requires treating such scattering events jointly as a multidimensional, evolving manifold. We thus turn to the generalized transport theorem and introduce a novel correlated importance term, which links the time-integrated contribution of a path to its light throughput and allows us to handle discontinuities in the light and sensor functions. Finally, we present results in several challenging scenarios where the time of flight of light plays an important role, such as optimizing indices of refraction, non-line-of-sight tracking with nonplanar relay walls, and non-line-of-sight tracking around two corners.
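The basic steady-to-transient step, before any differentiation, can be sketched as binning each path's contribution by its total time of flight. This minimal version assumes vacuum propagation and a unit index of refraction:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def transient_histogram(paths, throughputs, n_bins, t_max):
    """Accumulate path contributions into a time-resolved histogram.
    Each path is a sequence of 3D vertices; its arrival time is the
    total optical path length divided by the speed of light.
    Minimal sketch (vacuum, unit index of refraction)."""
    hist = np.zeros(n_bins)
    for verts, f in zip(paths, throughputs):
        segs = np.diff(np.asarray(verts, dtype=float), axis=0)
        t = np.linalg.norm(segs, axis=1).sum() / C   # time of flight
        b = int(t / t_max * n_bins)                  # temporal bin
        if 0 <= b < n_bins:
            hist[b] += f
    return hist
```

Differentiating this with respect to scene parameters is what makes the vertices' times correlated: moving one vertex changes the arrival time, and hence the bin, of the whole path.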
Measurement method of optical properties of ex vivo biological tissues of rats in the near-infrared range
An optical fiber-based supercontinuum setup and a custom-made spectrophotometer that can measure spectra from 1100 to 2300 nm are used to characterize the attenuation properties of different ex vivo rat tissues. Our method is able to differentiate between the scattering and absorption coefficients of biological tissues. Theoretical assumptions combined with experimental measurements demonstrate that, in this infrared range, tissue attenuation and absorption can be accurately measured, and scattering can be described as the difference between the two magnitudes. Attenuation, absorption, and scattering spectral coefficients of heart, brain, spleen, retina, and kidney are obtained by applying these theoretical and experimental methods. Light through these tissues is affected by strong scattering, resulting in multiple absorption events, so longer wavelengths should be used to obtain lower attenuation values. The absorption coefficient shows similar behavior across the samples under study, with two main absorption zones due to the water absorption bands at 1450 and 1950 nm, and with different absolute values depending on the constituents of each tissue. The scattering coefficient can also be determined, showing slight differences between retina and brain samples, and among heart, spleen, and kidney tissues.
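The key decomposition here, scattering as the difference between measured attenuation and absorption, is a one-liner per wavelength; the helper below is an illustrative sketch:

```python
def scattering_from_attenuation(mu_t, mu_a):
    """Per-wavelength scattering coefficient from measured attenuation
    and absorption spectra, using mu_t = mu_a + mu_s, so
    mu_s = mu_t - mu_a. Inputs are spectra sampled at the same
    wavelengths (illustrative sketch)."""
    return [t - a for t, a in zip(mu_t, mu_a)]
```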
DeepToF: Off-the-shelf real-time correction of multipath interference in time-of-flight imaging
Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct errors in depth caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth from synthetic scenes, generated by using a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured, incorrect depth as input.
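The spatially-varying convolution view of MPI can be illustrated with a toy 1D forward model; the fixed kernel below stands in for the spatially-varying, learned one that the network effectively inverts:

```python
import numpy as np

def mpi_corrupted_depth(depth, kernel):
    """Toy forward model of multipath interference: the captured depth
    is the true depth blurred by a convolution kernel (here fixed and
    1D for illustration; in the paper the kernel is spatially-varying
    and the inverse is learned by a CNN)."""
    pad = len(kernel) // 2
    padded = np.pad(depth, pad, mode='edge')
    return np.array([np.dot(padded[i:i + len(kernel)], kernel)
                     for i in range(len(depth))])
```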
A 4D Light-Field Dataset and CNN Architectures for Material Recognition
We introduce a new light-field dataset of materials, and take advantage of
the recent success of deep learning to perform material recognition on the 4D
light-field. Our dataset contains 12 material categories, each with 100 images
taken with a Lytro Illum, from which we extract about 30,000 patches in total.
To the best of our knowledge, this is the first mid-size dataset for
light-field images. Our main goal is to investigate whether the additional
information in a light-field (such as multiple sub-aperture views and
view-dependent reflectance effects) can aid material recognition. Since
recognition networks have not been trained on 4D images before, we propose and
compare several novel CNN architectures to train on light-field images. In our
experiments, the best performing CNN architecture achieves a 7% boost compared
with 2D image classification (70% to 77%). These results constitute important
baselines that can spur further research in the use of CNNs for light-field
applications. Upon publication, our dataset also enables other novel
applications of light-fields, including object detection, image segmentation
and view interpolation.
Comment: European Conference on Computer Vision (ECCV) 201
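One simple way to present a 4D light field to a standard 2D CNN is to stack the angular sub-aperture views along the channel axis, which reduces to a transpose and reshape. This is an illustrative input encoding, not necessarily the architecture the paper found to perform best:

```python
import numpy as np

def lightfield_to_cnn_input(lf):
    """Reshape a 4D light field L(u, v, x, y, c) -- angular coordinates
    (u, v), spatial coordinates (x, y), color channels c -- into a 2D
    image of shape (x, y, u*v*c) with all sub-aperture views stacked
    along the channel axis, ready for a standard 2D CNN."""
    u, v, x, y, c = lf.shape
    return lf.transpose(2, 3, 0, 1, 4).reshape(x, y, u * v * c)
```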
Progressive Transient Photon Beams
In this work we introduce a novel algorithm for transient rendering in
participating media. Our method is consistent, robust, and able to generate
animations of time-resolved light transport featuring complex caustic light
paths in media. We base our method on the observation that the spatial
continuity of photon beams provides increased coverage of the temporal domain,
and generalize photon beams to the transient state. We extend steady-state
beam radiance estimates to include the temporal domain. Then, we develop a
progressive version of spatio-temporal density estimation that converges to
the correct solution with finite memory requirements by iteratively averaging
several realizations of independent renders with a progressively reduced kernel
bandwidth. We derive the optimal convergence rates accounting for space and
time kernels, and demonstrate our method against previous consistent transient
rendering methods for participating media.
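The progressive density-estimation idea, averaging independent renders while shrinking the kernel bandwidth, can be sketched in 1D. The radius-reduction ratio with parameter `alpha` follows the standard progressive photon mapping scheme, and this sketch omits the paper's joint space-time kernels:

```python
def progressive_density(sample_fn, query, n_iters, r0, alpha=0.7):
    """Progressive kernel density estimate at `query`: average
    independent box-kernel estimates while shrinking the kernel
    bandwidth each iteration (PPM-style radius reduction), trading a
    vanishing bias against bounded memory. 1D sketch only; the paper
    derives optimal joint space-time convergence rates."""
    r, acc = r0, 0.0
    for i in range(1, n_iters + 1):
        xs = sample_fn()                       # fresh batch of samples
        inside = sum(1 for x in xs if abs(x - query) <= r)
        acc += inside / (len(xs) * 2 * r)      # box-kernel estimate
        r *= ((i + alpha) / (i + 1)) ** 0.5    # reduce bandwidth
    return acc / n_iters
```

Because each iteration uses an independent render and the bandwidth shrinks slowly (controlled by `alpha`), both the variance and the bias of the averaged estimate vanish in the limit, which is what makes the estimator consistent.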