4D Temporally Coherent Light-field Video
Light-field video has recently been used in virtual and augmented reality
applications to increase realism and immersion. However, existing light-field
methods are generally limited to static scenes due to the requirement to
acquire a dense scene representation. The large amount of data and the absence
of methods to infer temporal coherence pose major challenges in storage,
compression and editing compared to conventional video. In this paper, we
propose the first method to extract a spatio-temporally coherent light-field
video representation. A novel method to obtain Epipolar Plane Images (EPIs)
from a sparse light-field camera array is proposed. EPIs are used to constrain
scene flow estimation to obtain 4D temporally coherent representations of
dynamic light-fields. Temporal coherence is achieved on a variety of
light-field datasets. Evaluation of the proposed light-field scene flow against
existing multi-view dense correspondence approaches demonstrates a significant
improvement in the accuracy of temporal coherence. Comment: Published in 3D Vision (3DV) 201
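As a rough illustration of what an Epipolar Plane Image is (a generic construction for densely sampled light fields, not the paper's method for sparse camera arrays), an EPI can be sliced out of a view stack by fixing one image row and stacking that row across the views along a baseline. The `extract_epi` helper and the toy array below are hypothetical:

```python
import numpy as np

def extract_epi(light_field, y):
    """Extract a horizontal Epipolar Plane Image (EPI).

    light_field: array of shape (U, H, W) holding U sub-aperture views
                 sampled along a horizontal camera baseline.
    y:           fixed image row.

    Returns a (U, W) slice; a scene point traces a line in this slice
    whose slope is inversely related to its depth, which is what makes
    EPIs useful as a constraint for correspondence estimation.
    """
    return light_field[:, y, :]

# Toy example: 8 views of a 16x16 scene (synthetic placeholder data).
lf = np.zeros((8, 16, 16))
epi = extract_epi(lf, y=8)
print(epi.shape)  # (8, 16)
```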
RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding
We present a new hierarchical compression scheme for encoding light field
images (LFI) that is suitable for interactive rendering. Our method (RLFC)
exploits redundancies in the light field images by constructing a tree
structure. The top level (root) of the tree captures the common high-level
details across the LFI, and other levels (children) of the tree capture
specific low-level details of the LFI. Our decompressing algorithm corresponds
to tree traversal operations and gathers the values stored at different levels
of the tree. Furthermore, we use bounded integer sequence encoding which
provides random access and fast hardware decoding for compressing the blocks of
children of the tree. We have evaluated our method for 4D two-plane
parameterized light fields. The compression rates vary from 0.08 - 2.5 bits per
pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR
quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI
are 1-3 microseconds per channel on an NVIDIA GTX-960, and we can render new
views with a resolution of 512x512 at 200 fps. Our overall scheme is simple to
implement and involves only bit manipulations and integer arithmetic
operations. Comment: Accepted for publication at the Symposium on Interactive 3D
Graphics and Games (I3D '19)
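As a hedged two-level sketch of the hierarchical idea (not the actual RLFC codec, which builds a deeper tree and compresses the child blocks with bounded integer sequence encoding), one can store a shared root plus per-view residuals, so that decoding a view is a root-to-leaf gather. The functions and random data below are illustrative only:

```python
import numpy as np

def encode_two_level(views):
    """Two-level hierarchical encoding sketch.

    views: (N, H, W) stack of light-field images.
    The 'root' is the per-pixel mean across views; each 'child' stores
    only its residual, which is small for highly redundant views and
    therefore cheap to store with a bounded integer code.
    """
    root = views.mean(axis=0)
    children = views - root          # per-view residuals
    return root, children

def decode_view(root, children, i):
    """Decoding gathers values along the root-to-leaf path."""
    return root + children[i]

views = np.random.rand(4, 8, 8)      # synthetic light-field stack
root, children = encode_two_level(views)
restored = decode_view(root, children, 2)
```

Because each view can be decoded independently of the others, this layout keeps the random-access property the abstract emphasizes.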
Motion compensated micro-CT reconstruction for in-situ analysis of dynamic processes
This work presents a framework that exploits the synergy between Digital Volume Correlation (DVC) and iterative CT reconstruction to enhance the quality of high-resolution dynamic X-ray CT (4D-μCT) and to obtain quantitative results from the acquired dataset in the form of 3D strain maps, which can be directly correlated to the material properties. Furthermore, we show that the developed framework is capable of strongly reducing motion artifacts even in a dataset containing a single 360-degree rotation.
Steered mixture-of-experts for light field images and video: representation and coding
Research in light field (LF) processing has increased heavily over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays arriving at a certain region from any angle. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application for 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art for low-to-mid-range bitrates with respect to subjective visual quality of 4-D LF images. In the case of 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4x in bitrate for the same quality. At least equally important is the fact that our method inherently has desirable functionality for LF rendering which is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
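A minimal sketch of the mixture-of-experts reconstruction idea, assuming isotropic Gaussian kernels with constant experts (SMoE proper uses full "steered" covariances and models light rays, and is fitted to real imagery); `smoe_reconstruct` and its toy 1-D signal are hypothetical:

```python
import numpy as np

def smoe_reconstruct(coords, centers, bandwidth, expert_values):
    """Evaluate a tiny mixture-of-experts model at query coordinates.

    coords:        (P, D) query coordinates (e.g. pixel positions).
    centers:       (K, D) kernel centers in the same space.
    bandwidth:     scalar kernel width (isotropic simplification).
    expert_values: (K,) constant output of each expert.

    Each output is a soft-gated weighted sum of expert outputs, so the
    model is a continuous function of the coordinates -- which is what
    gives SMoE intrinsic view interpolation and super-resolution.
    """
    d2 = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalized gating weights
    return w @ expert_values

# Toy 1-D "image": two kernels, a dark and a bright region.
coords = np.linspace(0, 1, 11)[:, None]
centers = np.array([[0.2], [0.8]])
vals = smoe_reconstruct(coords, centers, bandwidth=0.1,
                        expert_values=np.array([0.0, 1.0]))
```

Coding such a model then amounts to quantizing and entropy-coding the kernel parameters rather than pixel grids.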
Light Field Denoising via Anisotropic Parallax Analysis in a CNN Framework
Light field (LF) cameras provide perspective information of scenes by taking
directional measurements of the focusing light rays. The raw outputs are
usually dark with additive camera noise, which impedes subsequent processing
and applications. We propose a novel LF denoising framework based on
anisotropic parallax analysis (APA). Two convolutional neural networks are
jointly designed for the task: first, the structural parallax synthesis network
predicts the parallax details for the entire LF based on a set of anisotropic
parallax features. These novel features can efficiently capture the high
frequency perspective components of an LF from noisy observations. Second, the
view-dependent detail compensation network restores non-Lambertian variation to
each LF view by involving view-specific spatial energies. Extensive experiments
show that the proposed APA LF denoiser provides a much better denoising
performance than state-of-the-art methods, in terms of both visual quality and
preservation of parallax details.
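For contrast with the learned APA approach, a naive baseline that exploits the same cross-view redundancy is to simply average the sub-aperture views: this suppresses noise but blurs any pixel with non-zero parallax, which is precisely the failure mode parallax-aware methods are built to avoid. The sketch below, including `multiview_average_denoise` and the synthetic data, is illustrative only:

```python
import numpy as np

def multiview_average_denoise(noisy_views):
    """Baseline LF denoiser: average the sub-aperture views.

    Averaging N views reduces i.i.d. noise variance by a factor of N,
    but ghosts/blurs scene content that shifts between views.
    noisy_views: (N, H, W) -> (H, W)
    """
    return noisy_views.mean(axis=0)

rng = np.random.default_rng(0)
clean = np.ones((8, 8))                          # flat, zero-parallax scene
noisy = clean + rng.normal(0, 0.5, (16, 8, 8))   # 16 noisy views
denoised = multiview_average_denoise(noisy)
# residual noise std drops roughly by sqrt(16) = 4 on this flat scene
```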
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
Light field imaging extends traditional photography by capturing both
spatial and angular distribution of light, which enables new capabilities,
including post-capture refocusing, post-capture aperture control, and depth
estimation from a single shot. Micro-lens array (MLA) based light field cameras
offer a cost-effective approach to capturing light fields. A major drawback of
MLA-based light field cameras is their low spatial resolution, which is due to the fact
that a single image sensor is shared to capture both spatial and angular
information. In this paper, we present a learning-based light field enhancement
approach. Both the spatial and angular resolution of the captured light field are
enhanced using convolutional neural networks. The proposed method is tested
with real light field data captured with a Lytro light field camera, clearly
demonstrating spatial and angular resolution improvement.
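As a naive point of reference for angular resolution enhancement (the paper replaces this kind of blending with learned CNNs), intermediate views can be synthesized by linearly blending adjacent sub-aperture views along the angular axis; `angular_upsample` below is a hypothetical helper:

```python
import numpy as np

def angular_upsample(views):
    """Naive 2x angular upsampling along one angular dimension.

    Inserts the average of each pair of neighbouring sub-aperture
    views as a new intermediate view: (U, H, W) -> (2U-1, H, W).
    Simple blending ghosts regions with large parallax, which is why
    learned approaches are preferred.
    """
    out = [views[0]]
    for a, b in zip(views[:-1], views[1:]):
        out.append(0.5 * (a + b))  # synthesized intermediate view
        out.append(b)              # original view, kept unchanged
    return np.stack(out)

views = np.random.rand(3, 4, 4)    # synthetic sub-aperture views
dense = angular_upsample(views)
print(dense.shape)  # (5, 4, 4)
```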
Generation of Sound Bullets with a Nonlinear Acoustic Lens
Acoustic lenses are employed in a variety of applications, from biomedical
imaging and surgery, to defense systems, but their performance is limited by
their linear operational envelope and complexity. Here we show a dramatic
focusing effect and the generation of large amplitude, compact acoustic pulses
(sound bullets) in solid and fluid media, enabled by a tunable, highly
nonlinear acoustic lens. The lens consists of ordered arrays of granular
chains. The amplitude, size and location of the sound bullets can be controlled
by varying static pre-compression on the chains. We support our findings with
theory, numerical simulations, and corroborate the results experimentally with
photoelasticity measurements. Our nonlinear lens makes possible a qualitatively
new way of generating high-energy acoustic pulses, enabling, for example,
surgical control of acoustic energy. Comment: 19 pages, 7 figures, includes supplementary information