Image-guided ToF depth upsampling: a survey
Recently, there has been remarkable growth of interest in the development and applications of time-of-flight (ToF) depth cameras. Despite steady improvement in their characteristics, the practical applicability of ToF cameras is still limited by the low resolution and quality of depth measurements. This has motivated many researchers to combine ToF cameras with other sensors in order to enhance and upsample depth images. In this paper, we review the approaches that couple ToF depth images with high-resolution optical images. Other classes of upsampling methods are also briefly discussed. Finally, we provide an overview of the performance evaluation tests presented in the related studies.
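One of the classic image-guided approaches such surveys cover is joint bilateral upsampling, where a high-resolution color or intensity image steers the interpolation of the low-resolution depth map. A minimal NumPy sketch, assuming a grayscale guide, an integer scale factor, and hand-picked filter parameters (all names and defaults here are illustrative, not from any specific paper):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, radius=2,
                             sigma_spatial=1.0, sigma_range=0.1):
    """Upsample a low-res depth map using a high-res guide image.

    For each high-res pixel, nearby samples from the low-res depth grid
    are averaged with weights that combine spatial closeness (on the
    low-res grid) and similarity of guide-image values, so depth edges
    follow intensity edges (joint bilateral filtering).
    """
    H, W = guide_hr.shape[:2]
    depth_hr = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale  # position on the low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= ys < depth_lr.shape[0]
                            and 0 <= xs < depth_lr.shape[1]):
                        continue
                    # spatial weight on the low-res grid
                    ws = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2)
                                / (2 * sigma_spatial ** 2))
                    # range weight from the high-res guide
                    gy = min(int(ys * scale), H - 1)
                    gx = min(int(xs * scale), W - 1)
                    g_diff = guide_hr[y, x] - guide_hr[gy, gx]
                    wr = np.exp(-np.sum(g_diff ** 2) / (2 * sigma_range ** 2))
                    num += ws * wr * depth_lr[ys, xs]
                    den += ws * wr
            depth_hr[y, x] = num / max(den, 1e-8)
    return depth_hr
```

Production implementations vectorize this or run it on the GPU; the nested loops here are kept for readability.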
A Deep Primal-Dual Network for Guided Depth Super-Resolution
In this paper we present a novel method to increase the spatial resolution of
depth images. We combine a deep fully convolutional network with a non-local
variational method in a deep primal-dual network. The joint network computes a
noise-free, high-resolution estimate from a noisy, low-resolution input depth
map. Additionally, a high-resolution intensity image is used to guide the
reconstruction in the network. By unrolling the optimization steps of a
first-order primal-dual algorithm and formulating it as a network, we can train
our joint method end-to-end. This not only enables us to learn the weights of
the fully convolutional network, but also to optimize all parameters of the
variational method and its optimization procedure. The training of such a deep
network requires a large dataset for supervision. Therefore, we generate
high-quality depth maps and corresponding color images with a physically based
renderer. In an exhaustive evaluation we show that our method outperforms the
state-of-the-art on multiple benchmarks. Comment: BMVC 201
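The unrolling idea can be illustrated on a simpler variational problem. Below is a NumPy sketch of unrolled first-order primal-dual (Chambolle-Pock) iterations for TV-regularized denoising: each loop body corresponds to one "layer" of the unrolled network. The paper's method learns the step sizes and adds a fully convolutional guidance branch; the fixed TV-L2 energy and hand-set `tau`, `sigma`, `lam` here are simplifying assumptions.

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # negative adjoint of grad (discrete divergence)
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]
    d[:, 1:] = px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]
    d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def unrolled_primal_dual(f, lam=8.0, n_iters=30, tau=0.25, sigma=0.25):
    """Unrolled Chambolle-Pock iterations for min_u |grad u|_1 + lam/2 |u-f|^2.

    In a learned variant, tau, sigma and lam would be trainable per
    unrolled iteration instead of fixed constants.
    """
    u, u_bar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iters):
        # dual ascent step, then projection onto the unit ball
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / norm, py / norm
        # primal descent step (closed-form proximal map of the data term)
        u_prev = u
        u = (u + tau * (div(px, py) + lam * f)) / (1 + tau * lam)
        # over-relaxation
        u_bar = 2 * u - u_prev
    return u
```

The step sizes satisfy the usual convergence bound tau * sigma * L^2 <= 1 with L^2 <= 8 for this discrete gradient.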
Semantically Guided Depth Upsampling
We present a novel method for accurate and efficient upsampling of sparse depth data, guided by high-resolution imagery. Our approach goes beyond the use of intensity cues only and additionally exploits object boundary cues through structured edge detection and semantic scene labeling for guidance. Both cues are combined within a geodesic distance measure that allows for boundary-preserving depth interpolation while utilizing local context. We model the observed scene structure by locally planar elements and formulate the upsampling task as a global energy minimization problem. Our method determines globally consistent solutions and preserves fine details and sharp depth boundaries. In our experiments on several public datasets at different levels of application, we demonstrate superior performance of our approach over the state-of-the-art, even for very sparse measurements. Comment: German Conference on Pattern Recognition 2016 (Oral)
GI-1.0: A Fast and Scalable Two-level Radiance Caching Scheme for Real-time Global Illumination
Real-time global illumination is key to enabling more dynamic and physically realistic worlds in performance-critical applications such as games or any other applications with real-time constraints. Hardware-accelerated ray tracing in modern GPUs allows arbitrary intersection queries against the geometry, making it possible to evaluate indirect lighting entirely at runtime. However, only a small number of rays can be traced at each pixel to maintain high framerates at ever-increasing image resolutions. Existing solutions, such as probe-based techniques, approximate the irradiance signal at the cost of a few rays per frame but suffer from a lack of detail and slow response times to changes in lighting. On the other hand, reservoir-based resampling techniques capture much more detail but typically suffer from poorer performance and increased amounts of noise, making them impractical for the current generation of hardware and gaming consoles. To find a balance that achieves high lighting fidelity while maintaining a low runtime cost, we propose a solution that dynamically estimates global illumination without needing any content preprocessing, thus enabling easy integration into existing real-time rendering pipelines.
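The probe-based caching that the abstract contrasts against can be illustrated in miniature: irradiance is sampled at a sparse grid of probes and interpolated at shade points, so only a few rays per probe per frame are needed. This 2D bilinear sketch is purely illustrative and is not GI-1.0's actual two-level radiance cache; the function name, grid layout, and parameters are all assumptions.

```python
import numpy as np

def sample_probe_grid(probes, pos, spacing):
    """Bilinearly interpolate cached irradiance from a 2D probe grid.

    `probes[j, i]` holds the irradiance cached at grid probe (i, j);
    a shade point at world position `pos` blends the four surrounding
    probes.  Real schemes cache in 3D, hierarchically, and weight
    probes by visibility to avoid light leaking.
    """
    gx, gy = pos[0] / spacing, pos[1] / spacing
    x0 = int(np.clip(np.floor(gx), 0, probes.shape[1] - 2))
    y0 = int(np.clip(np.floor(gy), 0, probes.shape[0] - 2))
    fx, fy = gx - x0, gy - y0
    top = (1 - fx) * probes[y0, x0] + fx * probes[y0, x0 + 1]
    bot = (1 - fx) * probes[y0 + 1, x0] + fx * probes[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```

The smoothness of this interpolation is exactly why probe caches are cheap but lose high-frequency detail, as the abstract notes.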