Volumetric performance capture from minimal camera viewpoints
We present a convolutional autoencoder that enables high-fidelity volumetric reconstructions of human performance to be captured from multi-view video comprising only a small set of camera views. Our method yields end-to-end reconstruction error similar to that of a probabilistic visual hull computed using significantly more (double or more) viewpoints. We use a deep prior implicitly learned by the autoencoder, trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. This opens up the possibility of high-end volumetric performance capture in on-set and prosumer scenarios where time or cost prohibits a high witness-camera count.
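As a rough illustration of the idea (a sketch, not the authors' implementation), the snippet below trains a small 3D convolutional autoencoder to map a coarse occupancy volume built from an ablated view set toward the volume built from the full camera rig; the PyTorch layer sizes and the random stand-in tensors are assumptions.

```python
# Sketch: 3D convolutional autoencoder as a learned prior for few-view
# volumetric capture. Shapes and stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

class VolumetricAutoencoder(nn.Module):
    """Maps a coarse occupancy volume (few views) toward the many-view one."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = VolumetricAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on synthetic stand-in volumes (batch of two 64^3 grids):
# the input hull comes from an ablated view set, the target from all views.
few_hull = torch.rand(2, 1, 64, 64, 64)
full_hull = torch.rand(2, 1, 64, 64, 64)
opt.zero_grad()
loss = nn.functional.binary_cross_entropy(model(few_hull), full_hull)
loss.backward()
opt.step()
```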
Optimized Camera Handover Scheme in Free Viewpoint Video Streaming
Free-viewpoint video (FVV) is a promising approach that allows users to control their viewpoint and generate virtual views from any desired perspective. The individual user viewpoints are synthesized from two or more camera streams and their corresponding depth sequences. During continuous viewpoint changes, the camera inputs of the view-synthesis process must be switched seamlessly in order to avoid starvation of the viewpoint synthesizer. Starvation occurs when the desired user viewpoint cannot be synthesized from the currently streamed camera views, so FVV playout is interrupted. In this paper we propose three camera handover schemes (TCC, MA, SA) based on viewpoint prediction, in order to minimize the probability of playout stalls and to find a tradeoff between image quality and camera handover frequency. Our simulation results show that the introduced camera-switching methods can reduce the handover frequency by more than 40%, so viewpoint-synthesis starvation and playout interruptions can be minimized. By providing seamless viewpoint changes, the quality of experience can be significantly improved, making the new FVV service more attractive in the future.
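A hedged sketch of the prediction-driven handover idea follows, loosely in the spirit of a moving-average (MA) predictor; the 1-D viewpoint model, the thresholds, and the class interface are illustrative assumptions, not the paper's algorithms.

```python
# Sketch: predict the user viewpoint a few steps ahead from a moving-average
# velocity estimate, and hand over camera streams before the synthesizer
# starves. All parameters below are assumptions for exposition.
from collections import deque

class HandoverController:
    def __init__(self, window=5, lead=3, spacing=1.0):
        self.history = deque(maxlen=window)  # recent 1-D viewpoint positions
        self.lead = lead          # prediction horizon in update steps
        self.spacing = spacing    # angular spacing between adjacent cameras
        self.active_pair = (0, 1) # indices of currently streamed cameras

    def update(self, viewpoint):
        self.history.append(viewpoint)
        if len(self.history) < 2:
            return self.active_pair
        # Moving-average velocity over the history window.
        pts = list(self.history)
        velocity = sum(b - a for a, b in zip(pts, pts[1:])) / (len(pts) - 1)
        predicted = viewpoint + self.lead * velocity
        # Hand over when the predicted viewpoint leaves the active pair's
        # span, so the new camera stream arrives before playout stalls.
        left, right = self.active_pair
        if predicted > right * self.spacing:
            self.active_pair = (left + 1, right + 1)
        elif predicted < left * self.spacing:
            self.active_pair = (left - 1, right - 1)
        return self.active_pair
```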
Advanced Free Viewpoint Video Streaming Techniques
Free-viewpoint video is a new type of interactive multimedia service that allows users to control their viewpoint and generate new views of a dynamic scene from any perspective. The uniquely generated and displayed views are composed from two or more high-bitrate camera streams that must be delivered to the users according to their continuously changing perspective. Due to the significant network and computational resource requirements, we propose scalable viewpoint generation and delivery schemes based on multicast forwarding and a distributed approach. Our aim is to find the optimal deployment locations of the distributed viewpoint-synthesis processes in the network topology by allowing network nodes to act as proxy servers with caching and viewpoint-synthesis functionality. Moreover, a predictive multicast group management scheme is introduced to provide all camera views that may be requested in the near future, preventing the viewpoint synthesizer from being left without camera streams. The obtained results show that a traffic reduction of up to 42% can be realized using distributed viewpoint synthesis, and that the probability of viewpoint-synthesis starvation can also be significantly reduced in future free-viewpoint video services.
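The predictive multicast group management idea can be sketched as follows; the camera-group indexing, the 1-D viewpoint model, and the safety margin are assumptions for exposition only, not the paper's scheme.

```python
# Sketch: join the multicast groups of cameras the predicted viewpoint may
# need soon, leave the rest, so synthesis never runs out of camera streams.
def groups_to_join(predicted_viewpoint, spacing=1.0, margin=1):
    """Camera indices whose streams could be requested near the predicted
    viewpoint: the synthesis pair plus a safety margin on each side."""
    left = int(predicted_viewpoint // spacing)
    return set(range(left - margin, left + 2 + margin))

def update_memberships(current_groups, predicted_viewpoint):
    wanted = groups_to_join(predicted_viewpoint)
    return wanted - current_groups, current_groups - wanted  # (join, leave)

# Example: viewpoint drifting right; prefetch cameras 5-6, drop camera 2.
joins, leaves = update_memberships({2, 3, 4}, predicted_viewpoint=4.6)
print("join:", sorted(joins), "leave:", sorted(leaves))
```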
Temporally Coherent General Dynamic Scene Reconstruction
Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. The contributions of this work are: an automatic method for initial coarse reconstruction to initialize joint estimation; sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general, robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing a shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete, temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction, and its application to free-viewpoint rendering and virtual reality.
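As a loose illustration of the sparse-to-dense temporal propagation step (with OpenCV's Farneback flow standing in for the paper's correspondence method, which the abstract does not detail), a previous frame's segmentation can be warped forward to initialize joint estimation at the next frame:

```python
# Sketch: propagate a label map from frame t to frame t+1 via dense optical
# flow. Farneback flow is an illustrative stand-in, not the paper's method.
import numpy as np
import cv2

def propagate_labels(prev_gray, next_gray, prev_labels):
    """prev_gray, next_gray: HxW uint8 frames; prev_labels: HxW uint8 labels.
    Returns the labels warped into the next frame's coordinates."""
    # Flow from the next frame back to the previous one, so each pixel in
    # the new frame knows where to fetch its label from.
    flow = cv2.calcOpticalFlowFarneback(
        next_gray, prev_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = next_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_labels, map_x, map_y, cv2.INTER_NEAREST)
```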
Intrinsic Textures for Relightable Free-Viewpoint Video
This paper presents an approach to estimate the intrinsic texture properties (albedo, shading, normal) of scenes from multi-view acquisition under unknown illumination conditions. We introduce the concept of intrinsic textures: pixel-resolution surface textures representing the intrinsic appearance parameters of a scene. Unlike previous video relighting methods, the approach does not assume regions of uniform albedo, which makes it applicable to richly textured scenes. We show that intrinsic image methods can be used to refine an initial, low-frequency shading estimate, based on a global lighting reconstruction from an original texture and coarse scene geometry, in order to resolve the inherent global ambiguity in shading. The method is applied to relighting of free-viewpoint renderings from multi-view video capture, demonstrating relighting with reproduction of fine surface detail. Quantitative evaluation on synthetic models with textured appearance shows accurate estimation of intrinsic surface reflectance properties.
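A minimal numerical sketch of the intrinsic decomposition I = A * S: divide the observed texture by a low-frequency shading estimate to recover albedo. The Gaussian blur below merely stands in for the paper's global lighting reconstruction and is purely illustrative.

```python
# Sketch: split a texture into albedo and shading, assuming I = A * S and a
# smooth (low-frequency) shading field. Not the authors' refinement method.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(texture, sigma=15.0, eps=1e-4):
    """texture: HxWx3 float image in [0, 1]. Returns (albedo, shading)."""
    luminance = texture.mean(axis=2)
    # Coarse shading estimate: heavy blur keeps only low-frequency lighting.
    shading = gaussian_filter(luminance, sigma) + eps
    albedo = np.clip(texture / shading[..., None], 0.0, 1.0)
    return albedo, shading
```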