Scalable virtual viewpoint image synthesis for multiple camera environments
One of the main aims of emerging audio-visual (AV) applications is to provide interactive navigation within a captured event or scene. This paper presents a view synthesis algorithm that provides a scalable and flexible approach to virtual viewpoint synthesis in multiple camera environments. The multi-view synthesis (MVS) process consists of four different phases that are described in detail: surface identification, surface selection, surface boundary blending and surface reconstruction. MVS view synthesis identifies and selects only the best quality surface areas from the set of available reference images, thereby reducing perceptual errors in virtual view reconstruction. The approach is camera setup independent and scalable as virtual views can be created given 1 to N of the available video inputs. Thus, MVS provides interactive AV applications with a means to handle scenarios where camera inputs increase or decrease over time.
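As a rough illustration of the surface-selection idea in this abstract, the sketch below composites a virtual view from N reference views that have already been warped to the virtual viewpoint, keeping the best-scoring contribution per pixel. The warped inputs and quality scores are synthetic stand-ins, and the real MVS phases (identification, selection, boundary blending, reconstruction) are considerably richer than this.

import numpy as np

def select_best_surfaces(warped, scores):
    """warped: (N, H, W, 3) candidate pixels already warped to the
    virtual viewpoint; scores: (N, H, W) per-pixel quality estimates."""
    best = np.argmax(scores, axis=0)        # index of winning view per pixel
    rows, cols = np.mgrid[0:best.shape[0], 0:best.shape[1]]
    return warped[best, rows, cols]         # (H, W, 3) composite image

rng = np.random.default_rng(0)
warped = rng.random((3, 4, 4, 3))   # N = 3 toy reference views
scores = rng.random((3, 4, 4))      # stand-in quality per view and pixel
print(select_best_surfaces(warped, scores).shape)   # -> (4, 4, 3)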
Multiple image view synthesis for free viewpoint video applications
Interactive audio-visual (AV) applications such as free viewpoint video (FVV) aim to enable unrestricted spatio-temporal navigation within multiple camera environments. Current virtual viewpoint synthesis solutions for FVV are either purely image-based, implying large information redundancy, or involve reconstructing complex 3D models of the scene. In this paper, we present a new multiple image view synthesis algorithm that only requires camera parameters and disparity maps. The multi-view synthesis (MVS) approach can be used in any multi-camera environment and is scalable as virtual views can be created given 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best quality surface areas from available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are presented and verified using both objective (PSNR) and subjective comparisons.
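The objective comparison mentioned above relies on PSNR; for reference, the standard definition for 8-bit images is straightforward to compute:

import numpy as np

def psnr(reference, synthesized):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)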
Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing
Free-viewpoint video conferencing allows a participant to observe the remote
3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint
image is commonly synthesized using two pairs of transmitted texture and depth
maps from two neighboring captured viewpoints via depth-image-based rendering
(DIBR). To maintain high quality of synthesized images, it is imperative to
contain the adverse effects of network packet losses that may arise during
texture and depth video transmission. Towards this end, we develop an
integrated approach that exploits the representation redundancy inherent in the
multiple streamed videos: a voxel in the 3D scene visible to two captured views
is sampled and coded twice in the two views. In particular, at the receiver we
first develop an error concealment strategy that adaptively blends
corresponding pixels in the two captured views during DIBR, so that pixels from
the more reliable transmitted view are weighted more heavily. We then couple it
with a sender-side optimization of reference picture selection (RPS) during
real-time video coding, so that blocks containing samples of voxels that are
visible in both views are more error-resiliently coded in one view only, given
that adaptive blending will erase errors in the other view. Further, synthesized
view distortion sensitivities to texture versus depth errors are analyzed, so
that relative importance of texture and depth code blocks can be computed for
system-wide RPS optimization. Experimental results show that the proposed
scheme can outperform the use of a traditional feedback channel by up to 0.82
dB on average at 8% packet loss rate, and by as much as 3 dB for particular
frames.
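A minimal sketch of the receiver-side adaptive blending step, assuming per-pixel reliability maps for the two views (e.g., derived from tracking which packets were lost); the paper's actual weighting rule may differ from this simple normalized average.

import numpy as np

def adaptive_blend(pix_a, pix_b, rel_a, rel_b):
    """Blend corresponding DIBR-warped pixels from two captured views.

    pix_a, pix_b: (H, W, 3) pixels warped to the virtual viewpoint.
    rel_a, rel_b: (H, W) per-pixel reliability weights in [0, 1].
    """
    w_a = rel_a[..., None]                  # broadcast over colour channels
    w_b = rel_b[..., None]
    total = np.clip(w_a + w_b, 1e-6, None)  # guard against divide-by-zero
    return (w_a * pix_a + w_b * pix_b) / total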
Image Completion for View Synthesis Using Markov Random Fields and Efficient Belief Propagation
View synthesis is a process for generating novel views from a scene which has
been recorded with a 3-D camera setup. It has important applications in 3-D
post-production and 2-D to 3-D conversion. However, a central problem in the
generation of novel views lies in the handling of disocclusions. Background
content, which was occluded in the original view, may become unveiled in the
synthesized view. This leads to missing information in the generated view which
has to be filled in a visually plausible manner. We present an inpainting
algorithm for disocclusion filling in synthesized views based on Markov random
fields and efficient belief propagation. We compare the result to two
state-of-the-art algorithms and demonstrate a significant improvement in image
quality.
Comment: Published version: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=673843
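The inference machinery behind this approach can be illustrated on a 1-D chain, where min-sum belief propagation is exact (on the paper's 2-D grid MRF the analogous message updates are iterated loopily). The cost terms below are synthetic placeholders, not the authors' energy function.

import numpy as np

def chain_min_sum(data_cost, smooth_cost):
    """data_cost: (T, L) unary costs; smooth_cost: (L, L) pairwise costs.
    Returns the cost-minimising label sequence of length T."""
    T, L = data_cost.shape
    msg = np.zeros((T, L))                  # forward min-sum messages
    back = np.zeros((T, L), dtype=int)      # argmin pointers for backtracking
    for t in range(1, T):
        # cost of arriving at label l having assigned label k at t-1
        cand = (msg[t - 1] + data_cost[t - 1])[:, None] + smooth_cost
        msg[t] = cand.min(axis=0)
        back[t] = cand.argmin(axis=0)
    labels = np.empty(T, dtype=int)
    labels[-1] = int(np.argmin(msg[-1] + data_cost[-1]))
    for t in range(T - 1, 0, -1):           # trace the optimal path back
        labels[t - 1] = back[t, labels[t]]
    return labels

rng = np.random.default_rng(1)
data = rng.random((10, 4))                  # 10 pixels, 4 candidate labels
smooth = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)
print(chain_min_sum(data, smooth))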
Optimization of Occlusion-Inducing Depth Pixels in 3-D Video Coding
The optimization of occlusion-inducing depth pixels in depth map coding has
received little attention in the literature, since their associated texture
pixels are occluded in the synthesized view and their effect on the synthesized
view is considered negligible. However, occlusion-inducing depth pixels still
consume bits in transmission and induce geometry distortion that persists in
the synthesized view. In this paper, we
propose an efficient depth map coding scheme specifically for the
occlusion-inducing depth pixels by using allowable depth distortions. First,
we formulate the problem of minimizing the overall geometry distortion in the
occluded region subject to a bit-rate constraint, for which the depth distortion is
properly adjusted within the set of allowable depth distortions that introduce
the same disparity error as the initial depth distortion. Then, we propose a
dynamic programming solution to find the optimal depth distortion vector for
the occlusion. The proposed algorithm can improve the coding efficiency without
alteration of the occlusion order. Simulation results confirm the performance
improvement over existing algorithms.
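The key observation can be made concrete with a toy depth-to-disparity mapping: because the warping disparity is rounded to the pixel grid, a whole interval of depth levels renders identically, and that interval is the set of allowable depth distortions the optimization searches over. The MPEG-style mapping and camera constants below are illustrative assumptions, not the paper's setup.

import numpy as np

F_TIMES_B = 2000.0            # focal length x baseline (assumed)
Z_NEAR, Z_FAR = 40.0, 400.0   # scene depth range (assumed)

def disparity(level):
    """Integer-pel disparity for an 8-bit depth level."""
    z = 1.0 / (level / 255.0 * (1 / Z_NEAR - 1 / Z_FAR) + 1 / Z_FAR)
    return int(round(F_TIMES_B / z))

def allowable_levels(level):
    """All 8-bit depth levels producing the same rounded disparity,
    i.e. the same synthesized-view geometry."""
    d = disparity(level)
    return [v for v in range(256) if disparity(v) == d]

print(allowable_levels(128))   # interval of levels around 128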
An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display
We present a tele-immersive system that enables people to interact with each
other in a virtual world using body gestures in addition to verbal
communication. Beyond the obvious applications, including general online
conversations and gaming, we hypothesize that our proposed system would be
particularly beneficial to education by offering rich visual contents and
interactivity. One distinct feature is the integration of egocentric pose
recognition that allows participants to use their gestures to demonstrate and
manipulate virtual objects simultaneously. This functionality enables the
instructor to effectively and efficiently explain and illustrate complex
concepts or sophisticated problems in an intuitive manner. The highly
interactive and flexible environment can capture and sustain more student
attention than the traditional classroom setting and, thus, delivers a
compelling experience to the students. Our main focus here is to investigate
possible solutions for the system design and implementation and devise
strategies for fast, efficient computation suitable for visual data processing
and network transmission. We describe the technique and experiments in detail
and provide quantitative performance results, demonstrating that our system
runs comfortably and reliably in different application scenarios. Our
preliminary results are promising and demonstrate the potential for more
compelling directions in cyberlearning.
Comment: IEEE International Symposium on Multimedia 201
Rate-Distortion Analysis of Multiview Coding in a DIBR Framework
Depth image based rendering techniques for multiview applications have been
recently introduced for efficient view generation at arbitrary camera
positions. Encoding rate control has thus to consider both texture and depth
data. Due to the different structures of depth and texture images and their
different roles in the rendered views, distributing the available bit budget
between them, however, requires careful analysis. Information loss due to
texture coding affects the values of pixels in synthesized views, while errors
in depth information lead to shifts of objects or unexpected patterns at their
boundaries. In this paper, we address the problem of efficient bit allocation
between textures and depth data of multiview video sequences. We adopt a
rate-distortion framework based on a simplified model of depth and texture
images. Our model preserves the main features of depth and texture images.
Unlike most recent solutions, our method avoids rendering at encoding time for
distortion estimation, so the encoding complexity is not increased. In
addition, our model is independent of the underlying inpainting method used at
the decoder. Experiments confirm our theoretical results and the efficiency of
our rate allocation strategy.
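A toy instance of the allocation problem described here: split a total bit budget between texture and depth so that modelled synthesized-view distortion is minimized, with no rendering during encoding. The exponential rate-distortion models and their constants are illustrative assumptions, not the paper's fitted models.

import numpy as np

def best_split(total_rate, a_t=100.0, b_t=0.8, a_d=60.0, b_d=1.2, steps=1000):
    """Search discrete splits of total_rate between texture and depth;
    returns the (texture_rate, depth_rate) pair minimising modelled
    distortion D(R) = a * 2**(-b * R) summed over the two components."""
    r_t = np.linspace(0.0, total_rate, steps)
    r_d = total_rate - r_t
    dist = a_t * 2.0 ** (-b_t * r_t) + a_d * 2.0 ** (-b_d * r_d)
    i = int(np.argmin(dist))
    return r_t[i], r_d[i]

print(best_split(10.0))   # e.g. texture/depth split for a 10-unit budget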