
    In-Network View Synthesis for Interactive Multiview Video Systems

    To enable interactive multiview video systems with a minimum view-switching delay, multiple camera views are sent to the users, which are used as reference images to synthesize additional virtual views via depth-image-based rendering. In practice, however, bandwidth constraints may restrict the number of reference views sent to clients per time unit, which in turn may limit the quality of the synthesized viewpoints. We argue that reference view selection should ideally be performed close to the users, and we study the problem of in-network reference view synthesis such that navigation quality is maximized at the clients. We consider a distributed cloud network architecture where data stored in a main cloud is delivered to end users with the help of cloudlets, i.e., resource-rich proxies close to the users. To satisfy last-hop bandwidth constraints from the cloudlet to the users, a cloudlet re-samples viewpoints of the 3D scene into a discrete set of views (a combination of received camera views and synthesized virtual views) to be used as references for the synthesis of additional virtual views at the client. This in-network synthesis leads to better viewpoint sampling under a given bandwidth constraint than simple selection of camera views, but it may carry a distortion penalty in the cloudlet-synthesized reference views. We therefore cast a new reference view selection problem where the best subset of views is the one that minimizes the distortion over a view navigation window defined by the user, subject to transmission bandwidth constraints. We show that the view selection problem is NP-hard, and we propose an effective polynomial-time dynamic programming algorithm to solve the optimization problem. Simulation results confirm the performance gain offered by virtual view synthesis in the network.
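    The dynamic programming idea can be sketched on a toy version of the problem: viewpoints lie on a line, each selected reference has unit cost, and the distortion of a synthesized viewpoint is simply its distance to the nearest reference. The paper's actual rate and distortion models are richer, and the function and variable names below are hypothetical:

```python
import functools

def select_references(num_views, budget):
    """Toy reference view selection: choose up to `budget` reference
    views among positions 0..num_views-1 so that the summed distance
    of every viewpoint to its nearest reference is minimized."""

    def seg_cost(i, j):
        # distortion of viewpoints strictly between references i and j,
        # each served by whichever of the two is closer
        return sum(min(k - i, j - k) for k in range(i + 1, j))

    @functools.lru_cache(maxsize=None)
    def dp(last, remaining):
        # minimal distortion for viewpoints after `last`, given that
        # `last` is selected and `remaining` references are still available
        tail = sum(k - last for k in range(last + 1, num_views))
        if remaining == 0:
            return tail  # all remaining viewpoints served by `last`
        best = tail
        for nxt in range(last + 1, num_views):
            best = min(best, seg_cost(last, nxt) + dp(nxt, remaining - 1))
        return best

    # the first reference can be anywhere; viewpoints before it are served by it
    best = float("inf")
    for first in range(num_views):
        lead = sum(first - k for k in range(first))
        best = min(best, lead + dp(first, budget - 1))
    return best
```

    With five viewpoints and a budget of five, every view can be a reference and the distortion is zero; with a budget of one, the best single reference sits in the middle.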

    Optimized Data Representation for Interactive Multiview Navigation

    In contrast to traditional media streaming services, where a single media content is delivered to all users, interactive multiview navigation applications enable users to choose their own viewpoints and freely navigate in a 3-D scene. This interactivity brings new challenges beyond the classical rate-distortion trade-off, which considers only compression performance and viewing quality. On the one hand, interactivity requires sufficient viewpoints for rich navigation; on the other hand, it demands low bandwidth and delay costs for smooth view transitions. In this paper, we formally describe the novel trade-offs posed by navigation interactivity alongside the classical rate-distortion criterion. Based on this formulation, we seek the optimal design of the data representation by introducing novel rate and distortion models and practical solving algorithms. Experiments show that the proposed data representation method outperforms the baseline solution by providing lower resource consumption and higher visual quality in all navigation configurations, confirming its potential for practical interactive navigation systems.
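    As a minimal illustration of the trade-off, one could score each candidate data representation by an additive cost over rate, distortion, and view-switching delay. The additive objective and the weights below are illustrative assumptions, not the paper's models:

```python
def best_representation(candidates, lam=1.0, mu=1.0):
    """Pick the candidate representation with the lowest weighted cost.
    Each candidate is a (rate, distortion, switch_delay) tuple; lam and
    mu trade off viewing quality against navigation smoothness."""
    def cost(c):
        rate, distortion, switch_delay = c
        return rate + lam * distortion + mu * switch_delay
    return min(candidates, key=cost)
```

    Raising `mu` favors representations that switch quickly even at a higher rate, which is exactly the interactivity-versus-compression tension the abstract describes.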

    Advanced Free Viewpoint Video Streaming Techniques

    Free-viewpoint video is a new type of interactive multimedia service that allows users to control their viewpoint and generate new views of a dynamic scene from any perspective. The uniquely generated and displayed views are composed from two or more high-bitrate camera streams that must be delivered to the users according to their continuously changing perspective. Due to the significant network and computational resource requirements, we propose scalable viewpoint generation and delivery schemes based on multicast forwarding and a distributed approach. Our aim is to find the optimal deployment locations of the distributed viewpoint synthesis processes in the network topology by allowing network nodes to act as proxy servers with caching and viewpoint synthesis functionalities. Moreover, a predictive multicast group management scheme is introduced in order to provide all camera views that may be requested in the near future and to prevent the viewpoint synthesizer algorithm from being left without camera streams. The obtained results show that a traffic decrease of up to 42% can be achieved using distributed viewpoint synthesis, and that the probability of viewpoint synthesis starvation can also be significantly reduced in future free-viewpoint video services.
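    The predictive group management idea can be sketched as follows, assuming viewpoints move along a one-dimensional camera arc at roughly constant velocity (an illustrative model; the abstract does not specify the prediction scheme at this level of detail):

```python
def predictive_groups(current_view, velocity, horizon, cameras):
    """Return the multicast groups (camera view positions) to subscribe
    to so that view synthesis never starves within `horizon` steps,
    assuming the viewpoint keeps its current velocity."""
    future = current_view + velocity * horizon
    lo, hi = sorted((current_view, future))
    needed = set()
    # keep the two cameras bracketing each end of the predicted interval
    for point in (lo, hi):
        left = max((c for c in cameras if c <= point), default=min(cameras))
        right = min((c for c in cameras if c >= point), default=max(cameras))
        needed.update({left, right})
    # plus any camera inside the traversed interval
    needed.update(c for c in cameras if lo <= c <= hi)
    return sorted(needed)
```

    Subscribing to the bracketing cameras ahead of time is what keeps the synthesizer supplied with reference streams during a view transition.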

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints via depth-image-based rendering (DIBR). To maintain high quality of synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene that is visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels that are visible in both views are more error-resiliently coded in one view only, since adaptive blending will erase errors in the other view. Further, synthesized view distortion sensitivities to texture versus depth errors are analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
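    The receiver-side adaptive blending step can be illustrated with a simple inverse-variance weighting, where each warped pixel is weighted by a reliability estimate of its transmitted view. This is a hypothetical proxy for the paper's weighting model, which is more elaborate:

```python
def blend_pixel(p_left, p_right, err_var_left, err_var_right):
    """Blend corresponding pixels from two DIBR-warped views, weighting
    each by the inverse of its estimated error variance so that the
    more reliably transmitted view dominates the synthesized pixel."""
    w_left = 1.0 / err_var_left
    w_right = 1.0 / err_var_right
    return (w_left * p_left + w_right * p_right) / (w_left + w_right)
```

    With equal reliability the result is a plain average; as one view's error variance grows, the blend shifts toward the other view, which is why the sender can afford to protect a doubly-visible block in only one view.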

    Anchor View Allocation for Collaborative Free Viewpoint Video Streaming

    In free viewpoint video, a viewer can choose at will any camera angle, or a so-called "virtual view", to observe a dynamic 3-D scene, enhancing his/her depth perception. The virtual view is synthesized using texture and depth videos of two anchor camera views via depth-image-based rendering (DIBR). We consider, for the first time, collaborative live streaming of a free viewpoint video, where a group of users may interactively pull and cooperatively share streams of different anchor views. There is a cost to access the anchor views from the live source, a cost to "reconfigure" the peer network due to a change in selected anchors during view switching, and a distortion cost due to the distance of the virtual views to the received anchor views at users. We optimize the anchor views allocated to users so as to minimize the overall streaming cost, given by the access cost, reconfiguration cost, and view distortion cost. We first show that, if the reconfiguration cost due to view switching is negligible, the view allocation problem can be optimally and efficiently solved in polynomial time using dynamic programming. For the case of non-negligible reconfiguration cost, the problem becomes NP-hard. We thus present a locally optimal and centralized algorithm inspired by Lloyd's algorithm as used in non-uniform scalar quantization. We further propose a distributed algorithm with a convergence guarantee, where each peer group independently makes merge-and-split decisions under a well-defined fairness criterion. Simulation results show that our algorithms achieve low streaming cost due to their effective anchor view allocation.
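    The Lloyd-style local optimization can be sketched for a one-dimensional view axis, alternating between assigning users to their nearest anchor and moving each anchor to the centroid of its users. A toy squared-distance distortion stands in for the paper's combined access, reconfiguration, and distortion cost:

```python
def lloyd_anchor_allocation(user_views, num_anchors, iters=50):
    """Locally optimal anchor placement in the spirit of Lloyd's
    algorithm: repeat assignment and centroid-update steps until the
    anchor positions stop moving (or `iters` is exhausted)."""
    # initialize anchors evenly across the requested view range
    lo, hi = min(user_views), max(user_views)
    anchors = [lo + (hi - lo) * i / max(num_anchors - 1, 1)
               for i in range(num_anchors)]
    for _ in range(iters):
        # assignment step: each user joins its nearest anchor's group
        groups = [[] for _ in anchors]
        for v in user_views:
            nearest = min(range(len(anchors)),
                          key=lambda i: (v - anchors[i]) ** 2)
            groups[nearest].append(v)
        # update step: move each anchor to the centroid of its group
        new_anchors = [sum(g) / len(g) if g else a
                       for g, a in zip(groups, anchors)]
        if new_anchors == anchors:
            break  # converged to a local optimum
        anchors = new_anchors
    return anchors
```

    Like scalar quantizer design, each iteration can only lower the total distortion, so the procedure converges to a local (not necessarily global) optimum, which matches the NP-hardness of the full problem.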

    Optimized Camera Handover Scheme in Free Viewpoint Video Streaming

    Free-viewpoint video (FVV) is a promising approach that allows users to control their viewpoint and generate virtual views from any desired perspective. The individual user viewpoints are synthesized from two or more camera streams and their corresponding depth sequences. In the case of continuous viewpoint changes, the camera inputs of the view synthesis process must be changed seamlessly in order to avoid starvation of the viewpoint synthesizer algorithm. Starvation occurs when the desired user viewpoint cannot be synthesized from the currently streamed camera views, so that FVV playout is interrupted. In this paper we propose three camera handover schemes (TCC, MA, SA) based on viewpoint prediction, in order to minimize the probability of playout stalls and to find the trade-off between image quality and camera handover frequency. Our simulation results show that the introduced camera switching methods can reduce the handover frequency by more than 40%, so that viewpoint synthesis starvation and playout interruption can be minimized. By providing seamless viewpoint changes, the quality of experience can be significantly improved, making the new FVV service more attractive in the future.
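    A minimal handover trigger based on viewpoint prediction might extrapolate the user's trajectory and switch cameras before the synthesizer starves. The linear predictor and thresholds below are illustrative, not the paper's TCC/MA/SA schemes:

```python
def predict_handover(history, cam_left, cam_right, horizon=5, margin=0.5):
    """Predict the viewpoint `horizon` steps ahead by linear
    extrapolation of the last two samples; signal a handover when the
    prediction drifts within `margin` of the boundary of the currently
    streamed camera pair (cam_left, cam_right)."""
    if len(history) < 2:
        return None  # not enough samples to estimate a velocity
    velocity = history[-1] - history[-2]
    predicted = history[-1] + velocity * horizon
    if predicted <= cam_left + margin:
        return "switch_left"
    if predicted >= cam_right - margin:
        return "switch_right"
    return None
```

    A larger `horizon` switches earlier (fewer stalls, more handovers) while a smaller one switches later, which is the quality-versus-handover-frequency trade-off the abstract describes.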

    Crowdsourced multi-view live video streaming using cloud computing

    Advances in, and the commoditization of, media generation devices enable the capture and sharing of any special event by multiple attendees. We propose a novel system to collect the individual video streams (views) captured for the same event by multiple attendees and combine them into multi-view videos, where viewers can watch the event from various angles, taking crowdsourced media streaming to a new immersive level. The proposed system, called Cloud-based Multi-View Crowdsourced Streaming (CMVCS), delivers multiple views of an event to viewers at the best possible video representation based on each viewer's available bandwidth. The CMVCS is a complex system with many research challenges; in this paper, we focus on its resource allocation. The objective of the study is to maximize overall viewer satisfaction by allocating available resources to transcode views into an optimal set of representations, subject to computational and bandwidth constraints. We choose the video representation set that maximizes QoE using Mixed Integer Programming. Moreover, we propose a Fairness-Based Representation Selection (FBRS) heuristic algorithm to solve the resource allocation problem efficiently. We compare our results with optimal and Top-N strategies. The simulation results demonstrate that FBRS generates near-optimal results and outperforms the state-of-the-art Top-N policy, which is used by a large-scale system (Twitch). This work was supported by NPRP through the Qatar National Research Fund (a member of Qatar Foundation) under Grant 8-519-1-108.
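    The fairness-based heuristic can be sketched as a greedy procedure: give every view the cheapest representation first, then spend the remaining budget upgrading whichever view is currently worst served. This is an illustrative reading of the FBRS idea, not the authors' exact algorithm, and all names below are hypothetical:

```python
def fbrs(views, rep_costs, rep_utils, budget):
    """Greedy fairness-based representation selection. `rep_costs` and
    `rep_utils` list the cost and utility of each representation level,
    from lowest to highest; returns the chosen level per view."""
    # start each view at the lowest representation (level 0)
    level = {v: 0 for v in views}
    spent = len(views) * rep_costs[0]
    if spent > budget:
        raise ValueError("budget cannot cover the base representation")
    while True:
        # candidates that can still be upgraded one level
        upgradable = [v for v in views if level[v] + 1 < len(rep_costs)]
        if not upgradable:
            break
        # fairness: upgrade the view with the lowest current utility
        worst = min(upgradable, key=lambda v: rep_utils[level[v]])
        extra = rep_costs[level[worst] + 1] - rep_costs[level[worst]]
        if spent + extra > budget:
            break
        level[worst] += 1
        spent += extra
    return level
```

    Always lifting the worst-served view spreads quality evenly across views, in contrast with a Top-N policy that spends the whole budget on the most popular views.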