Dynamic rate allocation for view-switch prediction in interactive multi-view video
In Interactive Multi-View Video (IMVV), a scene is captured by a number of
cameras positioned in an array, and the resulting camera views are transmitted to users. A user can
interact with the transmitted video content by choosing viewpoints (views from different
cameras in the array), with the expectation of minimal transmission delay when
switching among views. View-switching delay is one of the primary concerns
addressed in this thesis, whose contribution is to minimize the transmission delay
of the new view-switch frame through a novel process of predicted-view selection
and compression that accounts for transmission efficiency. The work mainly considers real-time
IMVV streaming, where the view switch is modeled as a discrete Markov chain whose
transition probabilities are derived from a Zipf distribution, providing information
for view-switch prediction. To eliminate the Round-Trip Time (RTT) transmission
delay, Quantization Parameters (QP) are adaptively allocated to the remaining redundantly
transmitted frames to keep the view-switching time at a minimum, trading off
video quality during the RTT time span. Experimental results show that the proposed
method outperforms existing methods in PSNR and view-switching delay, yielding better viewing quality.
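The Zipf-driven Markov view-switch model described above can be sketched as follows. This is a minimal illustration, assuming that the switch probability from view i to view j is ranked by view distance |i - j|; the view count and skew exponent s are illustrative parameters, not the thesis's exact formulation.

```python
import numpy as np

def zipf_transition_matrix(n_views: int, s: float = 1.0) -> np.ndarray:
    """Transition matrix for a discrete Markov chain over camera views.

    The probability of switching from view i to view j decays with the
    view distance, Zipf-style: weight = 1 / (|i - j| + 1) ** s, then
    each row is normalized to sum to 1.
    """
    P = np.zeros((n_views, n_views))
    for i in range(n_views):
        ranks = np.abs(np.arange(n_views) - i) + 1   # distance-as-rank
        weights = 1.0 / ranks.astype(float) ** s
        P[i] = weights / weights.sum()
    return P

# Predict the most likely next view from the current view:
P = zipf_transition_matrix(5, s=1.2)
predicted_next = int(np.argmax(P[2]))
```

Under this model the most probable "switch" is staying on the current view, with probability mass falling off symmetrically toward distant views; a server can pre-encode the top-ranked candidates to hide the RTT.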
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
such as the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications, and with technological innovations concerning
media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as
community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications "on the move", such as virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, and edutainment. In this context, interaction with content,
combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P
networks, and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work carried out in a number of EC co-funded projects under Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
Anchor View Allocation for Collaborative Free Viewpoint Video Streaming
In free viewpoint video, a viewer can choose at will any camera angle or the so-called "virtual view" to observe a dynamic 3-D scene, enhancing his/her depth perception. The virtual view is synthesized using texture and depth videos of two anchor camera views via depth-image-based rendering (DIBR). We consider, for the first time, collaborative live streaming of a free viewpoint video, where a group of users may interactively pull and cooperatively share streams of different anchor views. There is a cost to access the anchor views from the live source, a cost to "reconfigure" the peer network due to a change in selected anchors during view switching, and a distortion cost due to the distance of the virtual views to the received anchor views at users. We optimize the anchor views allocated to users so as to minimize the overall streaming cost given by the access cost, reconfiguration cost, and view distortion cost. We first show that, if the reconfiguration cost due to view switching is negligible, the view allocation problem can be optimally and efficiently solved in polynomial time using dynamic programming. For the case of non-negligible reconfiguration cost, the problem becomes NP-hard. We thus present a locally optimal and centralized algorithm inspired by Lloyd's algorithm used in non-uniform scalar quantization. We further propose a distributed algorithm with convergence guarantee, where each peer group independently makes merge-and-split decisions with a well-defined fairness criterion. Simulation results show that our algorithms achieve low streaming cost due to their effective anchor view allocation.
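The polynomial-time case (negligible reconfiguration cost) can be sketched with a small dynamic program. This is a simplified 1-D model, not the paper's exact formulation: user viewpoints are scalar positions, each anchor incurs a fixed access cost, distortion is the absolute distance to the shared anchor, and the anchor for a contiguous user group is placed at the group's median.

```python
def allocate_anchors(views, access_cost):
    """DP sketch: partition sorted user view positions into contiguous
    groups, each served by one anchor at the group's median.

    Total cost = access_cost per anchor + sum of |view - anchor|
    distortions. Returns (min_cost, sorted anchor positions).
    """
    views = sorted(views)
    n = len(views)

    def group_cost(l, r):
        # Users l..r (inclusive) share one anchor; the median of a
        # sorted run minimizes the summed L1 distortion.
        anchor = views[(l + r) // 2]
        return access_cost + sum(abs(v - anchor) for v in views[l:r + 1]), anchor

    INF = float("inf")
    dp = [INF] * (n + 1)      # dp[i]: min cost serving the first i users
    dp[0] = 0.0
    choice = [None] * (n + 1)
    for i in range(1, n + 1):
        for l in range(i):    # last group covers users l .. i-1
            cost, anchor = group_cost(l, i - 1)
            if dp[l] + cost < dp[i]:
                dp[i] = dp[l] + cost
                choice[i] = (l, anchor)

    anchors, i = [], n        # backtrack the chosen anchors
    while i > 0:
        l, a = choice[i]
        anchors.append(a)
        i = l
    return dp[n], sorted(anchors)
```

For clustered users, e.g. `allocate_anchors([0, 1, 10, 11], access_cost=3)`, the DP trades the per-anchor access cost against distortion and serves each cluster with its own anchor rather than stretching one anchor across both.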
In-Network View Re-Sampling for Interactive Free Viewpoint Video Streaming
Interactive free viewpoint video offers the possibility for each user to independently choose the views of a 3D scene to be displayed at the decoder. The visual content is commonly represented by N texture and depth map pairs that capture different viewpoints. A server selects an appropriate subset of M ≤ N views for transmission, so that the user can freely navigate in the corresponding window of viewpoints without being affected by network delay. During navigation, a user can synthesize any intermediate virtual view image in the navigation window via depth-image-based rendering (DIBR) using two nearby camera views as references. When the available bandwidth is too small for the transmission of all camera views needed to synthesize views in the navigation window, we propose to synthesize intermediate virtual views as new references for transmission (a re-sampling of viewpoints for the 3D scene) so that the synthesized view distortion within the navigation window is minimised. We formulate a combinatorial optimization to find the best set of M virtual views to synthesize as new references, and show that the problem is NP-hard. We approximate the original problem with a new reference view equivalence model and derive in this case an optimal dynamic programming algorithm to determine the best set of M views to be transmitted to each user. Experimental results show that synthesizing virtual views as new references for client-side view synthesis can outperform simple selection from camera views by up to 0.73 dB in synthesized view quality.
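The benefit of re-sampling viewpoints can be illustrated with a toy selection problem. This sketch uses exhaustive search over candidate reference sets and approximates synthesized-view distortion by the distance to the nearest chosen reference; the paper instead derives an optimal dynamic programming algorithm under its equivalence model, so everything here is an illustrative assumption.

```python
from itertools import combinations

def best_m_references(candidates, targets, M):
    """Exhaustively pick M reference viewpoints from `candidates` that
    minimize the summed distance from every viewpoint in the navigation
    window (`targets`) to its nearest chosen reference.

    Distance-to-nearest-reference stands in for DIBR synthesis
    distortion in this toy model.
    """
    best_cost, best_refs = float("inf"), None
    for refs in combinations(candidates, M):
        cost = sum(min(abs(t - r) for r in refs) for t in targets)
        if cost < best_cost:
            best_cost, best_refs = cost, refs
    return best_cost, best_refs

window = list(range(6))                      # navigation window 0..5
# Camera views only at the window edges vs. allowing synthesized
# (virtual) references anywhere in the window:
cam_cost, _ = best_m_references([0, 5], window, 2)
virt_cost, virt_refs = best_m_references(window, window, 2)
```

In this example the two available camera views sit at the window edges, so camera-only selection is forced to leave the window centre far from any reference; allowing synthesized references at interior positions lowers the total distortion proxy, mirroring the paper's gain from viewpoint re-sampling.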