Scalable video transcoding for mobile communications
Mobile multimedia content has entered the market and its demand grows every day, driven by the increasing number of mobile devices and the possibility of watching it at any moment and in any place. This content is delivered over different networks and visualized on mobile terminals with heterogeneous characteristics. To ensure consistently high quality, it is desirable that the content can be adapted on the fly to the transmission constraints and the characteristics of the mobile devices. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for its transmission. If these video streams were compressed using scalable video coding schemes, they would be able to adapt to heterogeneous networks and a wide range of terminals; however, since the majority of multimedia content is compressed with H.264/AVC, it cannot benefit from that scalability. This paper proposes a technique to convert a non-scalable H.264/AVC bitstream into a scalable bitstream with temporal scalability, as part of a scalable video transcoder for mobile communications. The results show that when our technique is applied, complexity is reduced by 98% while coding efficiency is maintained.
Temporal video transcoding from H.264/AVC-to-SVC for digital TV broadcasting
Mobile digital TV environments demand flexible video compression, such as scalable video coding (SVC), because of varying bandwidths and devices. Since existing infrastructures rely heavily on H.264/AVC video compression, network providers could adapt the current H.264/AVC encoded video to SVC. This adaptation needs to be done efficiently to reduce processing power and operational cost. This paper proposes two techniques to convert a non-scalable H.264/AVC bitstream, in Baseline profile (P-picture based) and Main profile (B-picture based), into a scalable bitstream with temporal scalability, as part of a framework for low-complexity video adaptation for digital TV broadcasting. Our approaches accelerate interprediction, reducing the coding complexity of the mode decision and motion estimation tasks of the encoder stage by reusing information available after the H.264/AVC decoding stage. The results show that when our techniques are applied, complexity is reduced by 98% while coding efficiency is maintained.
An H.264/AVC to SVC temporal transcoder in Baseline profile: digest of technical papers
Scalable Video Coding (SVC) provides temporal, spatial and quality scalability through layers within the encoded bitstream. This feature allows the encoded bitstream to be adapted to different devices and heterogeneous networks. This paper proposes a technique to convert an H.264/AVC bitstream in Baseline profile into a scalable stream that provides temporal scalability. Applying the presented approach, a 65% reduction in coding complexity is achieved while coding efficiency is maintained.
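The temporal scalability targeted by these transcoders is commonly realized with a dyadic hierarchical prediction structure: every frame carries a temporal layer id, and a network node can repeatedly halve the frame rate by dropping the highest layers, without re-encoding. As a minimal illustrative sketch (an assumed dyadic layering, not the papers' actual implementation), the layer assignment can be computed from the picture order count:

```python
def temporal_id(poc: int, num_layers: int = 3) -> int:
    """Dyadic temporal-layer id for the frame at picture order count `poc`.

    Layer 0 alone gives the base frame rate; each additional layer the
    receiver keeps doubles it, so dropping all frames with id > t adapts
    the stream to slower devices or links.
    """
    for tid in range(num_layers):
        # Layer tid holds frames whose POC is a multiple of 2^(num_layers - 1 - tid)
        if poc % (1 << (num_layers - 1 - tid)) == 0:
            return tid
    return num_layers - 1

# One GOP of 8 frames with 3 temporal layers:
print([temporal_id(n) for n in range(8)])  # → [0, 2, 1, 2, 0, 2, 1, 2]
```

Keeping only layer 0 retains frames 0 and 4 (a quarter of the frame rate); adding layer 1 restores frames 2 and 6, and so on.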
Development and Performance Evaluation of a Connected Vehicle Application Development Platform (CVDeP)
Connected vehicle (CV) application developers need a development platform to build, test and debug real-world CV applications, such as safety, mobility, and environmental applications, in edge-centric cyber-physical systems. Our study objective is to develop and evaluate a scalable and secure CV application development platform (CVDeP) that enables application developers to build, test and debug CV applications in real time. CVDeP ensures that the functional requirements of the CV applications meet the corresponding requirements imposed by the specific applications. We evaluated the efficacy of CVDeP using two CV applications (one safety and one mobility application) and validated them through a field experiment at the Clemson University Connected Vehicle Testbed (CU-CVT). Analyses prove the efficacy of CVDeP, which satisfies the functional requirements (i.e., latency and throughput) of a CV application while maintaining the scalability and security of the platform and applications.
Reliable Video Streaming over mmWave with Multi Connectivity and Network Coding
The next generation of multimedia applications will require telecommunication networks to support a higher bitrate than today in order to deliver virtual reality and ultra-high-quality video content to users. Most of this video content will be accessed from mobile devices, prompting the provision of very high data rates by next-generation (5G) cellular networks. A possible enabler in this regard is communication at mmWave frequencies, given the vast amount of available spectrum that can be allocated to mobile users; however, the harsh propagation environment at such high frequencies makes it hard to provide a reliable service. This paper presents a reliable video streaming architecture for mmWave networks, based on multi-connectivity and network coding, and evaluates its performance using a novel combination of the ns-3 mmWave module, real video traces and the network coding library Kodo. The results show that it is indeed possible to reliably stream video over cellular mmWave links, and that the combination of multi-connectivity and network coding can support high video quality with low latency.
Comment: To be presented at the 2018 IEEE International Conference on Computing, Networking and Communications (ICNC), March 2018, Maui, Hawaii, USA (invited paper). 6 pages, 4 figures.
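The paper relies on the Kodo library for its network coding; as a rough illustration of the underlying idea (a toy GF(2) version, not Kodo's API, with hypothetical helper names `rlnc_encode`/`rlnc_decode`), random linear network coding sends XOR combinations of the source packets, and any k linearly independent combinations, arriving over any mix of links, let the receiver recover the originals by Gaussian elimination:

```python
import random

def rlnc_encode(packets, n_coded, seed=0):
    """Toy random linear network coding over GF(2): each coded packet is
    the XOR of a random subset of the k equal-length source packets and
    travels together with its coefficient vector."""
    rng = random.Random(seed)
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):              # avoid the useless all-zero combination
            coeffs[rng.randrange(k)] = 1
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2); returns the k source packets,
    or None if the received combinations do not have full rank."""
    pivots = {}                          # pivot column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs, payload = list(coeffs), bytearray(payload)
        for col, (pc, pp) in pivots.items():   # reduce against known pivots
            if coeffs[col]:
                for i in range(k):
                    coeffs[i] ^= pc[i]
                for i in range(len(payload)):
                    payload[i] ^= pp[i]
        for col in range(k):             # first surviving 1 becomes a pivot
            if coeffs[col]:
                pivots[col] = (coeffs, payload)
                break
    if len(pivots) < k:
        return None
    for col in sorted(pivots, reverse=True):   # back-substitution
        pc, pp = pivots[col]
        for col2, (c2, p2) in pivots.items():
            if col2 != col and c2[col]:
                for i in range(k):
                    c2[i] ^= pc[i]
                for i in range(len(p2)):
                    p2[i] ^= pp[i]
    return [bytes(pivots[c][1]) for c in range(k)]
```

With three source packets and the combinations e1, e1⊕e2, e1⊕e2⊕e3, the decoder recovers the originals; sending a few redundant combinations over a second mmWave link is what makes the scheme robust to per-link losses. A real deployment would use Kodo's optimized finite-field arithmetic instead.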
A parallel H.264/SVC encoder for high definition video conferencing
In this paper we present a video encoder specially developed and configured for high definition (HD) video conferencing. This video encoder brings together the following three requirements: H.264/Scalable Video Coding (SVC), parallel encoding on multicore platforms, and parallel-friendly rate control. With the first requirement, a minimum quality of service to every end-user receiver over Internet Protocol networks is guaranteed. With the second, real-time execution is accomplished; for this purpose, slice-level parallelism for the main encoding loop is combined with block-level parallelism for the upsampling and interpolation filtering processes. With the third, proper HD video content delivery under certain bit rate and end-to-end delay constraints is ensured. The experimental results show that the proposed H.264/SVC video encoder is able to operate in real time over a wide range of target bit rates, at the expense of reasonable rate-distortion losses due to the partitioning of frames into slices.
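Slice-level parallelism of the kind described above can be sketched as follows (an illustrative outline under assumed structure, not the authors' encoder; `encode_slice` is a hypothetical callback): a frame's macroblock rows are split into contiguous slices and handed to a thread pool, since slices break prediction dependencies at their boundaries, which is also the source of the rate-distortion loss the abstract mentions.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame_in_slices(mb_rows, n_slices, encode_slice):
    """Partition a frame's macroblock rows into n_slices contiguous
    slices and encode them concurrently. `encode_slice` stands in for a
    real slice encoder producing one independent bitstream per slice."""
    k, r = divmod(len(mb_rows), n_slices)
    slices, start = [], 0
    for i in range(n_slices):
        end = start + k + (1 if i < r else 0)   # spread the remainder evenly
        slices.append(mb_rows[start:end])
        start = end
    with ThreadPoolExecutor(max_workers=n_slices) as pool:
        return list(pool.map(encode_slice, slices))

# 10 macroblock rows into 3 slices -> slice sizes [4, 3, 3]
print(encode_frame_in_slices(list(range(10)), 3, len))  # → [4, 3, 3]
```

Because each slice is self-contained, the per-frame encoding time approaches that of the largest slice, at the cost of losing cross-slice prediction.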
3D video coding and transmission
The capture, transmission, and display of 3D content has gained a lot of attention in the last few years. 3D multimedia content is no longer confined to cinema theatres but is being transmitted using stereoscopic video over satellite, shared on Blu-Ray™ discs, or sent over Internet technologies. Stereoscopic displays are needed at the receiving end, and the viewer needs to wear special glasses that present the two versions of the video to the human vision system, which then generates the 3D illusion. To be more effective and improve the immersive experience, more views are acquired from a larger number of cameras and presented on different displays, such as autostereoscopic and light-field displays. These multiple views, combined with depth data, also allow enhanced user experiences and new forms of interaction with the 3D content from virtual viewpoints. This type of audiovisual information is represented by a huge amount of data that needs to be compressed and transmitted over bandwidth-limited channels. Part of the COST Action IC1105 "3D Content Creation, Coding and Transmission over Future Media Networks" (3DConTourNet) focuses on this research challenge.