Dynamic adaptation of streamed real-time E-learning videos over the internet
Even though e-learning is becoming increasingly popular in the academic environment, the quality of synchronous e-learning video is still substandard and significant work needs to be done to improve it. The improvements have to take into consideration both the network requirements and the psychophysical aspects of the human visual system.
One of the problems of synchronous e-learning video is that usually only a head-and-shoulder video of the instructor is transmitted. The presentation can be made more engaging by transmitting shots from different angles and zoom levels. Unfortunately, the transmission of such multi-shot videos increases packet delay, jitter and other artefacts caused by frequent scene changes. To some extent these problems may be reduced by a controlled reduction of video quality, so as to minimise uncontrolled corruption of the stream. Hence, there is a need for controlled streaming of multi-shot e-learning video in response to the changing availability of bandwidth, while utilising the available bandwidth to the maximum.
The quality of transmitted video can be improved by removing redundant background data and using the available bandwidth to send high-resolution foreground information. While a number of schemes exist to identify and separate the background from the foreground, very few studies base this separation on an understanding of the human visual system. Research has been carried out to define foreground and background in the context of e-learning video on the basis of human psychology, and the results have been used to propose methods for improving the transmission of e-learning videos.
In order to transmit the video sequence efficiently, this research proposes the use of Feed-Forward Controllers that dynamically characterise the ongoing scene and adjust the streaming of video according to the available bandwidth. In order to satisfy a number of receivers connected by links of varied bandwidth in a heterogeneous environment, a Multi-Layer Feed-Forward Controller has been researched. This controller dynamically characterises the complexity (number of macroblocks per frame) of the ongoing video sequence and combines it with knowledge of the bandwidth available to the various receivers to divide the video sequence into layers in an optimal way before transmitting it into the network.
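As a rough illustration of how a sequence might be divided into layers for heterogeneous receivers, the sketch below derives incremental layer rates directly from the receivers' bandwidths. This is a common layered-multicast heuristic, not the thesis's actual optimisation; the function name and units are illustrative.

```python
def layer_rates(receiver_bw_kbps):
    """Derive incremental layer rates from the receivers' bandwidths.

    Each distinct bandwidth value becomes the cumulative rate of one
    layer, so every receiver can subscribe up to the highest layer its
    link can carry without waste.
    """
    cumulative = sorted(set(receiver_bw_kbps))
    rates, prev = [], 0
    for c in cumulative:
        rates.append(c - prev)  # incremental rate added by this layer
        prev = c
    return rates
```

For example, receivers at 300, 800 and 1500 kbps yield three layers of 300, 500 and 700 kbps, whose cumulative sums match each receiver class exactly.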
The Single-Layer Feed-Forward Controller takes as input the complexity (Spatial Information and Temporal Information) of the ongoing video sequence along with the bandwidth available to a receiver, and adjusts the resolution and frame rate of individual scenes so that the transmitted sequence gives the most acceptable perceptual quality within the bandwidth constraints.
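The single-layer behaviour described above can be caricatured in a few lines. The 30 fps baseline, the base rate and the trade-off rule (sacrifice resolution in motion-dominated scenes, frame rate in detail-dominated ones) are assumptions for illustration, not the controller's actual parameters.

```python
def adapt_scene(si, ti, bandwidth_kbps, base_rate_kbps=1000):
    """Choose a (resolution_scale, frame_rate) pair for one scene.

    si, ti: normalised spatial/temporal information of the scene (0..1).
    bandwidth_kbps: bandwidth currently available to the receiver.
    base_rate_kbps: assumed rate for full resolution at 30 fps.
    """
    # Fraction of the required rate that the network can carry.
    ratio = min(1.0, bandwidth_kbps / base_rate_kbps)
    if ti > si:
        # Motion-dominated scene: preserve frame rate, reduce resolution.
        return ratio, 30
    # Detail-dominated scene: preserve resolution, reduce frame rate.
    return 1.0, max(10, round(30 * ratio))
```

At half the base rate, a detail-heavy scene keeps full resolution at 15 fps, while a motion-heavy scene keeps 30 fps at half resolution.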
The performance of the Feed-Forward Controllers has been evaluated under simulated conditions; they have been found to effectively regulate the streaming of real-time e-learning videos, providing perceptually improved video quality within the constraints of the available bandwidth.
Scalable Video Streaming over the Internet
The objectives of this thesis are to investigate the challenges of video streaming, to explore and compare different video streaming mechanisms, and to develop video streaming algorithms that maximize visual quality. To achieve these objectives, we first investigate scalable video multicasting schemes by comparing layered video multicasting with replicated stream video multicasting. Even though it has been generally accepted that layered video multicasting is superior to replicated stream multicasting, this assumption is not based on a systematic and quantitative comparison. We argue that there are indeed scenarios where replicated stream multicasting is the preferred approach.
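A toy model makes the scenario-dependence concrete. On a link shared by heterogeneous receivers, a layered scheme carries only the union of the subscribed layers, while each requested replicated stream is carried whole. The figures below are illustrative and ignore the coding overhead that layering adds, which is precisely what can tip the balance back toward replication when receivers are homogeneous.

```python
def layered_link_cost(subscribed_layers, layer_rates):
    """Bandwidth on a shared link under layered multicast: the highest
    layer requested downstream determines the cumulative rate carried."""
    top = max(subscribed_layers)
    return sum(layer_rates[:top + 1])

def replicated_link_cost(subscribed_streams, stream_rates):
    """Bandwidth on a shared link under replicated-stream multicast:
    every distinct stream requested downstream is carried in full."""
    return sum(stream_rates[s] for s in set(subscribed_streams))
```

With layers of 300/500/700 kbps versus standalone streams of 300/800/1500 kbps, a link serving both a low-end and a high-end receiver carries 1500 kbps layered but 1800 kbps replicated; a link serving only one class carries the same amount either way, so layering's own overhead would then decide.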
We also consider the problem of providing perceptually good quality of layered VBR video. This problem is challenging because the dynamic behavior of the Internet's available bandwidth makes it difficult to provide consistently good quality. Also, a video encoded to provide a consistent quality exhibits significant data rate variability. We are, therefore, faced with the problem of accommodating the mismatch between the variability of the available bandwidth and the data rate variability of the encoded video. We propose an optimal quality adaptation algorithm that minimizes quality variation while at the same time increasing the utilization of the available bandwidth.
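One conservative adaptation policy in the spirit described above (though not the thesis's optimal algorithm) adds a layer only after the bandwidth has sustained the higher rate for several consecutive steps, and drops a layer immediately on shortfall, trading some bandwidth utilization for fewer quality switches:

```python
def adapt_layers(bandwidth_trace, layer_rate, max_layers, hold=3):
    """Return the number of subscribed layers at each time step.

    A layer is added only after `hold` consecutive steps in which the
    bandwidth could sustain one more layer; a layer is dropped as soon
    as the current subscription becomes unsustainable.
    """
    layers, stable, schedule = 1, 0, []
    for bw in bandwidth_trace:
        if bw < layers * layer_rate and layers > 1:
            layers -= 1          # current quality unsustainable: drop
            stable = 0
        elif bw >= (layers + 1) * layer_rate and layers < max_layers:
            stable += 1
            if stable >= hold:   # bandwidth has been high long enough
                layers += 1
                stable = 0
        else:
            stable = 0
        schedule.append(layers)
    return schedule
```

A brief bandwidth spike is ignored, while a sustained improvement earns an extra layer; this is the basic mechanism by which quality variation is damped.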
Finally, we investigate the transmission control protocol (TCP) as a transport layer protocol for streaming packetized media data. Our approach is to model a video streaming system and derive relationships under which a system employing TCP achieves the desired performance. Both simulation results and Internet experimental results validate this model and demonstrate that the derived buffering delay requirements achieve the desired video quality with high accuracy. Based on these relationships, we also develop real-time estimation algorithms for playout buffer requirements.
Ph.D. Committee Chair: Mostafa H. Ammar; Committee Co-Chair: Yucel Altunbasak; Committee Member: Chuanyi Ji; Committee Member: George Riley; Committee Member: Henry Owen; Committee Member: Jack Brassi
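The buffering-delay idea can be sketched in a discrete-time toy model (this is not the thesis's analytic derivation): given a per-step TCP throughput trace and a constant video consumption rate, find the smallest startup delay after which playback never underflows.

```python
def min_startup_delay(throughput, video_rate):
    """Smallest number of prebuffering steps so that playback at
    `video_rate` units/step never starves, given the per-step TCP
    `throughput` trace (playback runs while the rest downloads)."""
    n = len(throughput)
    prefix = [0]
    for x in throughput:
        prefix.append(prefix[-1] + x)   # cumulative data received
    for d in range(n + 1):
        # After delay d, by play step t we must have received at least
        # t * video_rate units of data.
        if all(prefix[d + t] >= t * video_rate
               for t in range(1, n - d + 1)):
            return d
    return n
```

With a trace [100, 100, 400, 400] and a 200 units/step video, starting immediately starves at the first step, but a one-step prebuffer suffices.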
Scalable Multiple Description Coding and Distributed Video Streaming over 3G Mobile Networks
In this thesis, a novel Scalable Multiple Description Coding (SMDC) framework is proposed. To address the bandwidth fluctuation, packet loss and heterogeneity problems of wireless networks, and to further enhance the error resilience tools in Moving Picture Experts Group 4 (MPEG-4), the joint design of layered coding (LC) and multiple description coding (MDC) is explored. It leverages a proposed distributed multimedia delivery mobile network (D-MDMN) to provide path diversity to combat streaming video outage due to handoff in the Universal Mobile Telecommunications System (UMTS). The corresponding intra-RAN (Radio Access Network) and inter-RAN handoff procedures in D-MDMN are studied in detail; they employ the principle of video stream re-establishment to replace the principle of data forwarding used in UMTS. Furthermore, a new IP (Internet Protocol) Differentiated Services (DiffServ) video marking algorithm is proposed to support the unequal error protection (UEP) of the LC components of SMDC. Performance evaluation is carried out through simulation using OPNET Modeler 9.0. Simulation results show that the proposed handoff procedures in D-MDMN outperform those in UMTS in terms of handoff latency, end-to-end delay and handoff scalability. Performance evaluation of our proposed IP DiffServ video marking algorithm is also undertaken, and shows that it is more suitable for video streaming in IP mobile networks than the previously proposed DiffServ video marking algorithm (DVMA).
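The simplest concrete instance of MDC is temporal odd/even frame splitting, sketched below. The SMDC framework in the thesis is considerably richer; this is only meant to show how two independently decodable descriptions provide path diversity, with crude frame-repetition concealment when one path fails.

```python
def split_descriptions(frames):
    """Split a frame sequence into two independently decodable
    descriptions by odd/even temporal subsampling."""
    return frames[0::2], frames[1::2]

def merge_descriptions(d0, d1):
    """Interleave two descriptions back into a sequence.  A lost
    description is passed as None; its frames are concealed by
    repeating each received frame from the surviving description."""
    if d0 is None or d1 is None:
        survivor = d1 if d0 is None else d0
        out = []
        for f in survivor:
            out.extend([f, f])  # repeat to fill the missing positions
        return out
    out = []
    for a, b in zip(d0, d1):
        out.extend([a, b])
    if len(d0) > len(d1):       # odd-length input: one trailing frame
        out.append(d0[-1])
    return out
```

If both descriptions arrive, the original order is restored exactly; if one is lost, playback degrades to half the temporal resolution instead of stalling.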
EMB: Efficient Multimedia Broadcast in Multi-tier Mobile Networks
Multimedia broadcast and multicast services (MBMS) in mobile networks have been widely addressed; however, an investigation of this technology in emerging multi-tier scenarios is still lacking. Notably, user clustering and resource allocation are extremely challenging in multi-tier networks, and they are imperative to maximize system capacity and improve quality of user experience (QoE) in MBMS. Thus, in this paper we propose a clustering and resource allocation approach, named EMB, which specifically addresses heterogeneous networks and accounts for the fact that multimedia content is adaptively encoded into scalable layers depending on the QoE requirements and channel conditions of the heterogeneous users. Importantly, we prove that our clustering algorithm yields Pareto-efficient broadcasting areas, multimedia encoding parameters, and resource allocation, in a way that is also fair to the users. Furthermore, numerical results obtained under realistic conditions and using real-world video content show that the proposed EMB results in a lower churn count (i.e., a higher number of served users), higher throughput, and increased QoE, while using fewer network resources.
Video QoS/QoE over IEEE802.11n/ac: A Contemporary Survey
The demand for video applications over wireless networks has increased tremendously, and IEEE 802.11 standards have provided increasing support for video transmission. However, providing Quality of Service (QoS) and Quality of Experience (QoE) for video over WLAN is still a challenge, due to the error sensitivity of compressed video and the dynamics of the wireless channel. This thesis presents a contemporary survey of video QoS/QoE issues and solutions over WLAN. The objective of the study is to provide an overview of the issues by conducting a background study on video codecs and their features and characteristics, followed by a study of QoS and QoE support in IEEE 802.11 standards. Since IEEE 802.11n is the current standard that is most widely deployed worldwide and IEEE 802.11ac is the upcoming standard, this survey investigates the most recent video QoS/QoE solutions based on these two standards. The solutions are divided into two broad categories: academic solutions and vendor solutions. Academic solutions are mostly based on three layers, namely the Application, Media Access Control (MAC) and Physical (PHY) layers, and are further divided into two major categories: single-layer solutions and cross-layer solutions. Single-layer solutions focus on a single layer to enhance video transmission performance over WLAN; cross-layer solutions involve two or more layers to provide a single QoS solution for video over WLAN. This thesis has also presented and technically analyzed QoS solutions by three popular vendors. It concludes that single-layer solutions are not directly related to video QoS/QoE, and that cross-layer solutions perform better than single-layer solutions but are much more complicated and not easy to implement. Most vendors rely on their network infrastructure to provide QoS for multimedia applications.
They have their own techniques and mechanisms, but the concept of providing QoS/QoE for video is almost the same, because they use the same standards and rely on Wi-Fi Multimedia (WMM) to provide QoS.
3D multiple description coding for error resilience over wireless networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobile communications has gained growing interest from both customers and service providers alike over the past decade or two. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and a richer perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one widely used technique in international video coding standards is error resilience.
The motivation behind this research work is that existing schemes for 2D colour video compression, such as MPEG, JPEG and H.263, cannot be applied directly to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth demanding, as they require the transmission of multiple high-bandwidth video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given a maximum bit-rate budget for representing the 3D scene, the bit rate should be allocated optimally between texture and depth information so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users.
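The texture/depth allocation problem can be sketched as a one-dimensional search, assuming hypothetical convex, decreasing rate-distortion curves; the thesis's actual distortion models and search granularity will differ.

```python
def allocate_bits(total_kbps, texture_rd, depth_rd):
    """Exhaustively split a bit budget between texture and depth so
    that the summed distortion is minimised.

    texture_rd / depth_rd: callables mapping a rate (kbps) to a
    distortion value (assumed convex and decreasing in rate).
    """
    best = None
    for r_tex in range(0, total_kbps + 1, 10):  # 10 kbps granularity
        r_dep = total_kbps - r_tex
        d = texture_rd(r_tex) + depth_rd(r_dep)
        if best is None or d < best[0]:
            best = (d, r_tex, r_dep)
    return best[1], best[2]
```

With a texture stream that is (hypothetically) harder to code than the depth map, the search assigns texture the larger share of the budget, which matches the usual texture/depth asymmetry in 3D video.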
This research work aims at enhancing the error resilience of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's quality of experience (QoE).
Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing: people's perception of 3D video is rated under error-free and error-prone conditions through the use of a carefully designed bespoke questionnaire.
Petroleum Technology Development Fund (PTDF)