
    Media Processing in Video Conferences for Cooperating Over the Top and Operator Based Networks

    Telecom operators have long dominated the communication industry by providing services with guaranteed quality of service. Such services are provided by the operator at the cost of maintaining a high-grade network. With the introduction of broadband and the Internet, many over-the-top (OTT) services have emerged. These services use the underlying operator networks as a mere bit pipe, while all service intelligence resides in the application running on the client device. The introduction of OTT services has been well received by general users, who are no longer bound to the services provided by the network operator. This, in turn, has caused operators and telecom companies to lose ownership of their customers. This thesis takes media processing in video conferencing as a case study to compare the two competing domains of operator networks and OTT networks. Both domains offer video conferencing to end users, but they follow different architectures. The study shows that OTT services can perform much better if they utilize the support of the underlying network, which would also bring the user base back to the network operator. The proposal is to turn the competition between both parties into cooperation. Assessments are made from both technical and business perspectives to show that such cooperative agreements are possible and should be tried out in real life.

    Advanced solutions for quality-oriented multimedia broadcasting

    Multimedia content is increasingly being delivered via different types of networks to viewers in a variety of locations and contexts, using a variety of devices. The ubiquitous nature of multimedia services comes at a cost, however. The successful delivery of multimedia services requires overcoming numerous technological challenges, many of which have a direct effect on the quality of the multimedia experience. For example, due to dynamically changing requirements and networking conditions, the delivery of multimedia content has traditionally adopted a best-effort approach. However, this approach has often led to the end-user perceived quality of multimedia-based services being negatively affected. Yet the quality of multimedia content is a vital issue for the continued acceptance and proliferation of these services. Indeed, end-users are becoming increasingly quality-aware in their expectations of the multimedia experience and demand an ever-widening spectrum of rich multimedia-based services. As a consequence, there is a continuous and extensive research effort, by both industry and academia, to find solutions for improving the quality of multimedia content delivered to users; in addition, international standards bodies, such as the International Telecommunication Union (ITU), are renewing their efforts on the standardization of multimedia technologies. Research has explored very different directions in the attempt to improve the quality of rich media content delivered over various network types. It is in this context that this special issue on broadcast multimedia quality of the IEEE Transactions on Broadcasting illustrates some of these avenues and presents some of the most significant research results obtained by various teams of researchers from many countries. This special issue provides an example, albeit inevitably limited, of the richness and breadth of current research on multimedia broadcasting services. The research issues addressed in this special issue include, among others, factors that influence user-perceived quality, encoding-related quality assessment and control, transmission- and coverage-based solutions, and objective quality measurements.

    STEER: Exploring the dynamic relationship between social information and networked media through experimentation

    With the growing popularity of social networks, online video services and smartphones, traditional content consumers are becoming the editors and broadcasters of their own stories. Within the EU FP7 project STEER, the project partners have developed a novel system of new algorithms and toolsets that extract and analyse the social informatics generated by social networks. Combined with advanced networking technologies, the platform creates services that offer more personalized and accurate content discovery and retrieval. The STEER system has been deployed in multiple geographical locations during live social events such as the 2014 Winter Olympics. Our use-case experiments demonstrate the feasibility and efficiency of the underlying technologies.

    An autonomic delivery framework for HTTP adaptive streaming in multicast-enabled multimedia access networks

    The consumption of multimedia services over HTTP-based delivery mechanisms has recently gained popularity due to their increased flexibility and reliability. Traditional broadcast TV channels are now offered over the Internet in order to support Live TV for a broad range of consumer devices. Moreover, service providers can greatly benefit from offering external live content (e.g., YouTube, Hulu) in a managed way. Recently, HTTP Adaptive Streaming (HAS) techniques have been proposed in which video clients dynamically adapt their requested video quality level based on the current network and device state. Unlike linear TV, traditional HTTP- and HAS-based video streaming services depend on unicast sessions, leading to a network traffic load proportional to the number of multimedia consumers. In this paper we propose a novel HAS-based video delivery architecture, which features intelligent multicasting and caching in order to decrease the required bandwidth considerably in a Live TV scenario. Furthermore, we discuss the autonomic selection of multicast content to support Video on Demand (VoD) sessions. Experiments were conducted in a large-scale and realistic emulation environment and compared with a traditional HAS-based media delivery setup using only unicast connections.
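
    A minimal sketch, not taken from the paper, of the client-side rate adaptation that HAS services like the one described above rely on: the client requests the highest quality level whose bitrate fits a safety margin of its recent throughput estimate, and drops to the lowest level when its playout buffer runs low. The quality ladder, margin, and threshold below are assumptions for illustration.

        # Hypothetical HAS client-side rate adaptation (illustrative, not the paper's algorithm).
        BITRATES_KBPS = [400, 1000, 2500, 5000]   # assumed quality ladder
        SAFETY_MARGIN = 0.8                       # use only 80% of the measured throughput
        LOW_BUFFER_S = 4.0                        # buffer level below which we play it safe

        def select_quality(throughput_kbps: float, buffer_level_s: float) -> int:
            """Return the index into BITRATES_KBPS to request for the next segment."""
            if buffer_level_s < LOW_BUFFER_S:
                return 0                          # lowest quality to avoid stalling
            budget = throughput_kbps * SAFETY_MARGIN
            level = 0
            for i, rate in enumerate(BITRATES_KBPS):
                if rate <= budget:
                    level = i
            return level

        # Example: 4 Mbit/s measured throughput and a healthy buffer select the 2500 kbps level.
        assert select_quality(4000.0, 12.0) == 2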

    Advanced Videoconferencing based on WebRTC

    Lately, videoconferencing applications have evolved towards the World Wide Web. New technologies have given browsers real-time communication capabilities. In this context, WebRTC aims to provide this functionality by following and defining standards. Being a new effort, WebRTC still lacks advanced videoconferencing services such as session recording, media mixing, and adaptation to varying network conditions. This paper analyzes these challenges and proposes, as a solution, an architecture based on a traditional communications entity, the Multipoint Control Unit (MCU).
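
    To make the MCU idea concrete, the following is a minimal sketch, with entirely hypothetical names, of the centralized topology the paper builds on: each participant uploads one stream to the MCU, which mixes the streams and returns a single composite stream to every participant, so per-client bandwidth stays constant regardless of conference size. A real WebRTC MCU would of course operate on RTP media rather than byte strings.

        # Illustrative MCU topology sketch (hypothetical; not the paper's implementation).
        from typing import Dict, List

        class SimpleMCU:
            def __init__(self) -> None:
                self.latest_frame: Dict[str, bytes] = {}  # last frame received per participant

            def on_upstream_frame(self, participant: str, frame: bytes) -> None:
                self.latest_frame[participant] = frame

            def compose(self) -> bytes:
                # Stand-in for real video mixing/layout: concatenate the latest frames.
                return b"".join(self.latest_frame[p] for p in sorted(self.latest_frame))

            def downstream_frames(self) -> List[bytes]:
                # Every participant receives the same composite stream.
                mixed = self.compose()
                return [mixed for _ in self.latest_frame]

        mcu = SimpleMCU()
        mcu.on_upstream_frame("alice", b"A1")
        mcu.on_upstream_frame("bob", b"B1")
        print(mcu.downstream_frames())  # two identical composite "frames"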

    An Improved Active Network Concept and Architecture for Distributed and Dynamic Streaming Multimedia Environments with Heterogeneous Bandwidths

    A problem in today's Internet infrastructure may occur when a streaming multimedia application takes place. The information content of video and audio signals that contain moving or changing scenes may simply be too great for Internet clients with low bandwidth capacity if no adaptation is performed. In order to satisfactorily reach clients with various bandwidth capacities, approaches such as receiver-driven multicast and resilient overlay networks (RON) have been developed. However, these efforts mainly call for modifications to router-level management or place an additional layer on the Internet structure, which is not recommended in the near future due to the high acceptance level and wide utilization of the current Internet structure, and the lengthy and tiring standardization process required for a new structure or modification to be accepted. We have developed an improved active network approach for distributed and dynamic streaming multimedia environments with heterogeneous bandwidths, such as the Internet. The friendly active network system (FANS) is an instance of our approach. Adopting the application level active network (ALAN) mechanism, FANS participants and available media are referenced through their uniform resource locators (URLs). The system intercepts traffic flowing from source to destination and performs media post-processing at an intermediate peer. The processing is performed at the application level instead of at the router level, which was the original approach of active networks. FANS requires no changes in router-level management, places no additional requirements on the current Internet architecture and is, hence, instantly applicable. In comparison with ALAN, FANS has two significant differences. First, ALAN requires three minimum elements: clients, servers, and dynamic proxy servers; FANS, on the other hand, unifies the functionalities of those three elements, so each peer in FANS is at once a client, an intermediate peer, and a media server. Second, the FANS member-tracking system dynamically detects the existence of a newly joined computer or mobile device, given that its URL is available and announced; in ALAN, the servers and the middle nodes are known a priori and, hence, static. The application-level approach and better performance characteristics also distinguish our work from another similar work in this field, which uses a router-level approach. The approach offers, in general, the following improvements: FANS promotes QoS fairness, in which clients with lower bandwidth are accommodated and receive better quality of service; FANS introduces a new algorithm to determine whether or not the involvement of intermediate peer(s) to perform media post-processing enhancement services is necessary, which is important because intermediate post-processing increases the delay and should therefore only be employed selectively; and FANS considers the size of the media data and the capacity of the clients' bandwidth as the network parameters that determine the level of quality of service offered. By employing the above techniques, our experiments with an Internet emulator show that our approach improves the reliability of streaming media applications in such environments.
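
    The abstract mentions an algorithm that decides whether an intermediate peer should perform post-processing, given that such processing adds delay, but does not spell it out. The following is a hypothetical sketch of one such decision rule, assuming the media size, the client's bandwidth, the expected size reduction, and the processing delay are known: involve the intermediate peer only when it lowers the client's total delivery time.

        # Hypothetical decision rule in the spirit of FANS (the actual algorithm is not
        # given in the abstract): post-process at an intermediate peer only if it reduces
        # the client's total delivery time despite the added processing delay.
        def use_intermediate_peer(media_bits: float,
                                  client_bandwidth_bps: float,
                                  reduction_ratio: float,
                                  processing_delay_s: float) -> bool:
            """reduction_ratio is the post-processed media size relative to the original."""
            direct_time = media_bits / client_bandwidth_bps
            processed_time = (media_bits * reduction_ratio) / client_bandwidth_bps + processing_delay_s
            return processed_time < direct_time

        # A 100 Mbit video to a 1 Mbit/s client: reducing it to 25% of its size pays off
        # even with 10 s of processing delay.
        print(use_intermediate_peer(100e6, 1e6, 0.25, 10.0))  # True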

    A Comprehensive Analysis of Swarming-based Live Streaming to Leverage Client Heterogeneity

    Due to missing IP multicast support on an Internet scale, over-the-top media streams are delivered with the help of overlays, as used by content delivery networks and their peer-to-peer (P2P) extensions. In this context, mesh/pull-based swarming plays an important role, either as a pure streaming approach or in combination with tree/push mechanisms. However, the impact of realistic client populations with heterogeneous resources is not yet fully understood. In this technical report, we contribute to closing this gap by mathematically analysing the most basic scheduling mechanisms, latest deadline first (LDF) and earliest deadline first (EDF), in a continuous-time Markov chain framework and combining them into a simple, yet powerful, mixed strategy to leverage inherent differences in client resources. The main contributions are twofold: (1) a mathematical framework for swarming on random graphs is proposed, with a focus on LDF and EDF strategies in heterogeneous scenarios; (2) a mixed strategy, named SchedMix, is proposed that leverages peer heterogeneity. The proposed strategy, SchedMix, is shown to outperform the other two strategies using different abstractions: a mean-field theoretic analysis of buffer probabilities, simulations of a stochastic model on random graphs, and a full-stack implementation of a P2P streaming system. (Technical report and supplementary material to http://ieeexplore.ieee.org/document/7497234.)
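
    A small sketch, with assumptions of my own rather than the report's code, of the chunk-scheduling strategies being compared: EDF requests the missing chunk closest to its playout deadline, LDF the one furthest from it, and a SchedMix-style mixed strategy picks between the two with a probability that could be tuned per peer according to its resources.

        import random
        from typing import List, Optional

        # Illustrative scheduling strategies; a peer's state is the list of playout
        # deadlines (seconds from now) of its missing chunks.
        def edf(missing_deadlines: List[float]) -> Optional[float]:
            """Earliest deadline first: fetch the most urgent missing chunk."""
            return min(missing_deadlines, default=None)

        def ldf(missing_deadlines: List[float]) -> Optional[float]:
            """Latest deadline first: fetch the least urgent missing chunk."""
            return max(missing_deadlines, default=None)

        def schedmix(missing_deadlines: List[float], p_edf: float) -> Optional[float]:
            """Mixed strategy: act like EDF with probability p_edf, otherwise like LDF.
            p_edf would be chosen per peer, e.g. depending on its upload capacity."""
            if random.random() < p_edf:
                return edf(missing_deadlines)
            return ldf(missing_deadlines)

        deadlines = [1.2, 4.7, 9.3]
        print(edf(deadlines), ldf(deadlines), schedmix(deadlines, 0.5))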

    AngelCast: cloud-based peer-assisted live streaming using optimized multi-tree construction

    Increasingly, commercial content providers (CPs) offer streaming solutions using peer-to-peer (P2P) architectures, which promise significant scalability by leveraging clients' upstream capacity. A major limitation of P2P live streaming is that playout rates are constrained by clients' upstream capacities (typically much lower than downstream capacities), which limits the quality of the delivered stream. To leverage P2P architectures without sacrificing quality, CPs must commit additional resources to complement clients' resources. In this work, we propose a cloud-based service, AngelCast, that enables CPs to complement P2P streaming. By subscribing to AngelCast, a CP is able to deploy extra resources (angels) on demand from the cloud to maintain a desirable stream quality. Angels do not download the whole stream, nor are they in possession of it. Rather, angels only relay the minimal fraction of the stream necessary to achieve the desired quality. We provide a lower bound on the minimum angel capacity needed to maintain a desired client bit-rate, and develop a fluid-model construction to achieve it. Recognizing the limitations of the fluid-model construction, we design a practical multi-tree construction that captures the spirit of the optimal construction while avoiding its limitations. We present a prototype implementation of AngelCast, along with experimental results confirming the feasibility of our service. Supported in part by NSF awards #0720604, #0735974, #0820138, #0952145, #1012798, #1430145, and #1414119.
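
    The exact lower bound on angel capacity is not reproduced in the abstract, so the sketch below only illustrates the fluid-model intuition under a simplifying assumption of my own: the aggregate upload of the source, the clients, and the angels must cover the aggregate download demand implied by the target bit-rate.

        # Back-of-the-envelope fluid-model bound (my simplification, not the paper's exact
        # expression): total upload supply must cover total download demand.
        from typing import List

        def min_angel_capacity(target_rate: float,
                               client_uploads: List[float],
                               source_upload: float) -> float:
            """Extra cloud upload (same units as target_rate) needed so that all
            clients can receive the stream at target_rate."""
            demand = target_rate * len(client_uploads)    # total download demand
            supply = source_upload + sum(client_uploads)  # what source and peers can serve
            return max(0.0, demand - supply)

        # 10 clients wanting 2 Mbit/s each, 1 Mbit/s upload per client, a 4 Mbit/s source:
        # the angels must contribute at least 6 Mbit/s.
        print(min_angel_capacity(2.0, [1.0] * 10, 4.0))  # 6.0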