Signal Processing Challenges in Distributed Stream Processing Systems
Distributed stream processing represents a novel computing paradigm where data, sensed externally and possibly preprocessed, is pushed asynchronously to various connected computing devices with heterogeneous capabilities for processing. It enables novel applications typically characterized by the need to process high-volume data streams in a timely and responsive fashion. Example applications include sensor networks, location-tracking services, distributed speech recognition, and network management. Recent work in large-scale distributed stream processing tackles various research challenges both in the application domain and in the underlying system. The main focus of this paper is to highlight some of the signal processing challenges that such a novel computing framework brings. We first briefly introduce the main concepts behind distributed stream processing. Then we define the notion of relevant information from two related information-theoretic approaches. Finally, we survey existing techniques for sensing and quantizing the information given a set of classification, detection and estimation tasks, which we refer to as task-driven signal processing. We also address some of the related unexplored research challenges.
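A minimal illustration of the task-driven idea (a toy sketch of our own, not the paper's formulation): a sensor that must report a single bit for a binary detection task can pick its quantization threshold to minimize detection error rather than reconstruction distortion. Assuming unit-variance Gaussian observations under the two hypotheses and equal priors:

```python
import math

def gaussian_tail(x):
    """Q-function: P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def detection_error(t, mu):
    """Error probability of a 1-bit quantizer with threshold t,
    deciding H1 when x > t; H0 ~ N(0,1), H1 ~ N(mu,1), equal priors."""
    false_alarm = gaussian_tail(t)        # P(x > t | H0)
    miss = 1.0 - gaussian_tail(t - mu)    # P(x <= t | H1)
    return 0.5 * (false_alarm + miss)

def task_driven_threshold(mu, lo=-5.0, hi=5.0, step=1e-3):
    """Grid-search the threshold that minimizes the task (detection) error."""
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return min(grid, key=lambda t: detection_error(t, mu))
```

For symmetric Gaussians the optimum lands at mu/2, the likelihood-ratio threshold; the point is that the quantizer is designed against the task objective, not against a fidelity metric such as mean-squared error.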
Securing Media for Adaptive Streaming
This paper describes the ARMS system, which enables secure and adaptive rich media streaming to a large-scale, heterogeneous client population. The secure streaming algorithms ensure end-to-end security while the content is adapted and streamed via intermediate, potentially untrusted servers. ARMS streaming is fully standards-compliant and is, to our knowledge, the first such end-to-end MPEG-4-based system.
Optimal Proxy Management for Multimedia Streaming in Content Distribution Networks
The widespread use of the Internet and the maturing of digital video technology have led to an increase in various streaming media applications. As broadband to the home becomes more prevalent, the bottleneck in delivering quality streaming media is shifting upstream to the backbone, peering links, and the best-effort Internet. In this paper, we address the problem of efficiently streaming video assets to end clients over a distributed infrastructure consisting of origin servers and proxy caches. We build on earlier work and propose a unified mathematical framework under which various server scheduling and proxy cache management algorithms for video streaming can be analyzed. More precisely, we incorporate known server scheduling algorithms (batching/patching/batch-patching) and proxy caching algorithms (full/partial/no caching, with or without caching patch bytes) into our framework and analyze the minimum backbone bandwidth consumption under the optimal joint scheduling and caching strategies. We start by studying the optimal policy for streaming a single video object and derive a simple heuristic for managing multiple heterogeneous videos efficiently. We then show that the performance of our heuristic is close to that of the optimal scheme under a wide range of parameters.
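To give a flavor of the bandwidth accounting such a framework performs, here is a standard threshold-patching model (a textbook sketch, not the paper's exact formulation): with Poisson request arrivals of rate lam and a video of length L, one full backbone stream per cycle is shared by all requests arriving within a patching window w, and each of those requests receives a unicast patch for the part it missed:

```python
def patching_backbone_rate(L, lam, w):
    """Average backbone bandwidth (in units of the stream playback rate)
    for threshold patching: one full stream of length L per cycle, plus
    expected patch bytes lam*w^2/2, over an expected cycle of w + 1/lam."""
    return (L + lam * w * w / 2.0) / (w + 1.0 / lam)

def best_patching_window(L, lam, step=0.01):
    """Grid-search the patching window that minimizes backbone bandwidth."""
    grid = [i * step for i in range(int(L / step) + 1)]
    return min(grid, key=lambda w: patching_backbone_rate(L, lam, w))
```

In closed form the minimizer solves lam*w^2/2 + w - L = 0, i.e. w* = (sqrt(1 + 2*lam*L) - 1)/lam; at w = 0 the model degenerates to one unicast stream per request, with backbone rate lam*L.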
Joint Server Scheduling and Proxy Caching for Video Delivery
We consider the delivery of video assets over a best-effort network, possibly through a caching proxy located close to the clients generating the requests. We are interested in the joint server scheduling and prefix/partial caching strategy that minimizes the aggregate transmission rate over the backbone network (i.e., the average server output rate) given a cache of fixed capacity. We present multiple schemes that address various service levels and client resources by enabling bandwidth and cache-space tradeoffs. We also propose an optimization algorithm that selects the working set of asset prefixes. We detail algorithms for practical implementation of our schemes. Simulation results show that our schemes dramatically outperform the full caching technique.
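To give a flavor of the prefix working-set selection (a simplified stand-in for the paper's optimization, assuming plain unicast delivery so that caching a prefix of length p for an asset requested at rate lam saves lam*p of backbone rate), the problem reduces to a fractional knapsack solved greedily by request rate:

```python
def allocate_prefixes(rates, lengths, cache_capacity):
    """Greedy prefix allocation: assets with higher request rates save
    more backbone bandwidth per cached byte, so fill the cache in
    decreasing order of rate. Returns per-asset cached prefix lengths."""
    order = sorted(range(len(rates)), key=lambda i: rates[i], reverse=True)
    prefixes = [0.0] * len(rates)
    remaining = cache_capacity
    for i in order:
        if remaining <= 0:
            break
        prefixes[i] = min(lengths[i], remaining)  # cache as much as fits
        remaining -= prefixes[i]
    return prefixes
```

For example, allocate_prefixes([2.0, 0.5], [100.0, 100.0], 150.0) caches all of the hot asset and half of the cold one; the joint scheduling/caching schemes in the paper refine this picture by also accounting for multicast sharing at the server.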
Defining user perception of distributed multimedia quality
This article presents the results of a study that explored the human side of the multimedia experience. We propose a model that assesses quality variation at three distinct levels: the network, the media and the content levels; and from two views: the technical and the user perspective. By varying parameters at each quality level and from each perspective, we were able to examine their impact on user quality perception. Results show that a significant reduction in frame rate does not proportionally reduce the user's understanding of the presentation, independent of technical parameters; that multimedia content type significantly impacts user information assimilation, user level of enjoyment, and user perception of quality; and that the display device type impacts user information assimilation and user perception of quality. Finally, to ensure the transfer of information, low-level abstraction (network-level) parameters, such as delay and jitter, should be adapted; to maintain the user's level of enjoyment, high-level abstraction (content-level) quality parameters, such as the appropriate use of display screens, should be adapted.