
    Delivering Live Multimedia Streams to Mobile Hosts in a Wireless Internet with Multiple Content Aggregators

    We consider the distribution of channels of live multimedia content (e.g., radio or TV broadcasts) via multiple content aggregators. In our work, an aggregator receives channels from content sources and redistributes them to a potentially large number of mobile hosts. Each aggregator can offer a channel in various configurations to cater for different wireless links, mobile hosts, and user preferences. As a result, a mobile host can generally choose from different configurations of the same channel offered by multiple alternative aggregators, which may be available through different interfaces (e.g., in a hotspot). A mobile host may need to hand off to another aggregator while it is receiving a channel. To prevent service disruption, it may, for instance, need to hand off when it leaves the subnets that make up its current aggregator's service area (e.g., a hotspot or a cellular network). In this paper, we present the design of a system that enables (multi-homed) mobile hosts to seamlessly hand off from one aggregator to another so that they can continue to receive a channel wherever they go. We concentrate on handoffs between aggregators as a result of a mobile host crossing a subnet boundary. As part of the system, we discuss a lightweight application-level protocol that enables mobile hosts to select the aggregator that provides the "best" configuration of a channel. The protocol comes into play when a mobile host begins to receive a channel and when it crosses a subnet boundary while receiving the channel. We show how our protocol can be implemented using the standard IETF session control and description protocols SIP and SDP. The implementation combines SIP and SDP's offer-answer model in a novel way.
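    To make the selection step concrete, the sketch below (Python, purely illustrative and not taken from the paper) shows one way a mobile host could rank alternative channel configurations announced by different aggregators; the configuration fields and the scoring rule are assumptions for the example, not the protocol the authors define.

```python
# Illustrative sketch: ranking channel configurations offered by multiple
# aggregators. The fields (bitrate_kbps, codec, interface) and the scoring
# rule are hypothetical, not the paper's actual protocol.

def score(config, link_capacity_kbps, preferred_codecs):
    """Return a score for one channel configuration; higher is better."""
    if config["bitrate_kbps"] > link_capacity_kbps:
        return float("-inf")  # configuration would not fit on this link
    codec_bonus = 10 if config["codec"] in preferred_codecs else 0
    return config["bitrate_kbps"] + codec_bonus  # prefer higher quality

def select_best(offers, link_capacity_kbps, preferred_codecs):
    """Pick the aggregator/configuration pair with the highest score."""
    return max(offers, key=lambda o: score(o, link_capacity_kbps, preferred_codecs))

offers = [
    {"aggregator": "A", "bitrate_kbps": 256, "codec": "AAC", "interface": "wlan0"},
    {"aggregator": "B", "bitrate_kbps": 128, "codec": "AMR", "interface": "wlan0"},
]
print(select_best(offers, link_capacity_kbps=384, preferred_codecs={"AAC"}))
```

    In the paper's setting, a comparable decision would be triggered both when the host first tunes in to a channel and again at each subnet crossing, with the candidate configurations carried in SDP offers.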

    Systems for the Nineties - Distributed Multimedia Systems

    We live at the dawn of the information age. The capabilities of computers to store and look up information are only just beginning to be exploited. As little as ten years ago, practically all the information stored in computers was entered and retrieved in the form of text. Today, we are just starting to use other means of communicating information between people and machines -- computers can now scan images, they can record sound, they can produce synthesized speech, and they can show two- and three-dimensional images of spatial data. The realization that we are still at the beginning of the information age comes when we notice the vast difference between the way in which people interact with each other and the way in which people can interact with (or through) machines. When people communicate, they tend to use speech, gestures, touch, even smell; they draw pictures on the white board, they use text, pictures, photos, graphs, sometimes even video presentations. Interpersonal communication is truly multimedia communication in that it makes use of all our senses.

    Telematics programme (1991-1994). EUR 15402 EN


    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
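    As a rough illustration of the content-based retrieval techniques such a survey covers, the following sketch (Python with NumPy; the feature vectors are random placeholders, not anything from the CHORUS report) ranks indexed items by cosine similarity to a query descriptor.

```python
import numpy as np

# Illustrative content-based retrieval: each indexed item is represented by a
# feature vector (e.g., a colour histogram or audio descriptor); the query is
# matched against the index by cosine similarity. Vectors here are random
# placeholders standing in for real multimedia features.

rng = np.random.default_rng(0)
index = rng.random((1000, 64))          # 1000 indexed items, 64-dim features
query = rng.random(64)                  # feature vector of the query item

index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)

similarity = index_norm @ query_norm    # cosine similarity per indexed item
top10 = np.argsort(similarity)[::-1][:10]
print("best matches:", top10)
```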

    Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    In order to ensure optimal quality of experience for end users during video streaming, automatic video quality assessment has become an important field of interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it produces reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
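    A minimal sketch of genetic programming-based symbolic regression on bitstream-style features is shown below, assuming the third-party gplearn package; the four features and the synthetic quality scores are invented for illustration and are not the paper's data set or model.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # third-party GP library (assumed available)

# Illustrative only: evolve a white-box formula mapping hypothetical bitstream
# features (e.g., quantisation parameter, bitrate, motion intensity, packet
# loss) to a subjective quality score. The data below is synthetic.
rng = np.random.default_rng(42)
X = rng.random((200, 4))               # 200 clips, 4 bitstream features each
y = 5.0 - 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 3] + rng.normal(0, 0.1, 200)

model = SymbolicRegressor(population_size=500, generations=20,
                          function_set=("add", "sub", "mul", "div"),
                          random_state=0)
model.fit(X, y)

print(model._program)          # the evolved symbolic expression (white-box model)
print(model.predict(X[:5]))    # estimated quality for the first five clips
```

    Because the fitted model is an explicit expression over the input features, one can read off which bitstream parameters dominate the prediction, which is the interpretability benefit the abstract highlights.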