20 research outputs found

    RTSP-based Mobile Peer-to-Peer Streaming System

    Peer-to-peer is emerging as a potentially disruptive technology for content distribution in the mobile Internet. In addition to the already well-known peer-to-peer file sharing, real-time peer-to-peer streaming is gaining popularity. This paper presents an effective real-time peer-to-peer streaming system for the mobile environment. The basis for the system is a scalable overlay network that groups peers into clusters according to their proximity, using RTT values between peers as the criterion for cluster selection. The actual media delivery in the system is implemented using the partial RTP stream concept: the original RTP sessions related to a media delivery are split into a number of so-called partial streams according to a predefined set of parameters, in a way that allows low-complexity reassembly of the original media session in real time at the receiving end. Partial streams also help in utilizing the upload capacity with finer granularity than per original stream. This is beneficial in mobile environments, where bandwidth can be scarce.
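    The partial-stream idea can be sketched as follows. The abstract does not specify the splitting parameters, so this sketch assumes a simple round-robin assignment by RTP sequence number; the packet representation and function names are illustrative, not from the paper.

    ```python
    # Sketch: split an ordered RTP packet sequence into k partial streams
    # (round-robin on sequence number, an assumed policy) and reassemble
    # the original session by merging on sequence number at the receiver.

    def split_into_partial_streams(packets, k):
        """Distribute packets (dicts with a 'seq' field) over k partial streams."""
        streams = [[] for _ in range(k)]
        for pkt in packets:
            streams[pkt["seq"] % k].append(pkt)
        return streams

    def reassemble(streams):
        """Low-complexity merge: each partial stream is already in seq order,
        so a sort on sequence number restores the original session."""
        merged = [pkt for stream in streams for pkt in stream]
        merged.sort(key=lambda pkt: pkt["seq"])
        return merged

    packets = [{"seq": i, "payload": f"frame-{i}"} for i in range(6)]
    parts = split_into_partial_streams(packets, 3)
    restored = reassemble(parts)
    ```

    Because each partial stream carries only a fraction of the session's bitrate, a peer with limited upload capacity can serve one or two partial streams instead of all-or-nothing, which is the finer-granularity benefit the abstract mentions.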

    Predictive Buffering for Streaming Video in 3G Networks

    No full text
    This paper presents a multimedia streaming service in a mobile (3G) environment that, in addition to in-band congestion signals such as packet losses and delay variations, receives congestion cues from a Network Coverage Map Service (NCMS) to make rate-control decisions. The streaming client routinely queries the NCMS to assess the network conditions at future locations along its expected path. When approaching an area with poor network performance, the client may ask the streaming server for short-term transmission bursts to increase pre-buffering and maintain media quality. If needed, the client may also switch to a different encoding rate (rate switching), depending on the severity of the expected congestion. These notifications are scheduled as late as possible, so that changes in network conditions and/or in the user's movements can be taken into account (late scheduling). Using this geo-predictive media streaming service, we show that the streaming client can provide pause-less playback and a better quality of experience to the user.
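    The client-side decision described above can be sketched as a small policy function. The abstract does not give concrete thresholds, so the half-rate cutoff for "severe" congestion and the 10-second buffer target below are assumptions for illustration only.

    ```python
    # Sketch of the geo-predictive rate-control decision: given the NCMS
    # bandwidth prediction for an upcoming location, choose between steady
    # streaming, a short pre-buffering burst, or a rate switch.
    # All thresholds are illustrative assumptions, not values from the paper.

    def plan_action(predicted_kbps, encoding_kbps, buffer_s, target_buffer_s=10.0):
        """Return 'steady', 'burst', or 'rate_switch' for the next segment."""
        if predicted_kbps >= encoding_kbps:
            return "steady"        # predicted throughput sustains current rate
        if predicted_kbps < 0.5 * encoding_kbps:
            return "rate_switch"   # severe shortfall: drop to a lower encoding rate
        if buffer_s < target_buffer_s:
            return "burst"         # mild shortfall: request a transmission burst
        return "steady"            # buffer already covers the expected bad patch
    ```

    Late scheduling would correspond to calling this function as close as possible to the predicted coverage gap, so the freshest NCMS data and movement estimates are used.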

    Multimodal Semantics Extraction from User-Generated Videos

    User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users that fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained at real sport events and live music performances.
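    The last step, detecting which camera users fall within another camera's field of view, can be sketched from the sensor modalities the abstract lists: GPS gives each user's position and the magnetometer gives the camera heading. The paper's actual method is not specified here; this sketch assumes a simple geometric test against an assumed horizontal field-of-view angle.

    ```python
    # Sketch: test whether one camera user lies within another camera's
    # horizontal field of view, using GPS positions and magnetometer
    # headings. The 60-degree FOV and the dict layout are assumptions.
    import math

    def bearing(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def in_field_of_view(cam, other, fov_deg=60.0):
        """True if `other`'s GPS position falls inside cam's horizontal FOV cone.

        cam/other: dicts with 'lat', 'lon'; cam also has a magnetometer
        'heading' in degrees clockwise from north.
        """
        b = bearing(cam["lat"], cam["lon"], other["lat"], other["lon"])
        # Smallest signed angular difference between bearing and heading.
        diff = abs((b - cam["heading"] + 180) % 360 - 180)
        return diff <= fov_deg / 2
    ```

    For example, a camera at the origin heading due east (90 degrees) would see a user directly to its east but not one directly to its north.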