
    Peer-to-peer stream merging for stored multimedia

    In recent years, with the rapid growth in the resource capabilities of both the Internet and personal computers, multimedia applications such as video-on-demand (VOD) streaming have grown dramatically and have been shown to be potential killer applications for the current and next-generation Internet. Scalable deployment of these applications has become an active problem area due to the potentially high server and network bandwidth required by these systems. The conventional approach in a VOD streaming system dedicates a media stream to each client request, which is not scalable in a wide-area delivery system serving potentially very large numbers of clients. Recently, various efficient delivery techniques have been proposed to improve the scalability of VOD delivery systems. One approach is to use a scalable delivery protocol based on multicast, such as periodic broadcast or stream merging. These protocols have mostly been developed for single-server systems and attempt to have each media stream serve as many clients as possible, so as to minimize the required server and network bandwidth. However, the performance improvements possible with techniques that deliver all streams from a single server are limited, especially with respect to the required network bandwidth. Another approach is based on proxy caching and content replication, as in content delivery networks (CDNs). Although this approach can effectively distribute load across multiple CDN servers, its cost may be high. With the focus on further improving system efficiency with respect to server and network bandwidth requirements, a new scalable streaming protocol is developed in this work. It adapts a previously proposed technique called hierarchical multicast stream merging (HMSM) to use a peer-to-peer delivery approach. To make media delivery more efficient, the conventional early merging policy associated with HMSM is extended to be compatible with the peer-to-peer environment, and various peer selection policies are designed for the initiation of media streams. The impact of limited peer resource capability is also studied. In the performance study, a number of simulation experiments are conducted to evaluate the performance of the new protocol and the various design policies, and promising results are reported.
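
    To make the bandwidth saving behind stream merging concrete, the sketch below uses a simplified, patching-style merge rule: a client arriving shortly after an active full-length stream receives only a short catch-up stream and then merges into it. This is an illustration under stated assumptions, not the thesis's exact HMSM policy, which is hierarchical and more elaborate; all function and variable names are invented for the example.

```python
# Simplified stream-merging accounting, assuming clients can receive two
# streams at once and merge into the most recent live full-length stream.
# This patching-style rule only approximates hierarchical merging (HMSM).

def total_server_time(arrivals, video_len):
    """Compare naive per-client unicast with merged delivery.

    Returns (naive, merged) total server transmission time.
    """
    arrivals = sorted(arrivals)
    naive = len(arrivals) * video_len        # one full-length stream per request
    merged = 0.0
    group_start = None
    for t in arrivals:
        if group_start is None or t - group_start >= video_len:
            group_start = t                  # no live full stream: start a new one
            merged += video_len
        else:
            merged += t - group_start        # short catch-up stream, then merge
    return naive, merged

# Six requests for a 90-unit video: merging cuts 540 stream-time units to 196.
print(total_server_time([0, 2, 3, 10, 95, 96], video_len=90))
```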

    Design and evaluation of content distribution networks for multimedia streaming services.

    Traditional Internet-based services for distributing files, such as Web browsing and e-mail, are offered via a single central server. More recent network services such as interactive digital television or video-on-demand, however, require strong quality-of-service (QoS) guarantees, such as low and constant network delay, and consume a considerable amount of network bandwidth. Architectures with a single central server can hardly provide these guarantees and therefore no longer meet the high demands of the next generation of multimedia applications. This research therefore studies new network architectures that can support such service quality. Both peer-to-peer mechanisms, as used for exchanging music files between end users, and server-based solutions, such as distributed caches and content distribution networks (CDNs), are considered. Depending on the service studied and the network technologies and architecture used, centralized algorithms for network design are proposed. These algorithms optimize the placement of the servers or network caches and determine the required capacity of the servers and network links. The dynamic placement of the offered files on the various network elements is adapted to the prevailing state of the network and to the varying request patterns of the end users. Server selection, rerouting of requests, and spreading the load across the entire network are also addressed.
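
    As a rough illustration of the centralized network-design step described above, the sketch below greedily places k caches to minimize the demand-weighted distance from clients to their nearest cache, a k-median-style objective. The objective and all names are assumptions made for the example; the thesis's algorithms additionally dimension server and link capacities.

```python
# Greedy k-median-style placement: repeatedly add the candidate site that
# most reduces total demand-weighted client-to-nearest-cache distance.
# Illustrative only; capacity dimensioning is omitted.

def greedy_placement(dist, demand, k):
    """dist[c][n]: distance from client c to candidate node n; demand[c]: request rate."""
    n_nodes = len(dist[0])
    chosen, best_cost = [], float("inf")
    for _ in range(k):
        best, best_cost = None, float("inf")
        for cand in range(n_nodes):
            if cand in chosen:
                continue
            trial = chosen + [cand]
            cost = sum(d * min(dist[c][s] for s in trial)
                       for c, d in enumerate(demand))
            if cost < best_cost:
                best, best_cost = cand, cost
        chosen.append(best)
    return chosen, best_cost

dist = [[1, 4, 7], [6, 2, 5], [8, 3, 1]]   # 3 clients x 3 candidate sites
print(greedy_placement(dist, demand=[10, 20, 5], k=2))   # -> ([1, 0], 65)
```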

    Mapping Digital Media: Slovenia

    The Mapping Digital Media project examines the global opportunities and risks created by the transition from traditional to digital media. Covering 60 countries, the project examines how these changes affect the core democratic service that any media system should provide: news about political, economic, and social affairs. The transition to digital broadcasting has been relatively fast and painless for Slovenia from a technical perspective, as has the spread of digital media more broadly. With the second-highest penetration of IPTV in Europe, it appears that the Slovenian population has keenly embraced new media platforms at the expense of radio, newspapers, and satellite TV. But the changes and their implications for media diversity and society more broadly have stopped short of anything that could be considered a digital revolution. Key challenges remain, particularly in securing a sustainable future for the quality news sector. From a consumer's and citizen's perspective, digitization has succeeded in expanding the quantity and accessibility of news and information, but not the quality and diversity of content. In combination with the lingering effects of the financial crisis, the independent performance of the media at large is under threat. This remains the overarching challenge for policymakers.

    Proactive Mechanisms for Video-on-Demand Content Delivery

    Video delivery over the Internet is the dominant source of network load worldwide. VoD streaming services such as YouTube, Netflix, and Amazon Video in particular have propelled the proliferation of VoD into many people's everyday lives. VoD allows watching video from a large catalog of content at any time and on a multitude of devices, including smart TVs, laptops, and smartphones. Studies show that many people under the age of 32 grew up with VoD services and have never subscribed to a traditional cable TV service. This shift in video consumption behavior is continuing with an ever-growing number of users. To satisfy this large demand, VoD service providers usually rely on CDNs, which make VoD streaming scalable by operating a geographically distributed network of several hundreds of thousands of servers. Thereby, they deliver content from locations close to the users, which keeps traffic local and enables a fast playback start. CDNs experience heavy utilization during the day and are usually reactive to user demand, which is not optimal: it leads to expensive over-provisioning to cope with traffic peaks, and to overly reactive content eviction that decreases the CDN's performance. However, to sustain future VoD streaming projections with hundreds of millions of users, new approaches are required to increase content delivery efficiency. To this end, this thesis identifies three key research areas that have the potential to address the future demand for VoD content. Our first contribution is the design of vFetch, a privacy-preserving prefetching mechanism for mobile devices. It focuses explicitly on over-the-top (OTT) VoD providers such as YouTube. vFetch learns the user's interest in different content channels and uses these insights to prefetch content on the user's terminal. To do so, it continually monitors the user's behavior and the device's mobile connectivity pattern to allow for resource-efficient download scheduling. Thereby, vFetch illustrates how personalized prefetching can reduce mobile data volume and relieve mobile networks by offloading peak-hour traffic. Our second contribution focuses on proactive in-network caching. To this end, we present the design of the ProCache mechanism, which divides the available cache storage among separate content categories; the available storage is allocated to these divisions based on their contribution to the overall cache efficiency. We propose a general workflow that considers multiple categories of a mixed content workload, in addition to a workflow tailored to music video content, the dominant traffic source on YouTube. Thereby, ProCache shows how content-awareness can contribute to efficient in-network caching. Our third contribution targets the application of multicast to VoD scenarios. Many users request popular VoD content with only small differences in their playback start times, which offers potential for multicast. Therefore, we present the design of the VoDCast mechanism, which leverages this potential to multicast parts of popular VoD content. Thereby, VoDCast illustrates how ISPs can collaborate with CDNs to coordinate on content that should be delivered by ISP-internal multicast.
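
    A minimal sketch of the category-based storage split that ProCache is described as performing: cache space is allocated to content categories in proportion to each category's measured contribution to overall cache efficiency. The proportional rule, the hit-count metric, and all names are assumptions made for illustration, not the mechanism's actual workflow.

```python
# Proportional cache partitioning across content categories, assuming
# "contribution to cache efficiency" is approximated by observed hits.
# Illustrative sketch only.

def partition_storage(total_slots, category_hits):
    """category_hits: dict mapping category name -> observed cache hits."""
    total_hits = sum(category_hits.values()) or 1
    alloc = {c: int(total_slots * h / total_hits)
             for c, h in category_hits.items()}
    # hand out slots lost to integer rounding, largest categories first
    leftover = total_slots - sum(alloc.values())
    for c in sorted(category_hits, key=category_hits.get, reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

print(partition_storage(1000, {"music": 600, "gaming": 250, "news": 150}))
```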

    Video Popularity Metrics and Bubble Cache Eviction Algorithm Analysis

    Video data is the largest type of traffic on the Internet, currently responsible for over 72% of total traffic, with over 883 PB of data per month in 2016. Large-scale CDN solutions are available that offer a variety of distributed hosting platforms for transmitting video over IP. However, the IP protocol, unlike ICN protocol implementations, does not provide the anycast architecture from which a CDN would greatly benefit. In this thesis we introduce a novel cache eviction strategy called "Bubble," as well as two variants of Bubble, that can be applied to anycast protocols to aid in optimising video delivery. Bubble, Bubble-LRU, and Bubble-Insert were found to greatly reduce the quantity of video-associated traffic observed in cache-enabled networks. Additionally, two video popularity distributions provided by British Telecom (BT) were analysed using the Kullback-Leibler and Pearson chi-squared testing methods. This was done to assess which model, Zipf or Zipf-Mandelbrot, is better suited to replicating video popularity distributions; the results conclude that Zipf-Mandelbrot is the more appropriate model. The work concludes that the novel cache eviction algorithms introduced in this thesis provide an efficient caching mechanism for future content delivery networks and that the modelled Zipf-Mandelbrot distribution is a better method for simulating the performance of caching algorithms.
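
    For reference, the Zipf-Mandelbrot model the thesis finds best: the k-th most popular video is requested with probability proportional to 1/(k + q)^s, where q >= 0 is the Mandelbrot shift and s the skew; q = 0 recovers plain Zipf. The sketch below computes the normalized distribution; the parameter values are illustrative, not the fits obtained from the BT traces.

```python
# Zipf-Mandelbrot popularity: P(k) = (k + q)^(-s) / sum_j (j + q)^(-s).
# With q = 0 this reduces to the classic Zipf distribution.

def zipf_mandelbrot(n, s, q):
    weights = [(k + q) ** (-s) for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

popularity = zipf_mandelbrot(n=10000, s=0.8, q=10.0)   # illustrative parameters
print(popularity[0], popularity[99])                    # head vs. tail probability
```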

    Delivery of 360° videos in edge caching assisted wireless cellular networks

    In recent years, 360° videos have become increasingly popular on commercial social platforms and are a vital part of emerging Virtual Reality (VR) applications. However, the delivery of 360° videos requires significant bandwidth resources, which makes streaming such data over mobile networks challenging. The bandwidth required for delivering 360° videos can be reduced by exploiting the fact that users are interested in viewing only a part of the video scene, the requested viewport. As different users may request different viewports, some parts of the 360° scene may be more popular than others. 360° video delivery on mobile networks can be facilitated by caching popular content at edge servers and delivering it from there to the users. However, existing edge caching schemes do not exploit the full potential of the unequal popularity of different parts of a video, which renders them inefficient for caching 360° videos. Motivated by the above, in this thesis we investigate how advanced 360° video coding tools, i.e., encoding into multiple quality layers and tiles, can be utilized to build more efficient wireless edge caching schemes for 360° videos. This encoding allows caching only those parts of the 360° videos that are popular in high quality. To understand how edge caching schemes can benefit from 360° video coding, we compare the caching of 360° videos encoded into multiple quality layers and tiles with layer-agnostic and tile-agnostic schemes. To cope with the fact that the content popularity distribution may be unknown, we use machine learning techniques, for both Video on Demand (VoD) and live streaming scenarios. From our findings, it is clear that taking the aforementioned 360° video characteristics into account increases performance in terms of both the quality of the video delivered to the users and the usage of the backhaul links.
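
    A minimal sketch of tile- and layer-aware edge caching, assuming each cacheable item is a (tile, quality layer) pair with a size and a viewport-popularity-derived utility. Greedy utility-per-byte filling stands in here for the thesis's optimization and learning-based schemes; the item names and numbers are invented for the example.

```python
# Greedy knapsack-style cache filling: items with the highest utility per
# byte (e.g., tiles in popular viewports, in high quality) are cached first.
# Illustrative stand-in for the actual caching schemes.

def fill_cache(items, capacity):
    """items: list of (name, size, utility); returns the cached subset."""
    cached, used = [], 0
    for name, size, util in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if used + size <= capacity:
            cached.append(name)
            used += size
    return cached

tiles = [("t0_base", 2, 9.0), ("t0_enh", 4, 6.5), ("t7_base", 2, 1.2)]
print(fill_cache(tiles, capacity=6))   # popular tile kept in higher quality
```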

    Demand Reduction and Responsive Strategies for Underground Mining

    This thesis presents a demand reduction and responsive strategy for underground mining operations. The thesis starts with a literature review and background research on global energy, coal mining, and the energy-related issues that the mining industry faces every day. The thesis then goes on to discuss underground mine electrical power systems, data acquisition, load profiling, priority ranking, load shedding, and demand-side management in mining. Other areas presented in this thesis are existing energy reduction techniques, including high-efficiency motors, motor speed reduction, and low-energy lighting. During the project, a data acquisition system was designed and installed at a UK Coal colliery and integrated into the mine's existing supervisory control and data acquisition (SCADA) system. Design and installation problems were overcome with the construction of a test meter and laboratory installation and testing. A detailed explanation of the system design and installation is given, along with analysis of the data gathered by the installed system. A comprehensive load profiling and load characterisation system was developed by the author. The load profiling system allows the definition of any type of load profile, covering fixed, variable, and transient load types, and takes each load's output and electrical demand into consideration. The load characterisation system is similarly comprehensive. The LC MATRIX is used with the load profiles and the load characteristics to define off-line schedules. A set of unique real-time decision algorithms is also developed by the author to operate the off-line schedules within the desired objective function. MATLAB simulation is used to develop and test the systems, and results from these tests are presented. The developed load profiling and scheduling systems are applied to the data collected from the mine, and the results, including the cost savings, are also presented.
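
    A minimal sketch of one ingredient named above, priority-ranked load shedding: when site demand exceeds a target, loads are shed in ascending priority order until the target is met. The rule, load names, ratings, and priorities are invented for illustration and do not reflect colliery data or the author's real-time decision algorithms.

```python
# Priority-ranked load shedding: lower-priority loads are shed first until
# the demand target is reached. Illustrative sketch only.

def shed_loads(loads, demand_kw, target_kw):
    """loads: list of (name, rating_kw, priority); lower priority sheds first."""
    shed = []
    for name, rating, _ in sorted(loads, key=lambda x: x[2]):
        if demand_kw <= target_kw:
            break
        demand_kw -= rating
        shed.append(name)
    return shed, demand_kw

plant = [("conveyor_2", 150, 3), ("ventilation", 400, 9), ("workshop", 60, 1)]
print(shed_loads(plant, demand_kw=2000, target_kw=1850))
```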

    Video streaming over the internet using application layer multicast

    Multicast is a very important communication paradigm. However, the deployment of multicast at the IP layer has been very slow, due to development and deployment issues such as ISPs' lack of incentives to update routers and inter-operability problems among multicast routing protocols. Application Layer Multicast (ALM) is a good alternative, in which participating peers organize themselves into a logical overlay network atop the physical links and data is "tunneled" between peers via unicast links. The distinctive feature of ALM compared with IP multicast is that in ALM, data replication and forwarding are performed by participating peers (a.k.a. end systems) rather than by routers, as in Internet Protocol (IP) multicast. This fundamental difference enables ALM to circumvent the development and deployment issues of IP multicast by exploiting the resources (e.g., CPU cycles, storage, and access bandwidth) at the edge of the network. Nevertheless, it also raises other challenges, as peers are not as stable as routers: they may join and depart the ongoing session at will. In this thesis, we address some of these challenges, summarized as follows. First, most current P2P or ALM streaming systems are equipped with a non-scalable membership management algorithm, greatly hindering their applicability to large-scale deployment over the Internet: they either rely on a central entity to handle group membership, or simply assume that all group members are visible to each other, with flooding as the main mechanism used to disseminate membership-related updates to all participating group members. This implies that they are applicable only to small groups. Second, one of ALM's prominent features, flexibility, has not been fully exploited: moving the multicast functionality from a lower layer (the IP layer) to a higher layer (the application layer) can greatly facilitate the integration of Quality-of-Service (QoS) support. The end-to-end philosophy states that it is better to leave such functionality to higher layers, because the heterogeneity among users' requirements can be handled much better by end users than by the network. However, QoS, and in particular reliability, has not been thoroughly addressed in existing ALM schemes. Third, admission control algorithms are essential to the success of any ALM system, due to the fact that in ALM each peer acts as both a client and a server. On the other hand, the heterogeneity among peers, in terms of their computational power, storage capacity, and access bandwidth, further complicates the design of a good admission control scheme. Several contributions are made to address the aforementioned research challenges, outlined as follows. The first contribution is a gossip-based membership management algorithm that is able to collect and disseminate membership-related information under high rates of churn with relatively low communication overhead. The second contribution is a reliability-centric multicast tree construction algorithm that greatly enhances peers' perceived reliability. The third contribution is a QoS-aware tree construction algorithm that accommodates the heterogeneity among peers in access bandwidth, network distance, and reliability. The last contribution is the identification of the admission control problem in overlay video streaming.
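
    A minimal sketch of the basic push pattern behind gossip-based membership management: each peer keeps a bounded partial view and periodically sends a random slice of it to a random neighbour, which keeps the freshest entry per peer. The view size, fan-out, and names are assumptions for illustration; the thesis's algorithm adds churn-specific handling on top of this pattern.

```python
# One gossip push round between two peers. Views map peer_id -> the most
# recent timestamp at which that peer was known to be alive; fresher
# information wins, and views are trimmed back to a fixed bound.
import random

VIEW_SIZE, GOSSIP_LEN = 32, 8    # illustrative bounds

def gossip_push(my_id, my_view, peer_view, now):
    """Push a random slice of my_view into peer_view; returns updated peer_view."""
    sample = dict(random.sample(sorted(my_view.items()),
                                min(GOSSIP_LEN, len(my_view))))
    sample[my_id] = now                       # advertise myself as alive
    for pid, ts in sample.items():
        if ts > peer_view.get(pid, -1):       # keep only the freshest entry
            peer_view[pid] = ts
    while len(peer_view) > VIEW_SIZE:         # drop the stalest entries
        peer_view.pop(min(peer_view, key=peer_view.get))
    return peer_view
```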

    Architecture of participation : the realization of the Semantic Web, and Internet OS

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, February 2008. Includes bibliographical references (p. 65-68).
    The Internet and the World Wide Web (WWW) are becoming an integral part of our daily lives, touching every part of society around the world, in well-developed and developing countries alike. The simple technology and genuine intention of the original WWW, which was to help researchers share and exchange information and data across incompatible platforms and systems, have evolved into something larger and beyond what one could have conceived. While the WWW has reached critical mass, many limitations have been uncovered. To address these limitations, the development of its extension, the Semantic Web, has been underway for more than five years, led by the inventor of the WWW, Tim Berners-Lee, and the technical community. Yet no significant impact has been made, and public awareness of it is surprisingly and unfortunately low. This thesis reviews the development effort of the Semantic Web, examines its progress, which appears to lag behind that of the WWW, and proposes a promising business model to accelerate its adoption.
    by Shelley Lau. S.M.

    Layer-based coding, smoothing, and scheduling of low-bit-rate video for teleconferencing over tactical ATM networks

    This work investigates issues related to the distribution of low-bit-rate video within the context of a teleconferencing application deployed over a tactical ATM network. The main objective is to develop mechanisms that support transmission of low-bit-rate video streams as a series of scalable layers that progressively improve quality. The hierarchical nature of the layered video stream is actively exploited along the transmission path from the sender to the recipients to facilitate transmission. A new layered coder design tailored to video teleconferencing in the tactical environment is proposed. Macroblocks selected due to scene motion are layered via subband decomposition using the fast Haar transform. A generalized layering scheme groups the subbands to form an arbitrary number of layers. As a layering scheme suitable for low-motion video is unsuitable for static slides, the coder adapts the layering scheme to the video content. A suboptimal rate control mechanism is investigated that reduces the κ-dimensional rate-distortion problem, arising from the use of a separate quantizer tailored to each layer, to a one-dimensional problem by constructing a single rate-distortion curve for the coder from a suboptimal set of κ-dimensional quantizer vectors. Rate control is thus simplified to a table lookup in a codebook containing the suboptimal quantizer vectors. The rate controller is well suited to real-time video and limits fluctuations in the bit stream with no corresponding visible fluctuations in perceptual quality. A traffic smoother applied prior to network entry is developed to increase queuing and scheduler efficiency. Three levels of smoothing are studied: frame, layer, and cell interarrival. Frame-level smoothing occurs via rate control at the application. Interleaving and cell interarrival smoothing are accomplished using a leaky bucket mechanism inserted prior to, or within, the adaptation layer.
    http://www.archive.org/details/layerbasedcoding00park
    Lieutenant Commander, United States Navy
    Approved for public release; distribution is unlimited.
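
    A minimal sketch of the fast Haar transform step behind the subband decomposition, shown in 1-D for brevity (the coder applies it to macroblock pixel data). Each pass splits a signal into a low-pass (average) and a high-pass (difference) subband; repeating on the low-pass band yields the hierarchy that the layering scheme groups into layers. Function names and the sample signal are illustrative.

```python
# One decomposition level of the fast Haar transform: pairwise averages
# form the low-pass subband, pairwise half-differences the high-pass one.

def haar_step(signal):
    """One level; len(signal) must be even."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def haar_decompose(signal, levels):
    """Repeatedly split the low-pass band; returns subbands, finest detail first."""
    subbands, low = [], list(signal)
    for _ in range(levels):
        low, high = haar_step(low)
        subbands.append(high)
    subbands.append(low)             # coarsest approximation last
    return subbands

print(haar_decompose([8, 6, 7, 5, 3, 1, 9, 9], levels=2))
```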