25 research outputs found

    A server-less architecture for building scalable, reliable, and cost-effective video-on-demand systems.

    Leung Wai Tak. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 58-60). Abstracts in English and Chinese.
    Contents:
    Acknowledgement --- p.I
    Abstract --- p.II
    摘要 --- p.III
    Chapter 1 --- Introduction --- p.1
    Chapter 2 --- Related Works --- p.5
    2.1 --- Previous Works --- p.5
    2.2 --- Contributions of this Study --- p.7
    Chapter 3 --- Architecture --- p.9
    3.1 --- Data Placement Policy --- p.10
    3.2 --- Retrieval and Transmission Scheduling --- p.13
    3.3 --- Fault Tolerance --- p.20
    Chapter 4 --- Performance Modeling --- p.22
    4.1 --- Storage Requirement --- p.22
    4.2 --- Network Bandwidth Requirement --- p.23
    4.3 --- Buffer Requirement --- p.24
    4.4 --- System Response Time --- p.27
    Chapter 5 --- System Reliability --- p.29
    5.1 --- System Failure Model --- p.29
    5.2 --- Minimum System Repair Capability --- p.32
    5.3 --- Redundancy Configuration --- p.35
    Chapter 6 --- System Dimensioning --- p.37
    6.1 --- Storage Capacity --- p.38
    6.2 --- Network Capacity --- p.38
    6.3 --- Disk Access Bandwidth --- p.39
    6.4 --- Buffer Requirement --- p.41
    6.5 --- System Response Time --- p.43
    Chapter 7 --- Multiple Parity Groups --- p.45
    7.1 --- System Failure Model --- p.47
    7.2 --- Buffer Requirement --- p.47
    7.3 --- System Response Time --- p.49
    7.4 --- Redundancy Configuration --- p.49
    7.5 --- Scalability --- p.51
    Chapter 8 --- Conclusions and Future Works --- p.53
    Appendix --- p.55
    A. --- Derivation of the Artificial Admission Delay --- p.55
    B. --- Derivation of the Receiver Buffer Requirement --- p.56
    Bibliography --- p.5

    System designs for bulk and user-generated content delivery in the internet

    This thesis proposes and evaluates new system designs to support two emerging Internet workloads: (a) bulk content, such as downloads of large media files and scientific libraries, and (b) user-generated content (UGC), such as photos and videos that users share online, typically on online social networks (OSNs). Bulk content accounts for a large and growing fraction of today's Internet traffic. Because bandwidth is expensive, delivering bulk content over the Internet is costly. To reduce the cost of bulk transfers, I propose traffic shaping and scheduling designs that exploit the delay-tolerant nature of bulk transfers, allowing ISPs to deliver bulk content opportunistically. I evaluated these proposals through software prototypes and through simulations driven by real-world traces from commercial and academic ISPs, and found that they yield considerable reductions in transit costs or increased link utilization. The amount of UGC that people share online has also grown rapidly in the past few years. Most users share UGC through OSN websites, which can impose arbitrary terms of use, privacy policies, and limitations on the content shared there. To address this problem, I evaluated the feasibility of a system that lets users share UGC directly from the home, enabling them to regain control of the content they share online. Using data from popular OSN websites and a testbed deployed in 10 households, I showed that current trends bode well for the delivery of personal UGC from users' homes.
    I also designed and deployed Stratus, a prototype system that uses home gateways to share UGC directly from the home.
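The core idea of exploiting delay tolerance can be sketched as valley-filling: bulk traffic is pushed into the time slots where foreground traffic leaves spare link capacity. The function below is an illustrative sketch, not the scheduling policy evaluated in the thesis; the capacity, load profile, and function name are all assumptions.

```python
# Sketch: opportunistic (valley-filling) scheduling of a delay-tolerant
# bulk transfer. All names and numbers are illustrative assumptions.

def schedule_bulk(foreground, capacity, volume):
    """Fill a bulk transfer of `volume` into the spare capacity of each
    time slot, preferring the least-loaded slots first.
    Returns the per-slot bulk allocation, or None if it cannot fit."""
    slots = sorted(range(len(foreground)), key=lambda i: foreground[i])
    alloc = [0.0] * len(foreground)
    remaining = volume
    for i in slots:
        spare = capacity - foreground[i]
        take = min(spare, remaining)
        alloc[i] = take
        remaining -= take
        if remaining <= 0:
            return alloc
    return None  # the link cannot absorb the transfer

# Example: a 10-unit link with a nightly load valley.
fg = [2, 1, 1, 3, 8, 9, 9, 7]
plan = schedule_bulk(fg, capacity=10, volume=30)
```

In this example the 30-unit transfer is absorbed entirely by the four least-loaded slots, so the peak-hour slots (and any percentile-based transit bill computed over them) are left untouched.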

    Rule-based expert server system design for multimedia streaming transmission

    Ph.D. (Doctor of Philosophy)

    Container-based video processing (Kontteihin perustuva videoprosessointi)

    In recent years, the development and proliferation of mobile devices and the increasing speed of data communication have accelerated the rapid growth of video creation and consumption. Social media, for instance, has embraced video as an essential part. However, different devices and platforms with varying screen resolutions, video format capabilities and data communication speeds have created new challenges for video transcoding systems. In particular, system scalability is important for ensuring a proper end-user experience by maintaining a high overall transcoding speed despite usage peaks and fluctuating system load. One way to build a scalable, rapidly deployable video transcoding service is to wrap transcoding instances in lightweight, portable containers virtualized at the operating-system level. Since containers share the kernel of the host operating system, new instances can be launched quickly when necessary. This thesis first discusses Linux container technology, its main derivatives and related tools; it then describes utilities that facilitate the orchestration of Linux containers and introduces typical video processing concepts and Internet video technologies. To investigate the advantages of using containers, we implemented a video transcoding service that uses application containers virtualized on the CoreOS operating system and runs on Amazon EC2 (Elastic Compute Cloud) instances. In addition to evaluating the service in terms of functionality, the thesis discusses the strengths and weaknesses of the development process and of using container technologies within the scope of this project.
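The scalability argument above (quickly launching container instances under load) implies some scaling rule. A minimal sketch, assuming a shared job queue and a fixed per-container throughput; the function name `desired_workers` and all thresholds are assumptions, not details from the thesis:

```python
# Sketch: a queue-driven scaling rule for transcoding containers.
# Thresholds and names are illustrative assumptions.

def desired_workers(queue_depth, jobs_per_worker=4,
                    min_workers=1, max_workers=20):
    """Return how many transcoder containers should run so that each
    handles at most `jobs_per_worker` queued jobs, clamped to a fixed
    fleet range."""
    need = -(-queue_depth // jobs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, need))

# A usage spike of 37 queued jobs calls for 10 containers.
fleet = desired_workers(37)
```

An orchestrator would evaluate such a rule periodically and launch or stop container instances to match; because containers share the host kernel, the reaction time is dominated by application start-up rather than VM boot.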

    High-performance and fault-tolerant techniques for massive data distribution in online communities

    The amount of digital information produced and consumed is increasing each day. This rapid growth is driven by advances in computing power and hardware technologies, and by the popularization of user-generated content networks. New hardware can process larger quantities of data, which permits finer-grained results; as a consequence, more data is generated. Scientific applications have evolved to benefit from the new hardware capabilities. This type of application is characterized by requiring large amounts of information as input and by generating significant amounts of intermediate data, resulting in large files. Because this growth appears not only in volume but also in file size, we need efficient and reliable data access mechanisms. Producing such a mechanism is challenging because of the many aspects involved. However, we can leverage the knowledge found in social networks to improve the distribution process. The advent of Web 2.0 has popularized the concept of the social network, which provides valuable knowledge about the relationships among users, and between users and data. Extracting this knowledge and defining ways to actively use it to increase the performance of a system remains an open research direction. Additionally, we must take other existing limitations into account; in particular, the interconnection between the different elements of the system is a key aspect. New technologies such as mass-produced multicore chips, large storage media and better sensors have contributed to the increase in data being produced, but the underlying interconnection technologies have not improved at the same pace. This leads to a situation where vast amounts of data are produced and must be consumed by a large number of geographically distributed users, while the interconnection between the two ends does not match the required needs. In this thesis, we address the problem of efficient and reliable data distribution in geographically distributed systems. We focus on providing a solution that 1) optimizes the use of existing resources, 2) does not require changes to the underlying interconnection, and 3) provides fault tolerance. To achieve these objectives, we define a generic data distribution architecture composed of three main components: a community detection module, a transfer scheduling module, and a distribution controller. The community detection module leverages the information found in the social network formed by the users requesting files and produces a set of virtual communities grouping entities with similar interests. The transfer scheduling module produces a plan that distributes all requested files efficiently, improving resource utilization; for this purpose, we model the distribution problem using linear programming and offer a method that permits solving it in a distributed fashion. Finally, the distribution controller manages the distribution process using the aforementioned schedule, controls the available server infrastructure, and launches new on-demand resources when necessary.
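One simple way to sketch the community detection step is to merge users whose requested-file sets are similar enough under Jaccard similarity; the thesis's actual method may differ, and the function names, threshold, and data below are illustrative assumptions.

```python
# Sketch: grouping requesters into virtual communities by shared file
# interests (Jaccard similarity over requested-file sets, merged
# transitively with union-find). Illustrative only.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def communities(requests, threshold=0.3):
    """requests: dict user -> set of requested file ids.
    Users whose interest sets overlap enough end up in the same
    community (transitively). Returns a list of user sets."""
    users = list(requests)
    parent = {u: u for u in users}

    def find(u):                      # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for i, u in enumerate(users):
        for v in users[i + 1:]:
            if jaccard(requests[u], requests[v]) >= threshold:
                parent[find(u)] = find(v)

    groups = {}
    for u in users:
        groups.setdefault(find(u), set()).add(u)
    return list(groups.values())

# alice and bob share most files, so they form one community.
groups = communities({"alice": {1, 2, 3}, "bob": {2, 3, 4}, "carol": {9}})
```

The scheduler can then plan transfers per community, so that a file popular within a community is fetched once and redistributed among its members.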

    Multimedia data capture with multicast dissemination for online distance learning

    Distance Learning Environments (DLEs) are elusive to define, difficult to implement successfully, and costly due to their proprietary nature. With few open-source solutions available, organizations are forced to invest large amounts of their resources in the procurement and support of proprietary products. Once an organization has chosen a particular solution, it becomes prohibitively expensive to choose another path later in the development process. These challenges can be resolved through the use of open-standards, non-proprietary solutions. This thesis explores the multiple definitions of DLEs, defines metrics of successful implementation, and develops open-source solutions for the delivery of multimedia in the Distance Learning Environment. Using the Java Media Framework API, multiple tools are created to improve the transmission, capture and availability of multimedia content. Development of this technology, illustrated through case studies, leaves a legacy of lectures and knowledge on the Internet to entertain and enlighten future generations.
    http://archive.org/details/multimedidatcapt109456185
    US Navy (USN) author.
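The multicast dissemination the thesis builds on the Java Media Framework can be illustrated at the socket level: one transmission to a group address reaches every subscribed receiver. The sketch below uses Python rather than Java, an arbitrary group address and port, and forces the loopback interface so it runs on a single machine; all of those choices are assumptions for illustration.

```python
import socket

# Illustrative multicast group and port, not values from the thesis.
GROUP, PORT = "224.1.1.1", 5007

# Receiver: join the multicast group on the loopback interface.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
mreq = socket.inet_aton(GROUP) + socket.inet_aton("127.0.0.1")
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
recv.settimeout(5)

# Sender: a single datagram addressed to the group, looped back so
# local subscribers (like the receiver above) also get a copy.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                socket.inet_aton("127.0.0.1"))
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
send.sendto(b"lecture-frame-001", (GROUP, PORT))

data, _ = recv.recvfrom(1024)
```

The appeal for lecture capture is that server load is independent of audience size: the sender transmits each frame once regardless of how many receivers have joined the group.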

    A Content Delivery Model for Online Video

    Online video accounts for a large and growing portion of all Internet traffic. To cut bandwidth costs, it is necessary to use the available bandwidth of users to offload video downloads. Assuming that each user can keep and distribute only one video at any given time, the goal is to determine the global user cache distribution that maximizes peer traffic. The system model contains three parties: viewers, idlers and servers. Viewers are peers who are currently viewing a video. Idlers are peers who are not currently viewing a video but are available to upload to others. Finally, servers can upload any video to any user and have infinite capacity. Every video maintains a first-in-first-out viewer queue containing all the viewers for that video. Each viewer downloads from the peer that arrived before it, with the earliest-arriving peer downloading from the server; thus, the server must upload to one peer whenever the viewer queue is not empty. The aim of the idlers is to act as a server for a particular video, thereby eliminating all server traffic for that video. Using the popularity of videos, the number of idlers and some assumptions about the viewer arrival process, the optimal global video distribution in the user caches can be determined.
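The queueing model in this abstract is simple enough to state directly in code. The sketch below captures only the server-load accounting (one server stream per non-empty viewer queue, eliminated when an idler seeds that video); the function name and example data are illustrative.

```python
# Sketch of the chained-viewer model: each video keeps a FIFO queue of
# current viewers; the head of each non-empty queue downloads from a
# real server unless an idler already seeds that video, and every later
# viewer downloads from the peer that arrived just before it.

def server_streams(viewer_queues, seeded_videos):
    """viewer_queues: dict video -> list of viewers in arrival order.
    seeded_videos: set of videos an idler is seeding (acting as server).
    Returns the number of streams the real servers must upload."""
    return sum(1 for video, queue in viewer_queues.items()
               if queue and video not in seeded_videos)

queues = {"v1": ["p1", "p2", "p3"], "v2": ["p4"], "v3": []}
no_idlers = server_streams(queues, set())    # server feeds v1 and v2
v1_seeded = server_streams(queues, {"v1"})   # an idler removes v1's load
```

Note that server load depends only on how many videos have at least one current viewer, not on how many viewers each has; this is why assigning idlers to the right videos, guided by video popularity, determines the optimal cache distribution.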