
    P2P Media Streaming with HTML5 and WebRTC

    Video-on-demand (VoD) services, such as YouTube, generate most of the Internet traffic today, and the popularity of video services is growing. Service and CDN providers have to invest more and more in distribution networks, which creates pressure to develop novel approaches. Peer-to-peer (P2P) streaming is a viable alternative that is scalable and can meet the increasing demand. The emerging HTML5 standard introduces APIs that give web browsers the ability to communicate directly with each other in real time. New standards also enable a setup where browsers can act as P2P nodes. This paper reviews whether the new HTML5 and WebRTC standards are a fit for P2P video streaming, evaluates the performance challenges and proposes solutions. Preliminary analysis indicates that HTML5 can be applied to VoD, but there are concerns.
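
    The browser-to-browser path the paper builds on is the WebRTC data channel. As a rough TypeScript sketch (not the paper's implementation), the following wires two RTCPeerConnections together in a single page and pushes one dummy media chunk across a DataChannel. The seeder/leecher naming, the 16 KiB chunk size, and the local signaling shortcut are illustrative assumptions; a real deployment would exchange the offer, answer, and ICE candidates through a signaling server.

    // Minimal sketch: two in-page peers exchanging one media chunk over a
    // WebRTC DataChannel, the transport a browser-based P2P VoD node would use.
    const seeder = new RTCPeerConnection();
    const leecher = new RTCPeerConnection();

    // Local stand-in for a signaling channel: trade ICE candidates directly.
    seeder.onicecandidate = (e) => { if (e.candidate) leecher.addIceCandidate(e.candidate); };
    leecher.onicecandidate = (e) => { if (e.candidate) seeder.addIceCandidate(e.candidate); };

    const channel = seeder.createDataChannel("video-chunks");
    channel.onopen = () => channel.send(new Uint8Array(16 * 1024)); // one 16 KiB chunk (assumed size)

    leecher.ondatachannel = (event) => {
      event.channel.binaryType = "arraybuffer";
      event.channel.onmessage = (msg) => {
        // A real player would append the chunk to a MediaSource SourceBuffer.
        console.log(`received ${(msg.data as ArrayBuffer).byteLength} bytes`);
      };
    };

    // Standard offer/answer handshake, done locally instead of via a server.
    async function connect(): Promise<void> {
      const offer = await seeder.createOffer();
      await seeder.setLocalDescription(offer);
      await leecher.setRemoteDescription(offer);
      const answer = await leecher.createAnswer();
      await leecher.setLocalDescription(answer);
      await seeder.setRemoteDescription(answer);
    }
    connect();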

    Distributed content delivery in a video sharing service (Hajautettu sisällönjakelu videonjakopalvelussa)

    People's behavior on the internet can be highly unpredictable. Through social media, for example, new and interesting content can spread very quickly to a large number of people. If a content provider has not prepared for spikes in demand, or has not been able to, for instance for financial reasons, unexpected interest can degrade the quality of the service or even take it down entirely. As an example of a situation where a content provider cannot fully prepare for a sudden need for resources, consider a small video sharing service. The service is used by a small set of regular users, whose demand a single server can comfortably meet. If, in such a situation, a single video goes viral on social media and the user base momentarily grows to 10-1000 times the previous peak load, the server may crash completely, or at the very least fail to serve a single user. Even a traditional content distribution network (CDN) is of little help when the resulting flash crowd is a strongly location-bound phenomenon: a CDN spreads content over a wider geographical area than a single central server, but when a great deal of capacity is needed in a small, strongly localized area, a CDN is no longer the best solution to the problem. This master's thesis studies how a service provider with very limited resources could build a high-performance video sharing service on top of distributed content delivery, while ensuring that a user spike caused by a possible flash crowd does not bring the service down. The candidate solutions concentrate on distribution across the users themselves, precisely because adding server capacity is to be avoided (see the sketch below). If private individuals with modest resources could run their own services with such an inexpensive solution, many benefits would follow, among them reduced exposure to censorship, better privacy protection, enabling competition, adaptability, and fault tolerance.
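
    As a loose illustration of the user-distributed delivery the thesis investigates, this TypeScript sketch tries peers first and falls back to the origin server only when no peer can supply a chunk. getPeersFor and fetchFromPeer are hypothetical helpers, and the URL scheme is an assumption, not the thesis's design.

    // Hypothetical peer-first chunk fetch with origin fallback, so a flash
    // crowd is absorbed by viewers instead of the single origin server.
    declare function getPeersFor(videoId: string, index: number): Promise<string[]>;
    declare function fetchFromPeer(peer: string, videoId: string, index: number): Promise<ArrayBuffer>;

    async function fetchChunk(videoId: string, index: number): Promise<ArrayBuffer> {
      for (const peer of await getPeersFor(videoId, index)) {
        try {
          return await fetchFromPeer(peer, videoId, index); // e.g. over a WebRTC DataChannel
        } catch {
          // Peer left or lacks the chunk: try the next candidate.
        }
      }
      // Last resort: the origin server, which stays the seed of the swarm.
      const res = await fetch(`/videos/${videoId}/chunks/${index}`);
      return res.arrayBuffer();
    }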

    The Internet Ecosystem: The Potential for Discrimination

    Symposium: Rough Consensus and Running Code: Integrating Engineering Principles into Internet Policy Debates, held at the University of Pennsylvania's Center for Technology Innovation and Competition on May 6-7, 2010. This Article explores how the emerging Internet architecture of cloud computing, content distribution networks, private peering and data-center services can simultaneously foster a perception of unfair network access while enabling significant competition for services, content, and innovation. A key enabler of these changes is the emergence of technologies that lower the barrier to entry in developing and deploying new services. Another is the design of successful Internet applications, which already accommodate the variation in service afforded by the current Internet. Regulators should be aware of the potential for anti-competitive practices in this broader Internet Ecosystem, but should carefully consider the effects of regulation on that ecosystem.

    A Comparative Evaluation of Current HTML5 Web Video Implementations

    HTML5 video is the upcoming standard for playing videos on the World Wide Web. Although its specification has not been fully adopted yet, all major browsers provide the HTML5 video element, and web developers already rely on its functionality. But there are differences between implementations, and inaccuracies that trouble the web developer community. To help improve the current situation we draw a comparison between the most important web browsers. We focus on the event mechanism, since it is essential for interacting with the video element. Furthermore, we compare the seeking accuracy, which is relevant for more specialized applications. Our tests reveal a variety of differences between browser interfaces and show that even simple software solutions may still need third-party plugins in today's browsers.
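
    A probe in the spirit of the paper's seeking-accuracy comparison can be written against the standard media-element API. The TypeScript sketch below is an assumed test harness, not the authors' code: it seeks to a target time, waits for the 'seeked' event, and reports how far the reported currentTime lands from the target. The probe points are arbitrary.

    // Measure how far a browser lands from a requested seek target.
    function measureSeekError(video: HTMLVideoElement, target: number): Promise<number> {
      return new Promise((resolve) => {
        video.addEventListener(
          "seeked",
          () => resolve(Math.abs(video.currentTime - target)),
          { once: true }
        );
        video.currentTime = target; // fires 'seeking' now, 'seeked' when the frame is ready
      });
    }

    // Usage: log the per-browser deviation at a few arbitrary probe points.
    const video = document.querySelector("video")!;
    for (const t of [0.5, 1.5, 30.04]) {
      measureSeekError(video, t).then((err) =>
        console.log(`seek to ${t}s off by ${err.toFixed(4)}s`)
      );
    }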

    Internet Daemons: Digital Communications Possessed

    We’re used to talking about how tech giants like Google, Facebook, and Amazon rule the internet, but what about daemons? Ubiquitous programs that have colonized the Net’s infrastructure, as well as the devices we use to access it, daemons are little known. Fenwick McKelvey weaves together history, theory, and policy to give a full account of where daemons come from and how they influence our lives, including their role in hot-button issues like network neutrality. Going back to Victorian times and the popular thought experiment Maxwell’s Demon, McKelvey charts how daemons evolved from concept to reality, eventually blossoming into the pandaemonium of code-based creatures that today orchestrates our internet. Digging into real-life examples like sluggish connection speeds, Comcast’s efforts to control peer-to-peer networking, and Pirate Bay’s attempts to elude daemonic control (and skirt copyright), McKelvey shows how daemons have been central to the internet, greatly influencing everyday users. Internet Daemons asks important questions about how much control is being handed over to these automated, autonomous programs, and the consequences for transparency and oversight.

    Table of Contents: Abbreviations and Technical Terms; Introduction; 1. The Devil We Know: Maxwell’s Demon, Cyborg Sciences, and Flow Control; 2. Possessing Infrastructure: Nonsynchronous Communication, IMPs, and Optimization; 3. IMPs, OLIVERs, and Gateways: Internetworking before the Internet; 4. Pandaemonium: The Internet as Daemons; 5. Suffering from Buffering? Affects of Flow Control; 6. The Disoptimized: The Ambiguous Tactics of the Pirate Bay; 7. A Crescendo of Online Interactive Debugging? Gamers, Publics, and Daemons; Conclusion; Acknowledgments; Appendix: Internet Measurement and Mediators; Notes; Bibliography; Index.

    Reviews: "Beneath social media, beneath search, Internet Daemons reveals another layer of algorithms: deeper, burrowed into information networks. Fenwick McKelvey is the best kind of intellectual spelunker, taking us deep into the infrastructure and shining his light on these obscure but vital mechanisms. What he has delivered is a precise and provocative rethinking of how to conceive of power in and among networks." (Tarleton Gillespie, author of Custodians of the Internet) "Internet Daemons is an original and important contribution to the field of digital media studies. Fenwick McKelvey extensively maps and analyzes how daemons influence data exchanges across Internet infrastructures. This study insightfully demonstrates how daemons are transformative entities that enable particular ways of transferring information and connecting up communication, with significant social and political consequences." (Jennifer Gabrys, author of Program Earth)

    Web Distributed Computing Systems

    The thesis presents the PhD study of a new approach to distributed computing based on the exploitation of web browsers as clients, using technologies and best practices of JavaScript, AJAX and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed with both AJAX and Adobe® Flex® technologies, embedding a pseudoclient into a web page that hosts the computation in the form of a banner. While users browse the hosting web page, the client side of the system queries the server-side part for a subproblem, called a crunch, computes the solution(s) and sends it back to the server. The whole process is transparent to the user's navigation experience and computer use in general. The thesis shows the feasibility of the system and the good performance that can be achieved, with details of the tests and metrics that have been defined to measure the performance indexes. The new architecture has been tested through these performance metrics by implementing two examples of distributed computing: the cracking of the RSA cryptosystem through the factorization of the public key, and Pearson's correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation. A mathematical model has been developed for this solution. The main goals of the model are to describe and classify different categories of problems on the basis of their feasibility, and to find the limits in the dimensioning of the scheduling system under which this approach remains worthwhile.
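
    The pseudoclient's fetch-compute-report cycle described above can be sketched in a few lines of TypeScript. The endpoint paths, the Crunch shape, and the summation are assumptions standing in for the thesis's actual protocol and workloads (RSA factoring, Pearson correlation), which are not reproduced here.

    // Assumed pseudoclient loop: ask the server for a crunch, solve it,
    // post the result, and idle briefly so the page stays responsive.
    interface Crunch { id: string; payload: number[]; }

    async function crunchLoop(): Promise<void> {
      while (true) {
        const crunch: Crunch = await (await fetch("/api/crunch/next")).json();
        const result = crunch.payload.reduce((acc, x) => acc + x, 0); // stand-in computation
        await fetch(`/api/crunch/${crunch.id}/result`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ result }),
        });
        await new Promise((r) => setTimeout(r, 250)); // keep CPU usage non-invasive
      }
    }
    crunchLoop();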

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow, both in terms of the number of users and delivered services. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently receive an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time, as they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of the users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, thus highlighting the need for a QoS solution reflecting the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture, which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation, in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic types; instead, it adapts QoS policies to each individual’s Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach was to enable a QoS-optimised experience for each Internet user and not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional Diffserv and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
    France Telecom
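
    CAPS's internals are not spelled out in the abstract, so the following TypeScript sketch only illustrates the general idea of per-user adaptive service ratios rather than the thesis's algorithm: each user's services receive a share rebalanced from that user's observed traffic mix, with an unresponsive bulk class (P2P here) capped and its surplus redistributed. The service names and the 0.3 cap are invented for the example.

    // Illustrative per-user weight rebalancing, not the CAPS algorithm itself.
    type Service = "voip" | "video" | "web" | "p2p";

    interface UserProfile {
      shareOfTraffic: Record<Service, number>; // fractions summing to 1
    }

    function rebalance(profile: UserProfile): Record<Service, number> {
      const weights = { ...profile.shareOfTraffic };
      const P2P_CAP = 0.3; // invented cap for the sketch
      if (weights.p2p > P2P_CAP) {
        // Cap the unresponsive bulk class and split its surplus equally
        // among the user's other active services.
        const surplus = weights.p2p - P2P_CAP;
        weights.p2p = P2P_CAP;
        const others = (["voip", "video", "web"] as Service[]).filter((s) => weights[s] > 0);
        for (const s of others) weights[s] += surplus / others.length;
      }
      return weights; // a scheduler would serve this user's queues in these ratios
    }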

    Issues and Evolution of the Chinese Copyright Law facing Digital Environment in a Comparative Law Perspective (US and EU)

    Copyright protection in China's digital environment has been a problem at both the international and national level. Why can Chinese copyright not be properly protected? What rights and enforcement tools do copyright holders have? Under the pressure of US trade retaliation, China ratified the Berne Convention in 1992. The first Chinese Copyright Law and the two later revisions were mainly intended to comply with the Berne Convention. In other words, the Chinese Copyright Law is artificial: it is not the reconciliation of the conflicts of different interests. Copyright enforcement actions have been undertaken by the Chinese copyright authorities in the digital environment. They can be very efficient: major pirating websites are seized and enormous amounts of infringing content are taken down. However, the actions can also be excessive. The digital environment has not only boosted the individual capacity for reproduction and transmission of works, but has also changed the way works can be created. How can the existing copyright be protected, on the one hand, and the individual user's creativity be stimulated, on the other?

    Research into Human Rights Protocol Considerations

