
    Distribuição de conteúdos multimédia na Web/P2P : SeedSeer (Multimedia content distribution on the Web/P2P: SeedSeer)

    Mestrado em Engenharia de Computadores e Telemática (Master's in Computer and Telematics Engineering)
    Since the inception of the Internet there have been many ways to share files, but to this day it is arguable whether any of them can be considered the best. The general public's appetite for multimedia content created the need for new content-distribution platforms such as Google Play, Netflix, the Apple Store, and others. These contents are distributed in a centralized way, which leads to large infrastructure costs for those entities. P2P networks, on the other hand, allow content to be distributed in a decentralized way at low cost; they do, however, require specific applications and technical knowledge, which creates a barrier between the consumer and the content available on these platforms. In this thesis a prototype of a new solution is developed, using upcoming HTML5 standards such as WebSockets and WebRTC to introduce a new perspective on how users can share and consume content. In simple terms, the approach of this thesis is to bring the BitTorrent network into the browser using only JavaScript, taking advantage of its ease of use since no installation is required. Using WebRTC, the thesis focuses on how to grow the browser network in a decentralized way, encouraging content consumption within communities of users in an effort to increase privacy and resistance to censorship, as well as to mitigate the scaling limitations of the solution. The results of this work show that some of the concepts used in this thesis have unique advantages that are relevant to the general public; however, these come at the cost of some inherent limitations that must be mitigated.
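    As a rough illustration of the browser-only approach the abstract describes, the sketch below shows a browser peer using a WebSocket connection for signalling and a WebRTC DataChannel to receive file pieces from another peer. The RTCPeerConnection, RTCDataChannel and WebSocket calls are standard browser APIs, but the signalling URL, the message shapes and the storePiece helper are illustrative assumptions rather than SeedSeer's actual protocol.

```typescript
// Browser-side sketch: join a swarm over WebRTC, using a WebSocket server
// only for signalling. The "wss://example.org/signal" endpoint and the
// {type, description, candidate} message shapes are hypothetical.
const signalling = new WebSocket("wss://example.org/signal");
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const channel = pc.createDataChannel("pieces");
channel.binaryType = "arraybuffer";

// Each DataChannel message is assumed to carry one piece of the shared file.
channel.onmessage = (ev) => storePiece(ev.data as ArrayBuffer);

// Forward our ICE candidates to the remote peer through the signalling server.
pc.onicecandidate = (ev) => {
  if (ev.candidate) {
    signalling.send(JSON.stringify({ type: "ice", candidate: ev.candidate }));
  }
};

// Handle the remote peer's answer and ICE candidates.
signalling.onmessage = async (ev) => {
  const msg = JSON.parse(ev.data);
  if (msg.type === "answer") {
    await pc.setRemoteDescription(msg.description);
  } else if (msg.type === "ice") {
    await pc.addIceCandidate(msg.candidate);
  }
};

// Once signalling is up, create an offer and announce ourselves.
signalling.onopen = async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalling.send(JSON.stringify({ type: "offer", description: offer }));
};

function storePiece(piece: ArrayBuffer): void {
  // Placeholder: a real client would verify the piece hash and assemble the file.
  console.log(`received a piece of ${piece.byteLength} bytes`);
}
```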

    Hajautettu sisällönjakelu videonjakopalvelussa (Distributed content delivery in a video sharing service)

    People's behaviour on the Internet can be highly unpredictable. Through social media, for example, new and interesting items can spread very quickly to a large number of people. If a content provider has not prepared, or has not been able to prepare (for financial reasons, for instance), for spikes in demand, unexpected interest can degrade the quality of the service or even make it unavailable altogether. As an example of a situation in which a content provider cannot fully prepare for a sudden need for resources, consider a small video sharing service. The service is used by a small group of regular users whose demand a single server can easily satisfy. If, in such a situation, a single video goes viral on social media and the user base momentarily grows to 10-1000 times the previous peak load, the server may crash completely, or at the very least be unable to serve a single user. Even a traditional content distribution network (CDN) is of little help when the resulting flash crowd is a strongly location-bound phenomenon: a CDN spreads the content over a larger geographical area than a single central server, but when a lot of capacity is needed in a small, strongly localized region, a CDN is no longer the best solution to the problem. This master's thesis investigates how a service provider with very limited resources could build a high-performance video sharing service using distributed content delivery, while ensuring that a user spike caused by a possible flash crowd does not bring the service down. The candidate solutions focus on distributing delivery to the users themselves, precisely because adding server capacity is not an option. If private individuals with modest resources could run their own services with an inexpensive solution, many benefits would follow, among them reduced exposure to censorship, better privacy, the enabling of competition, adaptability, and fault tolerance.
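    The peer-assisted delivery the abstract argues for can be illustrated with a small sketch: each video chunk is requested from already-connected viewers first and only fetched from the small origin server when no peer can supply it, so a flash crowd largely serves itself. The Peer interface, the ORIGIN URL and the chunk path layout are illustrative assumptions, not the design evaluated in the thesis.

```typescript
// Minimal sketch of peer-first chunk fetching with origin fallback.
const ORIGIN = "https://video.example.org"; // hypothetical origin server

interface Peer {
  // Resolves with the chunk if this peer has it cached, otherwise rejects.
  requestChunk(videoId: string, index: number): Promise<ArrayBuffer>;
}

async function fetchChunk(
  videoId: string,
  index: number,
  peers: Peer[],
): Promise<ArrayBuffer> {
  for (const peer of peers) {
    try {
      return await peer.requestChunk(videoId, index);
    } catch {
      // Peer did not have the chunk or timed out; try the next one.
    }
  }
  // No peer could serve the chunk: fall back to the origin server.
  const res = await fetch(`${ORIGIN}/videos/${videoId}/chunks/${index}`);
  if (!res.ok) throw new Error(`origin returned ${res.status}`);
  return res.arrayBuffer();
}
```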

    Computational Resource Abuse in Web Applications

    Internet browsers include Application Programming Interfaces (APIs) to support Web applications that require complex functionality, e.g., to let end users watch videos, make phone calls, and play video games. Meanwhile, many Web applications employ the browser APIs to rely on the user's hardware to execute intensive computation, access the Graphics Processing Unit (GPU), use persistent storage, and establish network connections. However, providing access to the system's computational resources, i.e., processing, storage, and networking, through the browser creates an opportunity for attackers to abuse those resources. The problem principally occurs when an attacker compromises a Web site and includes malicious code to abuse its visitors' computational resources. For example, an attacker can abuse the user's system networking capabilities to perform a Denial of Service (DoS) attack against third parties. Furthermore, computational resource abuse has not received widespread attention from the Web security community because most current specifications focus on content and session properties such as isolation, confidentiality, and integrity. Our primary goal is to study computational resource abuse and to advance the state of the art by providing a general attacker model, multiple case studies, a thorough analysis of available security mechanisms, and a new detection mechanism. To this end, we implemented and evaluated three scenarios in which attackers use multiple browser APIs to abuse networking, local storage, and computation. Depending on the scenario, an attacker can use browsers to perform Denial of Service attacks against third-party Web sites, create a network of browsers to store and distribute arbitrary data, or use browsers to establish anonymous connections similar to The Onion Router (Tor). Our analysis also includes a real-life resource abuse case found in the wild, i.e., CryptoJacking, where thousands of Web sites forced their visitors to perform crypto-currency mining without their consent. In the general case, the attacks presented in this thesis share the attacker model and two key characteristics: 1) the browser's end user remains oblivious to the attack, and 2) the attacker has to invest few resources in comparison to the resources they obtain. In addition to the analysis of the attacks, we present how existing and upcoming Web security enforcement mechanisms can hinder an attacker, and what their drawbacks are. Moreover, we propose a novel detection approach based on browser API usage patterns. Finally, we evaluate the accuracy of our detection model, after training it with the real-life crypto-mining scenario, through a large-scale analysis of the most popular Web sites.
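    A minimal sketch of the usage-pattern detection idea, under the assumption that calls to sensitive browser APIs are counted per time window and compared against a known abuse profile: here only the Worker constructor is instrumented, and a toy threshold flags mining-like behaviour. The instrumented API, the window length and the threshold are illustrative, not the trained model evaluated in the thesis.

```typescript
// Count uses of one sensitive browser API per time window and flag pages
// whose pattern resembles in-browser crypto-mining.
const apiCounts = new Map<string, number>();

function record(api: string): void {
  apiCounts.set(api, (apiCounts.get(api) ?? 0) + 1);
}

// Wrap the native Worker constructor so every new Worker is counted.
const NativeWorker = window.Worker;
window.Worker = new Proxy(NativeWorker, {
  construct(target, args) {
    record("Worker");
    return Reflect.construct(target, args);
  },
});

// Evaluate the counts once per window (every 10 s in this sketch); spawning
// roughly one worker per CPU core is a common mining pattern on normal pages.
setInterval(() => {
  const workers = apiCounts.get("Worker") ?? 0;
  if (workers >= navigator.hardwareConcurrency) {
    console.warn("API usage pattern resembles in-browser crypto-mining");
  }
  apiCounts.clear();
}, 10_000);
```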

    Adaptivity of 3D web content in web-based virtual museums : a quality of service and quality of experience perspective

    The 3D Web emerged as an agglomeration of technologies that brought the third dimension to the World Wide Web. Its forms span from systems with limited 3D capabilities to complete and complex Web-Based Virtual Worlds. The advent of the 3D Web provided great opportunities to museums by giving them an innovative medium to disseminate collections' information and associated interpretations in the form of digital artefacts and virtual reconstructions, leading to a revolutionary new way of curating, preserving and disseminating cultural heritage and thereby reaching a wider audience. This audience consumes 3D Web material on a myriad of devices (mobile devices, tablets and personal computers) and network regimes (WiFi, 4G, 3G, etc.). Choreographing and presenting 3D Web components across all these heterogeneous platforms and network regimes presents a significant challenge yet to be overcome. The challenge is to achieve a good user Quality of Experience (QoE) across all these platforms, which means that different levels of media fidelity may be appropriate; servers hosting those media types therefore need to adapt to the capabilities of a wide range of networks and devices. To achieve this, the research contributes the design and implementation of Hannibal, an adaptive QoS- and QoE-aware engine that allows Web-Based Virtual Museums to deliver the best possible user experience across those platforms. In order to ensure effective adaptivity of 3D content, this research furthers the understanding of the 3D Web in terms of Quality of Service (QoS), through empirical investigations of how 3D Web components perform and what their bottlenecks are, and in terms of QoE, by studying the subjective perception of fidelity of 3D Digital Heritage artefacts. The results of these experiments led to the design and implementation of Hannibal.
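    In the spirit of the QoS- and QoE-aware adaptation described above, the sketch below picks a fidelity level for a 3D artefact from a measured downlink bandwidth and a coarse device class, then selects the matching mesh from the levels a server offers. The level names, thresholds and data layout are illustrative assumptions, not Hannibal's actual policy.

```typescript
// Choose a fidelity level from network and device capability, then pick the
// corresponding mesh variant of a 3D artefact.
type DeviceClass = "mobile" | "tablet" | "desktop";
type Fidelity = "low" | "medium" | "high";

interface ArtefactLevel {
  fidelity: Fidelity;
  url: string;    // URL of the mesh at this level of detail
  sizeMB: number; // approximate download size
}

function chooseFidelity(bandwidthMbps: number, device: DeviceClass): Fidelity {
  if (device === "mobile" || bandwidthMbps < 2) return "low";
  if (device === "tablet" || bandwidthMbps < 10) return "medium";
  return "high";
}

function selectLevel(
  levels: ArtefactLevel[],
  bandwidthMbps: number,
  device: DeviceClass,
): ArtefactLevel {
  const target = chooseFidelity(bandwidthMbps, device);
  // Fall back to the smallest available level if the target is missing.
  return (
    levels.find((l) => l.fidelity === target) ??
    levels.slice().sort((a, b) => a.sizeMB - b.sizeMB)[0]
  );
}

// Example: a tablet on an 8 Mbit/s link gets the medium-fidelity mesh.
const statue = selectLevel(
  [
    { fidelity: "low", url: "/meshes/statue_low.glb", sizeMB: 2 },
    { fidelity: "medium", url: "/meshes/statue_med.glb", sizeMB: 12 },
    { fidelity: "high", url: "/meshes/statue_high.glb", sizeMB: 60 },
  ],
  8,
  "tablet",
);
console.log(statue.url); // "/meshes/statue_med.glb"
```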