245 research outputs found

    Reliable Client Accounting for Hybrid Content-Distribution Networks

    Content distribution networks (CDNs) have started to adopt hybrid designs, which employ both dedicated edge servers and resources contributed by clients. Hybrid designs combine many of the advantages of infrastructure-based and peer-to-peer systems, but they also present new challenges. This paper identifies reliable client accounting as one such challenge. Operators of hybrid CDNs are accountable to their customers (i.e., content providers) for the CDN’s performance. Therefore, they need to offer reliable quality of service and a detailed account of content served. Service quality and accurate accounting, however, depend in part on interactions among untrusted clients. Using the Akamai NetSession client network in a case study, we demonstrate that a small number of malicious clients used in a clever attack could cause significant accounting inaccuracies. We present a method for providing reliable accounting of client interactions in hybrid CDNs. The proposed method leverages the unique characteristics of hybrid systems to limit the loss of accounting accuracy and service quality caused by faulty or compromised clients. We also describe RCA, a system that applies this method to a commercial hybrid content-distribution network. Using trace-driven simulations, we show that RCA can detect and mitigate a variety of attacks, at the expense of a moderate increase in logging overhead.
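    The abstract does not detail RCA's mechanism, but a common building block for making client-reported activity auditable is a tamper-evident, hash-chained log, in which each record's digest covers the previous record so later alterations are detectable. The sketch below is illustrative only; the field names (`client`, `object`, `bytes`) are hypothetical and not taken from the paper.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def append_entry(log, entry):
    """Append an entry whose digest covers the previous record's hash,
    so tampering with any earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev_hash, "hash": digest})
    return log

def verify_log(log):
    """Recompute the whole chain; any mismatch means the log was altered."""
    prev_hash = GENESIS
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"client": "c1", "object": "video.mp4", "bytes": 1048576})
append_entry(log, {"client": "c1", "object": "video.mp4", "bytes": 524288})
assert verify_log(log)
log[0]["entry"]["bytes"] = 10  # e.g., a client inflating its served-bytes count
assert not verify_log(log)
```

    An operator auditing such logs can detect where the chain first breaks, which narrows down the tampered record; this sketches only the tamper-evidence idea, not the paper's full detection and mitigation pipeline.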

    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Integrated master's dissertation in Informatics Engineering. With the ever-increasing Internet usage that marks the start of the new decade, optimizing this world-scale network of computers has become a priority in the technological sphere: the number of users keeps rising, as do the Quality of Service (QoS) demands of applications in domains such as media streaming and virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern is how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of effort to achieve traffic locality, a feature network administrators would prefer and one that could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies are not clearly communicated to them. A system that bridges the interests of the two layers therefore has much potential to help achieve a mutually beneficial scenario. The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. This sharing of infrastructural insight is intended to enable a cooperative environment between the network overlay and underlay during application operations, obtaining better use of infrastructural resources and the consequent minimization of the associated operational costs. This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, when compared to classical alternatives.
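    The ALTO protocol described above (standardized in RFC 7285) exposes network maps that group endpoints into provider-defined identifiers (PIDs). The sketch below parses a sample network-map document, whose structure follows the RFC, and resolves an address to its PID by longest-prefix match; the PID names and prefixes are illustrative, not drawn from the dissertation.

```python
import ipaddress
import json

# A sample ALTO network-map response (structure per RFC 7285); the
# resource tag, PID names, and prefixes here are made up for illustration.
sample = json.loads("""
{
  "meta": {"vtag": {"resource-id": "default-network-map", "tag": "abc123"}},
  "network-map": {
    "PID1": {"ipv4": ["192.0.2.0/24"]},
    "PID2": {"ipv4": ["198.51.100.0/25", "203.0.113.0/24"]}
  }
}
""")

def pid_for(address, network_map):
    """Return the PID whose longest matching prefix covers the address,
    or None if no PID claims it."""
    addr = ipaddress.ip_address(address)
    best_pid, best_len = None, -1
    for pid, prefixes in network_map.items():
        for prefix in prefixes.get("ipv4", []):
            net = ipaddress.ip_network(prefix)
            if addr in net and net.prefixlen > best_len:
                best_pid, best_len = pid, net.prefixlen
    return best_pid

print(pid_for("203.0.113.7", sample["network-map"]))  # → PID2
```

    An application (or tracker) can use such a lookup to prefer peers in its own PID, which is exactly the traffic-locality cooperation the abstract describes; in a real deployment the JSON would be fetched from an ALTO server rather than embedded.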

    Towards Privacy-Preserving and Robust Web Overlays

    Ph.D. thesis (Doctor of Philosophy)

    Challenges, Designs, and Performances of Large-Scale Open-P2SP Content Distribution


    Video Content Distribution over the Internet (Videosisällön jakelu Internetin välityksellä)

    The popularity of multimedia streaming services has created great demand for reliable and effective content delivery over unreliable networks, such as the Internet. Currently, a significant part of Internet data traffic is generated by video streaming applications. Multimedia streaming services are often bandwidth-heavy and prone to delays and other varying network conditions. To address the high demands of real-time multimedia streaming applications, specialized solutions called content delivery networks have emerged. A content delivery network consists of many geographically distributed replica servers, often deployed close to the end-users. This study consists of two parts and a set of interviews. The first part explores the development of video technologies and their relation to network bandwidth requirements. The second part presents the content delivery mechanisms related to video distribution over the Internet. Finally, interviews with selected experts were used to gain more relevant and realistic insights for the first two parts. The results offer a wide overview of content-delivery findings, ranging from streaming techniques to quality of experience, including how progress in video technology may affect future networks and which content delivery models are most used in the modern Internet.

    Understanding smart contracts as a new option in transaction cost economics

    Among the different concepts associated with the term blockchain, smart contracts have been a prominent one, especially popularized by the Ethereum platform. In this study, we unpack this concept within the framework of Transaction Cost Economics (TCE). This institutional economics theory emphasizes the role of distinctive (private and public) contract law regimes in shaping firm boundaries. We propose that widespread adoption of the smart contract concept creates a new option in public contracting, which may give rise to a smart-contract-augmented contract law regime. We discuss the tradeoffs involved in the attractiveness of the smart contract concept for firms and the resulting potential for change in firm boundaries. Based on our new conceptualization, we discuss the potential roles of the three branches of government (judicial, executive, and legislative) in enabling and using this new contract law regime. We conclude the paper by pointing out limitations of the TCE perspective and suggesting future research directions.

    Advanced Digital Auditing

    This open access book discusses the most modern approaches to auditing complex digital systems and technologies. It combines proven auditing approaches, advanced programming techniques, and complex application areas, and covers the latest findings on theory and practice in this rapidly developing field. The book will be a valuable resource especially for those who want to learn more about novel approaches to testing complex information systems and related technologies, such as blockchain and self-learning systems. It is aimed at students and practitioners who are interested in contemporary technology and its managerial implications.