23 research outputs found

    Doctor of Philosophy

    We develop a novel framework for friend-to-friend (f2f) distributed services (F3DS) by which applications can easily offer peer-to-peer (p2p) services among social peers with resource sharing governed by approximated levels of social altruism. Our framework differs significantly from typical p2p collaboration in that it provides a foundation for distributed applications to cooperate based on pre-existing trust and altruism among social peers. With the goal of facilitating the approximation of relative levels of altruism among social peers within F3DS, we introduce a new metric: SocialDistance. SocialDistance is a synthetic metric that combines direct levels of altruism between peers with an altruism decay for each hop to approximate indirect levels of altruism. The resulting multihop altruism levels are used by F3DS applications to proportion and prioritize the sharing of resources with other social peers. We use SocialDistance to implement a novel flash file/patch distribution method, SocialSwarm. SocialSwarm uses the SocialDistance metric as part of its resource allocation to overcome the necessity of (and inefficiency created by) resource bartering among friends participating in a BitTorrent swarm. We find that SocialSwarm achieves an average file download time reduction of 25% to 35% in comparison with standard BitTorrent under a variety of configurations and conditions, including file sizes, maximum SocialDistance, and leech and seed counts. The most socially connected peers yield up to a 47% decrease in download completion time in comparison with average nonsocial BitTorrent swarms. We also use the F3DS framework to implement a novel malware detection application, F3DS Antivirus (F3AV), and evaluate it on the Amazon cloud. We show that with f2f sharing of resources, F3AV achieves a 65% increase in the detection rate of 0- to 1-day-old malware among social peers as compared to the average of individual scanners. Furthermore, we show that F3AV provides the greatest diversity of malware scanners (and thus malware protection) to social hubs: those nodes that are positioned to provide strategic defense against socially aware malware.
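    The SocialDistance idea, direct altruism attenuated by a per-hop decay, can be pictured with a short sketch. The following is a minimal illustration under our own assumptions (an adjacency-map graph, altruism values in [0, 1], a single multiplicative decay factor, and hypothetical function names), not the dissertation's actual algorithm.

```python
from collections import deque

def social_distance(graph, source, decay=0.5):
    """Approximate multihop altruism from `source` to every reachable peer.
    `graph[u]` maps each neighbor v to the direct altruism level between
    u and v, in [0, 1]. Indirect altruism is the product of direct levels
    along a path, attenuated by `decay` at each hop; we keep the best
    value found per peer (a label-correcting search)."""
    best = {source: 1.0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, direct in graph[u].items():
            candidate = best[u] * direct * decay
            if candidate > best.get(v, 0.0):  # strictly better path found
                best[v] = candidate
                queue.append(v)
    return best

# Toy usage: Alice trusts Bob directly; Carol is reached only through Bob.
graph = {"alice": {"bob": 0.9}, "bob": {"alice": 0.9, "carol": 0.8},
         "carol": {"bob": 0.8}}
print(social_distance(graph, "alice"))  # bob: 0.45, carol: 0.45*0.8*0.5 = 0.18
```

    An F3DS application could then proportion upload bandwidth or scan effort to each peer according to these scores, which is the role the abstract assigns to the metric.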

    Static Web content distribution and request routing in a P2P overlay

    The significance of collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. To cope with the limitations introduced by the traditional client-server networking model, most popular Web-based services employ separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally adopted to optimize the content delivery latencies perceived by Web clients. Owing to this prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further increases the resources required from the originating server, as content is also accessed on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study with a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the challenges related to the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet.
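    As a rough illustration of the request-routing building block such an overlay could use, the sketch below hashes a URL onto a ring of peer identifiers, consistent-hashing style. The thesis offers only a preliminary design, so the hash choice, key size, and function names here are our assumptions rather than its actual mechanism.

```python
import hashlib
from bisect import bisect_left

def content_key(url: str, bits: int = 32) -> int:
    """Map a URL to a position on the overlay's identifier ring."""
    digest = hashlib.sha1(url.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def route(peer_ids: list, key: int) -> int:
    """Return the peer responsible for `key`: the first peer identifier
    clockwise from the key (peer_ids must be sorted)."""
    i = bisect_left(peer_ids, key)
    return peer_ids[i % len(peer_ids)]  # wrap around the ring

peers = sorted([101, 4096, 900_000, 2**31])
print(route(peers, content_key("https://example.org/index.html")))
```

    A deterministic mapping like this lets any peer locate cached copies of a page without contacting the origin server, which underpins both the latency and the censorship-resistance arguments.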

    Is Online Copyright Enforcement Scalable?

    This Article examines P2P file sharing and the copyright enforcement problem it has created through the lens of scalability. Part I traces the evolution of peer-to-peer (P2P) networks from Napster to BitTorrent, with a focus on the relative scalability of successive architectures. Part II takes up the difficult question of the scale of P2P infringement and its harms, examining the strategic number-crunching that underlies industry data on piracy, the government's credulous acceptance of that data, and the risk of letting industry hyperbole drive copyright policy and law enforcement priorities. Part III evaluates the efficacy of the Digital Millennium Copyright Act (DMCA) as a policy mechanism for scaling up online copyright enforcement. I argue in Part III that the DMCA has proven to be remarkably scalable for enforcing copyrights in hosted content but has altogether failed to scale in the context of P2P file sharing, leading to the dysfunctional workaround of mass John Doe litigation. Part IV weighs the costs and benefits of more scalable alternatives to mass litigation, including a potential amendment of the DMCA's pre-litigation subpoena provision and a pair of administrative dispute resolution systems, one hypothetical and one real, for streamlining adjudication of P2P infringement claims.

    A credit-based approach to scalable video transmission over a peer-to-peer social network

    The objective of the research work presented in this thesis is to study scalable video transmission over peer-to-peer networks. In particular, we analyse how a credit-based approach and the exploitation of social networking features can play a significant role in the design of such systems. Peer-to-peer systems are nowadays a valid alternative to the traditional client-server architecture for the distribution of multimedia content, as they transfer the workload from the service provider to the final user, with a subsequent reduction of management costs for the former. On the other hand, scalable video coding helps in dealing with network heterogeneity, since the content can be tailored to the characteristics or resources of the peers. First of all, we present a study that evaluates the subjective video quality perceived by the final user under different transmission scenarios. We also propose a video chunk selection algorithm that maximises received video quality under different network conditions. Furthermore, the challenges in building reliable peer-to-peer systems for multimedia streaming include optimising resource allocation and designing mechanisms based on rewards and punishments that give users incentives to share their own resources. Our solution relies on a credit-based architecture, where peers do not interact with users that have proven to be malicious in the past. Finally, if peers are allowed to build a social network of trusted users, they can share the local information they have about the network and gain a more complete understanding of the type of users they are interacting with. Therefore, in addition to a local credit, a social credit or social reputation is introduced. This thesis concludes with an overview of future developments of this research work.
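    One way to picture the credit mechanism is sketched below: a peer blends its own observations of a neighbour with the social reputation reported by trusted friends, and requests chunks only from peers above a threshold. The blending weight, threshold, and names are illustrative assumptions, not the thesis's actual formulas.

```python
def combined_credit(local: float, social: float, weight: float = 0.7) -> float:
    """Blend locally observed credit with friend-reported social
    reputation; both inputs are assumed to lie in [0, 1]."""
    return weight * local + (1 - weight) * social

def select_peers(candidates: dict, threshold: float = 0.4, k: int = 4) -> list:
    """candidates: {peer_id: (local_credit, social_credit)}. Drop peers
    whose blended credit marks them as likely malicious, then keep the
    k most reputable ones for video chunk requests."""
    scored = {p: combined_credit(l, s) for p, (l, s) in candidates.items()}
    trusted = [p for p, c in scored.items() if c >= threshold]
    return sorted(trusted, key=scored.get, reverse=True)[:k]

print(select_peers({"a": (0.9, 0.8), "b": (0.2, 0.1), "c": (0.5, 0.9)}))
# ['a', 'c'] -- peer "b" has misbehaved in the past and is avoided
```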

    Video-on-Demand over Internet: a survey of existing systems and solutions

    Video-on-Demand is a service where movies are delivered to distributed users with low delay and free interactivity. The traditional client/server architecture experiences scalability issues in providing video streaming services, so there have been many proposed systems, mostly based on a peer-to-peer or on a hybrid server/peer-to-peer solution, to solve this issue. This work presents a survey of the currently existing or proposed systems and solutions, based upon a subset of representative systems, and defines selection criteria that allow these systems to be classified. These criteria are based on common questions such as: is it video-on-demand or live streaming; is the architecture based on a content delivery network, peer-to-peer, or both; is the delivery overlay tree-based or mesh-based; is the system push-based or pull-based, single-stream or multi-stream; does it use data coding; and how do the clients choose their peers. Representative systems are briefly described to give a summarized overview of the proposed solutions, and four of them are analyzed in detail. Finally, the most promising solutions for future experiments are identified.
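    The survey's questions amount to a classification record per system; the hypothetical sketch below shows one way to capture them (the field names and example values are ours, not the survey's exact taxonomy labels).

```python
from dataclasses import dataclass

@dataclass
class SystemClassification:
    """One row of a classification grid built from the survey's criteria."""
    service: str         # "video-on-demand" or "live streaming"
    architecture: str    # "CDN", "P2P", or "hybrid"
    overlay: str         # "tree-based" or "mesh-based"
    transfer: str        # "push-based" or "pull-based"
    streams: str         # "single-stream" or "multi-stream"
    data_coding: bool    # does the system use data coding?
    peer_selection: str  # how clients choose their peers

example = SystemClassification(
    service="video-on-demand", architecture="hybrid",
    overlay="mesh-based", transfer="pull-based", streams="multi-stream",
    data_coding=False, peer_selection="tracker-assisted")
```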

    PDRM: a proactive data replication mechanism to improve content mobility support in NDN using location awareness

    The problem of handling user mobility has been around since mobile devices became capable of handling multimedia content and is still one of the most relevant challenges in networking. The conventional Internet architecture is inadequate for dealing with an ever-growing number of mobile devices that are both consuming and producing content. Named Data Networking (NDN) is a network architecture that can potentially overcome this mobility challenge. It supports consumer mobility by design but fails to offer the same level of support for content mobility. Content mobility requires guaranteeing that consumers manage to find and retrieve desired content even when the corresponding producer (or primary host) is not available. In this thesis, we propose PDRM, a Proactive and locality-aware Data Replication Mechanism that increases content availability through data redundancy in the context of the NDN architecture. It exploits available resources from end-users in the vicinity to improve content availability even in the case of producer mobility. Throughout the thesis, we discuss the design of PDRM, evaluate the impact of the number of available providers in the vicinity and of in-network cache capacity on its operation, and compare its performance to Vanilla NDN and two state-of-the-art proposals. The evaluation indicates that PDRM improves content mobility support by using object popularity information and spare resources in the vicinity to drive proactive replication. Results show that PDRM can reduce download times by up to 53.55%, producer load by up to 71.6%, inter-domain traffic by up to 46.5%, and generated overhead by up to 25% compared to Vanilla NDN and the other evaluated mechanisms.
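    A minimal sketch of popularity-driven replication planning in the spirit of PDRM follows. The request-count threshold, the round-robin placement, and all names are simplifying assumptions; PDRM itself also weighs locality and capacity in ways this toy ignores.

```python
def plan_replication(objects: dict, neighbors: dict, threshold: int = 10) -> dict:
    """objects: {content_name: request_count} observed at this node.
    neighbors: {peer_id: spare_bytes} of end-user devices in the vicinity.
    Replicate the most popular objects onto vicinity peers so the content
    stays reachable even after its producer moves away."""
    popular = sorted((n for n, c in objects.items() if c >= threshold),
                     key=objects.get, reverse=True)
    peers = sorted(neighbors, key=neighbors.get, reverse=True)  # most spare first
    if not peers:
        return {}
    # Spread replicas round-robin over the available vicinity peers.
    return {name: peers[i % len(peers)] for i, name in enumerate(popular)}

plan = plan_replication({"/videos/a": 40, "/videos/b": 3, "/news/c": 15},
                        {"peerA": 10_000_000, "peerB": 2_000_000})
print(plan)  # {'/videos/a': 'peerA', '/news/c': 'peerB'}
```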

    Resource management for next generation multi-service mobile network


    Naming and discovery in networks: architecture and economics

    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. On the other hand, the lack of security has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms and to the emergence of a large body of network architecture proposals with no clear understanding of their cross-similarities, compatibility points, unique properties, or architectural performance and soundness. The second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
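    The identifier/locator split the dissertation builds on can be made concrete with a toy resolution service: a stable identifier is registered at whatever locator the endpoint currently has, and discovery translates identifier to locator before a path is set up. This is a generic sketch under our own naming; a real scheme would be distributed, and the dissertation's point is precisely that it should also differentiate service and share costs sensibly.

```python
class DiscoveryService:
    """Flat identifier -> locator table; a stand-in for a distributed
    identifier-based discovery scheme."""

    def __init__(self):
        self._table = {}  # identifier -> set of current locators

    def register(self, identifier: str, locator: str) -> None:
        # An endpoint re-registers whenever it attaches at a new point,
        # so mobility changes its locator but never its identifier.
        self._table.setdefault(identifier, set()).add(locator)

    def resolve(self, identifier: str) -> set:
        # Identifier-based discovery: translate the stable identifier
        # into locators usable for routing.
        return self._table.get(identifier, set())

ds = DiscoveryService()
ds.register("endpoint-42", "192.0.2.7")     # identifier stays fixed
ds.register("endpoint-42", "198.51.100.3")  # endpoint moved
print(ds.resolve("endpoint-42"))
```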

    SDSF: social-networking trust based distributed data storage and co-operative information fusion.

    As of 2014, about 2.5 quintillion bytes of data are created each day, and 90% of the data in the world was created in the last two years alone. This data can be stored on external hard drives, on unused space in peer-to-peer (P2P) networks, or, using the currently more popular approach, in the Cloud. When users store their data in the Cloud, the entire data is exposed to the administrators of the services, who can view and possibly misuse it. With the growing popularity and usage of Cloud storage services like Google Drive, Dropbox, etc., concerns about privacy and security are increasing. Searching for content or documents in this distributed stored data, given the rate of data generation, is a big challenge. Information fusion is used to extract information based on the user's query and to combine the data and learn useful information. This problem is challenging if the data sources are distributed and heterogeneous in nature, where the trustworthiness of the documents may vary. This thesis proposes two innovative solutions to resolve both of these problems. Firstly, to remedy the situation of security and privacy of stored data, we propose an innovative Social-based Distributed Data Storage and Trust based co-operative Information Fusion Framework (SDSF). The main objective is to create a framework that assists in providing a secure storage system while not overloading a single system, using a P2P-like approach. This framework allows users to share storage resources among friends and acquaintances without compromising security or privacy, while enjoying all the benefits that Cloud storage offers. The system fragments the data and encodes it to securely store it on the unused storage capacity of the data owner's friends' resources. The system thus gives the user centralized control over the selection of peers to store the data. Secondly, to retrieve the stored distributed data, the proposed system also performs the fusion from distributed sources. The technique uses several algorithms to ensure the correctness of the query that is used to retrieve and combine the data, improving the accuracy and efficiency of information fusion when combining the heterogeneous, distributed, and massive data on the Cloud for time-critical operations. We demonstrate that the retrieved documents are genuine when trust scores are also used while retrieving the data sources. The thesis makes several research contributions. First, we implement Social Storage using erasure coding. Erasure coding fragments the data, encodes it, and, through the introduction of redundancy, resolves issues resulting from device failures. Second, we exploit the inherent concept of trust that is embedded in social networks to determine the nodes and build a secure network where the fragmented data should be stored, since the social network consists of a network of friends, family, and acquaintances. The trust between friends and the availability of devices allow the user to make an informed choice about where the information should be stored, using 'k' optimal paths. Third, for the retrieval of this distributed stored data, we propose information fusion on distributed data using a combination of Enhanced N-grams (to ensure correctness of the query), Semantic Machine Learning (to extract documents based on context rather than just a bag of words, also considering the trust score), and Map Reduce (NSM) algorithms. Lastly, we evaluate the performance of the distributed storage of SDSF using erasure coding, identify the social storage providers based on trust, and evaluate their trustworthiness. We also evaluate the performance of our information fusion algorithms in distributed storage systems. Thus, the system using the SDSF framework implements the beneficial features of P2P networks and Cloud storage while avoiding the pitfalls of these systems. The multi-layered encryption ensures that all other users, including the system administrators, cannot decode the stored data. The application of the NSM algorithm improves the effectiveness of fusion, since a large number of genuine documents are retrieved for fusion.
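    To make the storage side concrete, here is a toy sketch of fragment-plus-parity encoding and trust-ranked placement. SDSF uses a general erasure code and richer trust and availability logic ('k' optimal paths), so the single XOR parity, the one-fragment-per-friend rule, and all names below are simplified assumptions.

```python
def encode_with_parity(data: bytes, k: int = 4) -> list:
    """Split data into k equal chunks plus one XOR parity chunk, a
    minimal (k, k+1) erasure code: any single lost chunk is recoverable
    by XOR-ing the remaining k."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return chunks + [bytes(parity)]

def place_fragments(fragments: list, friends: dict) -> dict:
    """friends: {friend_id: trust_score}. Store each fragment on a
    distinct friend, most trusted first, mirroring SDSF's use of social
    trust to choose storage peers."""
    ranked = sorted(friends, key=friends.get, reverse=True)
    if len(ranked) < len(fragments):
        raise ValueError("need one trusted friend per fragment")
    return dict(zip(ranked, fragments))

frags = encode_with_parity(b"quarterly report draft", k=4)  # 5 fragments
print(place_fragments(frags, {"ana": 0.9, "bo": 0.6, "cy": 0.8,
                              "dee": 0.7, "eli": 0.5}))
```

    Note that this sketch covers only the redundancy and placement steps; per the abstract, SDSF additionally applies multi-layered encryption so that no storage peer or administrator can read the fragments it holds.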