    A framework for highly reconfigurable P2P trackers

    The increasing use of Peer-to-Peer (P2P) applications, usually driven by selfish behaviors, is posing new challenges to the research community. As contributions of this work, we first devise a general framework underpinning the development of highly reconfigurable P2P trackers. Following that, a novel tracker architecture is proposed and several illustrative and enhanced tracker configurations are described. As a result, the devised solution makes it possible to introduce flexible, programmable and adaptive peer selection mechanisms at the P2P application level. The proposed solution assumes the general framework of one of the most popular P2P solutions, in this case a BitTorrent-like approach. As illustrative examples of the framework's capabilities, several straightforward and easy-to-deploy tracker configurations are presented, including methods for qualitative differentiation of swarm peers and advanced P2P Traffic Engineering mechanisms fostering collaboration between ISPs and P2P applications. Both the framework and the devised tracker configurations are validated through simulation experiments.
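
    To make the notion of a reconfigurable tracker more concrete, the minimal sketch below (purely illustrative, not the paper's actual architecture) shows a BitTorrent-like tracker whose peer-selection mechanism is a pluggable, runtime-swappable function; the Peer fields, the policy names, and the default of 50 returned peers are assumptions made for this example.

        import random
        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class Peer:
            peer_id: str
            ip: str
            port: int
            isp: str             # hypothetical attribute used by the ISP-friendly policy
            upload_score: float  # hypothetical peer-quality metric

        # A peer-selection policy maps (requesting peer, swarm) -> peers to return.
        SelectionPolicy = Callable[[Peer, List[Peer]], List[Peer]]

        def random_selection(requester: Peer, swarm: List[Peer], k: int = 50) -> List[Peer]:
            candidates = [p for p in swarm if p.peer_id != requester.peer_id]
            return random.sample(candidates, min(k, len(candidates)))

        def isp_friendly_selection(requester: Peer, swarm: List[Peer], k: int = 50) -> List[Peer]:
            # Prefer peers inside the requester's ISP to keep traffic local,
            # then prefer peers with a higher upload score.
            candidates = [p for p in swarm if p.peer_id != requester.peer_id]
            candidates.sort(key=lambda p: (p.isp != requester.isp, -p.upload_score))
            return candidates[:k]

        class ReconfigurableTracker:
            def __init__(self, policy: SelectionPolicy):
                self.policy = policy
                self.swarms: Dict[str, List[Peer]] = {}

            def announce(self, info_hash: str, peer: Peer) -> List[Peer]:
                swarm = self.swarms.setdefault(info_hash, [])
                if all(p.peer_id != peer.peer_id for p in swarm):
                    swarm.append(peer)
                return self.policy(peer, swarm)

            def reconfigure(self, policy: SelectionPolicy) -> None:
                # Swap the selection mechanism at runtime without restarting the tracker.
                self.policy = policy

    Under this structure, switching from random selection to an ISP-friendly policy is a single reconfigure call, which is the kind of flexibility the abstract attributes to the proposed framework.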

    Policy-Aware Virtual relay placement for inter-domain path diversity

    Exploiting path diversity to enhance communication reliability is a key desired property in the Internet. While the existing routing architecture is reluctant to adopt changes, overlay routing has been proposed to circumvent the constraints of native routing by employing intermediary relays. However, selfish inter-domain relay placement may violate local routing policies at the intermediary relays and thus affect their economic costs and performance. With the recent advances in network virtualization, it is envisioned that virtual networks should be provisioned in cooperation with infrastructure providers, in a holistic view, without compromising their profits. In this paper, the problem of policy-aware virtual relay placement is first studied to investigate the feasibility of provisioning policy-compliant multipath routing via virtual relays for inter-domain communication reliability. Through evaluation on a real domain-level Internet topology, it is demonstrated that policy-compliant virtual relaying can achieve a protection gain against single-link failures similar to that of its selfish counterpart. It is also shown that the presented heuristic placement strategies perform well, closely approaching the optimal solution.
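
    As a rough illustration of the placement problem (a sketch of a generic greedy heuristic, not the paper's algorithm), the code below picks virtual relays that protect the largest number of source-destination pairs, assuming a predicate is available that says whether a relay offers a policy-compliant, failure-disjoint detour for a pair; all names here are hypothetical.

        from typing import Callable, Hashable, Iterable, List, Set, Tuple

        Pair = Tuple[Hashable, Hashable]   # (source AS, destination AS)

        def greedy_relay_placement(
            candidate_relays: Iterable[Hashable],
            pairs: Iterable[Pair],
            protects: Callable[[Hashable, Pair], bool],
            budget: int,
        ) -> List[Hashable]:
            """Pick up to `budget` relays maximizing the number of protected pairs.

            `protects(relay, pair)` should return True when the relay offers a
            policy-compliant detour that avoids the pair's default path.
            """
            uncovered: Set[Pair] = set(pairs)
            chosen: List[Hashable] = []
            candidates = list(candidate_relays)
            while uncovered and candidates and len(chosen) < budget:
                # Greedy step: pick the relay covering the most still-unprotected pairs.
                best = max(candidates, key=lambda r: sum(protects(r, p) for p in uncovered))
                gain = {p for p in uncovered if protects(best, p)}
                if not gain:
                    break
                chosen.append(best)
                candidates.remove(best)
                uncovered -= gain
            return chosen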

    Autonomous-System Interfaces

    The current Internet is characterized by a growing tension between the "core" (the Internet service providers) and the "edge" (the operators of edge networks and distributed applications). Much of this tension concerns path visibility and control -- where traffic goes (route control), where traffic comes from (path identification and filtering), and what happened in between (monitoring and accountability). We argue that this conflict harms both the core and the edge and that, to resolve it, we have to expose the Autonomous System (AS) as a first-class Internet object. This would map the functional structure of the Internet (the granularity at which edge systems can observe and control their traffic) to its organizational structure (a graph of ASes). We argue that providing a well-defined interface between core and edge ASes offers significant benefits to both of them.
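
    The abstract does not specify the interface itself; purely as an illustration of what exposing an AS as a first-class object might look like, the sketch below defines a hypothetical query/control interface between an edge network and a core AS. Every method name and field is an assumption for the example, not part of the cited work.

        from abc import ABC, abstractmethod
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class PathSegment:
            asn: int                   # AS number traversed
            ingress: str               # ingress point (e.g., IXP), if disclosed
            loss_pct: Optional[float]  # coarse per-AS performance, if disclosed

        class ASInterface(ABC):
            """Hypothetical first-class interface an AS could expose to its neighbors."""

            @abstractmethod
            def get_paths(self, dst_prefix: str) -> List[List[int]]:
                """Path visibility: candidate AS-level paths toward a prefix."""

            @abstractmethod
            def prefer_path(self, dst_prefix: str, as_path: List[int]) -> bool:
                """Route control: request that traffic to the prefix use the given path."""

            @abstractmethod
            def filter_source(self, src_prefix: str) -> bool:
                """Path identification/filtering: block traffic claiming this source."""

            @abstractmethod
            def get_performance(self, dst_prefix: str) -> List[PathSegment]:
                """Monitoring/accountability: per-AS performance along the current path."""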

    Distributed Internet security and measurement

    The Internet has developed into an important economic, military, academic, and social resource. It is a complex network, composed of tens of thousands of independently operated networks called Autonomous Systems (ASes). A significant strength of the Internet's design, one which enabled its rapid growth in terms of users and bandwidth, is that its underlying protocols (such as IP, TCP, and BGP) are distributed. Users and networks alike can attach and detach from the Internet at will, without causing major disruptions to global Internet connectivity. This dissertation shows that the Internet's distributed, and often redundant, structure can be exploited to increase the security of its protocols, particularly BGP (the Internet's interdomain routing protocol). It introduces Pretty Good BGP, an anomaly detection protocol coupled with an automated response that can protect individual networks from BGP attacks. It also presents statistical measurements of the Internet's structure and uses them to create a model of Internet growth. This work could be used, for instance, to test upcoming routing protocols on ensembles of large, Internet-like graphs. Finally, this dissertation shows that while the Internet is designed to be agnostic to political influence, it is actually quite centralized at the country level. With the recent rise in country-level Internet policies, such as nation-wide censorship and warrantless wiretaps, this centralized control could have a significant impact on international reachability.
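
    As a simplified illustration of the prefix-origin anomaly detection that Pretty Good BGP builds on (a sketch of the general idea, not the protocol's actual rules or parameters), the code below flags announcements whose origin AS has not been seen for that prefix within an assumed history window and de-prefers them rather than dropping them.

        import time
        from collections import defaultdict
        from typing import Dict, Optional

        HISTORY_WINDOW = 10 * 24 * 3600   # assumed trust window: origins seen in the last 10 days

        class OriginAnomalyDetector:
            def __init__(self) -> None:
                # prefix -> {origin AS: timestamp of the last announcement with that origin}
                self.history: Dict[str, Dict[int, float]] = defaultdict(dict)

            def on_announcement(self, prefix: str, origin_as: int,
                                now: Optional[float] = None) -> str:
                now = time.time() if now is None else now
                seen = self.history[prefix]
                # Forget origins that have not been observed within the trust window.
                for asn, last in list(seen.items()):
                    if now - last > HISTORY_WINDOW:
                        del seen[asn]
                new_prefix = not seen            # nothing to compare against yet
                known_origin = origin_as in seen
                seen[origin_as] = now
                if new_prefix or known_origin:
                    return "accept"
                # A new origin AS for an already-known prefix resembles a possible
                # prefix hijack: de-prefer the route while it remains suspicious.
                return "depreference"

    De-preferring instead of filtering keeps connectivity if the anomaly turns out to be a legitimate change, which is the automated-response flavor the abstract describes.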

    NPA-BT: A Network Performance Aware BitTorrent Traffic Optimization Mechanism


    Towards effective control of P2P traffic aggregates in network infrastructures

    Nowadays, many P2P applications proliferate in the Internet. The attractiveness of many of these systems relies on the collaborative approach used to exchange large resources without the dependence on, and associated constraints of, centralized approaches where a single server is responsible for handling all client requests. As a consequence, some P2P systems are also interesting and cost-effective approaches to be adopted by content providers and other Internet players. However, there are several coexistence problems between P2P applications and Internet Service Providers (ISPs) due to the unforeseeable behavior of P2P traffic aggregates in ISP infrastructures. In this context, this work proposes a collaborative P2P/ISP system able to underpin the development of novel Traffic Engineering (TE) mechanisms contributing to a better coexistence between P2P applications and ISPs. Using the devised system, two TE methods are described that are able to estimate and control the impact of P2P traffic aggregates on ISP network links. One of the TE methods allows ISP administrators to foresee the expected impact that a given P2P swarm will have on the underlying network infrastructure. The other TE method enables the definition of ISP-friendly P2P topologies, where specific network links are protected from P2P traffic. As a result, the proposed system and associated mechanisms will contribute to improved ISP resource management and foster the deployment of innovative ISP-friendly systems. This work has been partially supported by FCT - Fundação para a Ciência e Tecnologia, Portugal, in the scope of the project UID/CEC/00319/2013.
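
    As a toy illustration of the first TE method (estimating a swarm's expected impact on ISP links), the sketch below spreads an assumed per-pair exchange rate over the ISP links on the paths between peer attachment points. This is a back-of-the-envelope model under uniform-exchange assumptions, not the paper's estimator; the routing paths, peer locations and rate are hypothetical inputs.

        from collections import defaultdict
        from itertools import combinations
        from typing import Dict, List, Tuple

        Link = Tuple[str, str]   # an ISP link, e.g. ("R1", "R2")

        def estimate_link_load(
            peer_locations: Dict[str, str],            # peer id -> attachment router
            paths: Dict[Tuple[str, str], List[Link]],  # (router A, router B) -> links on path
            per_pair_rate_mbps: float,                 # assumed average exchange rate per peer pair
        ) -> Dict[Link, float]:
            """Expected extra load each ISP link carries if all peer pairs exchange data."""
            load: Dict[Link, float] = defaultdict(float)
            for p1, p2 in combinations(peer_locations, 2):
                a, b = peer_locations[p1], peer_locations[p2]
                if a == b:
                    continue  # traffic stays local to one attachment point
                for link in paths.get((a, b), paths.get((b, a), [])):
                    load[link] += per_pair_rate_mbps
            return dict(load)

        # Example: peers on routers R1 and R2, one link between them.
        # estimate_link_load({"p1": "R1", "p2": "R2"}, {("R1", "R2"): [("R1", "R2")]}, 0.5)
        # -> {("R1", "R2"): 0.5}

    The second TE method (ISP-friendly topologies) would then bias peer selection away from links whose estimated load exceeds a protection threshold.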

    A pragmatic approach toward securing inter-domain routing

    Internet security poses complex challenges at different levels, where even the basic requirement of availability of Internet connectivity sometimes becomes a conundrum. Recent Internet service disruption events have made the vulnerability of the Internet apparent and exposed the current limitations of Internet security measures as well. Usually, the main cause of such incidents, even in the presence of the security measures proposed so far, is the intended or unintended exploitation of loopholes in the protocols that govern the Internet. In this thesis, we focus on the security of two protocols that were conceived with little or no security mechanisms but play a key role both in the present and the future of the Internet, namely the Border Gateway Protocol (BGP) and the Locator Identifier Separation Protocol (LISP). The BGP protocol, being the de facto inter-domain routing protocol of the Internet, plays a crucial role in current communications. Due to the lack of any intrinsic security mechanism, it is prone to a number of vulnerabilities that can result in partial paralysis of the Internet. In light of this, numerous security strategies have been proposed, but none of them was pragmatic enough to be widely accepted, and only minor security tweaks have found their way into adoption. Even the recent IETF Secure Inter-Domain Routing (SIDR) Working Group (WG) efforts, including the Resource Public Key Infrastructure (RPKI), Route Origin Authorizations (ROAs), and BGP Security (BGPSEC), do not address policy-related security issues such as Route Leaks (RL). Route leaks occur due to violations of the export routing policies among Autonomous Systems (ASes). Route leaks not only have the potential to cause large-scale Internet service disruptions but can result in traffic hijacking as well. In this part of the thesis, we examine the route leak problem and propose pragmatic security methodologies which a) require no changes to the BGP protocol, b) depend neither on third-party information nor on third-party security infrastructure, and c) are self-beneficial regardless of their adoption by other players. Our main contributions in this part of the thesis include a) a theoretical framework which, under realistic assumptions, enables a domain to autonomously determine whether a particular received route advertisement corresponds to a route leak, and b) three incremental detection techniques, namely Cross-Path (CP), Benign Fool Back (BFB), and Reverse Benign Fool Back (R-BFB). Our strength resides in the fact that these detection techniques require only the analysis of in-house control-plane, data-plane, and direct neighbor relationship information. We evaluate the performance of the three proposed route leak detection techniques both through real-time experiments and through large-scale simulations. Our results show that the proposed detection techniques achieve high success rates in countering route leaks in different scenarios. The motivation behind the LISP protocol has shifted over time from solving routing scalability issues in the core Internet to a set of vital use cases for which LISP stands as a technology enabler. The IETF's LISP WG has recently started to work toward securing LISP, but the protocol still lacks end-to-end mechanisms for securing the overall registration process on the mapping system, ensuring RLOC and EID authorization. As a result, LISP is unprotected against attacks, such as RLOC spoofing, that can cripple even its basic functionality. In this part of the thesis, we address the above-mentioned issues and propose practical solutions that counter them. Our solutions take advantage of the low technological inertia of the LISP protocol. The changes proposed for the LISP protocol and the utilization of existing security infrastructure in our solutions enable resource authorizations and lay the foundation for the needed end-to-end security.
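
    To make the notion of an export-policy violation concrete, the sketch below implements a generic "valley-free" check that flags an AS path as a potential route leak when a route learned from a provider or peer is propagated onward to another provider or peer. This is an illustration of what constitutes a leak, not the CP, BFB, or R-BFB techniques from the thesis; the relationship labels are assumed inputs.

        from typing import Dict, List, Tuple

        # rel[(u, v)] is u's relationship to v: "customer" (u buys transit from v),
        # "provider" (v buys transit from u), or "peer". These labels are assumed known.
        Rel = Dict[Tuple[int, int], str]

        def is_potential_route_leak(propagation_path: List[int], rel: Rel) -> bool:
            """Flag a path that violates valley-free export rules.

            `propagation_path` lists ASes in propagation order (origin AS first).
            Valid paths consist of customer-to-provider edges, then at most one
            peer edge, then provider-to-customer edges.
            """
            descending = False  # True once the path crosses a peer or provider-to-customer edge
            for u, v in zip(propagation_path, propagation_path[1:]):
                r = rel.get((u, v))
                if r == "customer":          # u exported the route up to its provider v
                    if descending:
                        return True          # re-exported upward after descending: a leak
                elif r == "peer":            # u exported the route sideways to its peer v
                    if descending:
                        return True
                    descending = True
                elif r == "provider":        # u exported the route down to its customer v
                    descending = True
                else:
                    return True              # unknown relationship: treat as suspicious
            return False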

    The regulation of Internet interconnection: assessing network market power

    Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 59-64). Interconnection agreements in the telecommunications industry have always been constrained by regulation. Internet interconnection has not received the same level of scrutiny. Recent debates regarding proposed mergers, network neutrality, Internet peering, and last-mile competition have generated much discussion about whether Internet interconnection regulation is warranted. In order to determine whether such regulation is necessary, policymakers need appropriate metrics to help gauge a network provider's market power. Since Internet interconnection agreements are typically not made public, policymakers must instead rely on proxy metrics and inferred interconnection relationships. Alessio D'Ignazio and Emanuele Giovannetti have attempted to address this challenge by proposing a standard set of metrics that are based on, and assessed using, network topology data. They suggest two metrics, referred to as customer cone and betweenness, as proxies for market size and market power. This thesis focuses on the efficacy of the proposed customer cone and betweenness metrics as proxies for network market size and market power. by Elisabeth M. Maida. S.M.; S.M. in Technology and Policy.
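
    As a rough sketch of how the two proxy metrics could be computed from an inferred AS-level topology (an illustration, not the thesis's exact methodology), the code below derives each AS's customer cone by following provider-to-customer edges and computes betweenness centrality on the undirected AS graph using the networkx library; the relationship-labeled graphs are assumed inputs.

        from typing import Dict, Set

        import networkx as nx

        def customer_cones(p2c: nx.DiGraph) -> Dict[int, int]:
            """Size of each AS's customer cone.

            `p2c` is a directed graph with an edge (provider -> customer) for every
            inferred provider-to-customer relationship.
            """
            cones: Dict[int, int] = {}
            for asn in p2c.nodes:
                # The cone is the AS itself plus everything reachable via customer edges.
                reachable: Set[int] = nx.descendants(p2c, asn)
                cones[asn] = 1 + len(reachable)
            return cones

        def betweenness(as_graph: nx.Graph) -> Dict[int, float]:
            """Betweenness centrality over the undirected AS-level topology."""
            return nx.betweenness_centrality(as_graph)

        # Tiny example: AS 1 provides transit to ASes 2 and 3; AS 2 provides transit to AS 3.
        # g = nx.DiGraph([(1, 2), (1, 3), (2, 3)])
        # customer_cones(g) -> {1: 3, 2: 2, 3: 1}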

    Security Auditing and Multi-Tenancy Threat Evaluation in Public Cloud Infrastructures

    Cloud service providers typically adopt the multi-tenancy model to optimize resource usage and achieve the promised cost-effectiveness. However, multi-tenancy in the cloud is a double-edged sword. While it enables cost-effective resource sharing, it increases security risks for the hosted applications. Indeed, multiplexing virtual resources belonging to different tenants on the same physical substrate may lead to critical security concerns such as cross-tenant data leakage and denial of service. Therefore, there is a pressing need to foster transparency and accountability in multi-tenant clouds. In this regard, auditing the security compliance of the cloud provider's infrastructure against standards, regulations and customers' policies on one side, and evaluating the multi-tenancy threat on the other side, take on increasing importance in building trust between cloud stakeholders. However, auditing virtual infrastructures is challenging due to the dynamic and layered nature of the cloud. In particular, inconsistencies in network isolation mechanisms across the cloud stack layers (e.g., the infrastructure management layer and the implementation layer) may lead to virtual network isolation breaches that might be undetectable at a single layer. Additionally, evaluating multi-tenancy threats in the cloud requires systematic methods and effective metrics, which are largely missing in the literature. This thesis addresses the aforementioned challenges and limitations and is organized around two main topics, namely security compliance auditing and multi-tenancy threat evaluation in the cloud. Our objective in the first topic is to propose an automated framework that allows auditing the cloud infrastructure from a structural point of view, focusing on virtualization-related security properties and on consistency between multiple control layers. To this end, we devise a multi-layered model of each cloud stack layer's view in order to capture the semantics of the audited data and its relation to consistent isolation requirements. Furthermore, we integrate our auditing system into OpenStack and present experimental results on assessing several properties related to virtual network isolation and consistency. Our results show that our approach can successfully detect virtual network isolation breaches in large OpenStack-based data centers in reasonable time. The objective of the second topic is to derive security metrics for evaluating multi-tenancy threats in public clouds. To this end, we propose security metrics that quantify the proximity between tenants' virtual resources inside the cloud. These metrics are defined based on the configuration and deployment of a cloud, such that a cloud provider may apply them to evaluate and mitigate co-residency threats. To demonstrate the effectiveness and usefulness of our metrics, we conduct case studies based on both real and synthetic cloud data. We further perform extensive simulations using CloudSim and well-known VM placement policies. The results show that our metrics effectively capture the impact of potential attacks and the abnormal degrees of co-residency between a victim and potential attackers, which paves the way for the design of effective mitigation solutions against co-residency attacks.
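
    As a toy illustration of a co-residency proximity metric (a sketch of the general idea, not the thesis's actual metrics), the code below computes, for a victim tenant, the fraction of its VMs that share a physical host with VMs of each other tenant; the placement mapping is an assumed input that a provider would obtain from its inventory.

        from collections import defaultdict
        from typing import Dict, List, Set, Tuple

        # placement: VM id -> (tenant id, physical host id)
        Placement = Dict[str, Tuple[str, str]]

        def co_residency_degree(placement: Placement, victim: str) -> Dict[str, float]:
            """Fraction of the victim's VMs that share a host with each other tenant."""
            vms_by_host: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
            for vm, (tenant, host) in placement.items():
                vms_by_host[host].append((tenant, vm))

            victim_vms = [vm for vm, (tenant, _) in placement.items() if tenant == victim]
            exposed: Dict[str, Set[str]] = defaultdict(set)  # other tenant -> co-resident victim VMs
            for host, residents in vms_by_host.items():
                tenants_here = {t for t, _ in residents}
                if victim not in tenants_here:
                    continue
                victim_here = [vm for t, vm in residents if t == victim]
                for other in tenants_here - {victim}:
                    exposed[other].update(victim_here)

            total = len(victim_vms) or 1
            return {other: len(vms) / total for other, vms in exposed.items()}

        # Example: victim "A" has vm1 on h1 and vm2 on h2; tenant "B" has vm3 on h1.
        # co_residency_degree({"vm1": ("A", "h1"), "vm2": ("A", "h2"), "vm3": ("B", "h1")}, "A")
        # -> {"B": 0.5}

    A provider could feed such a score into its VM placement policy to cap how exposed any single tenant becomes to another, which is the mitigation direction the abstract points to.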