12 research outputs found

    Using forgetful routing to control BGP table size

    Full text link

    Tragedy of the routing table: An analysis of collective action amongst Internet network operators

    Get PDF
    S.M. thesis. This thesis analyzes and discusses the effectiveness of social efforts to achieve collective action amongst Internet network operators in order to manage the growth of the Internet routing table. The size and rate of growth of the Internet routing table is an acknowledged challenge impeding the scalability of our BGP interdomain routing architecture. While most of the work towards a solution to this problem has focused on architectural improvements, an effort launched in the 1990s called the CIDR Report attempts to incentivize route aggregation using social forces and norms in the Internet operator community. This thesis analyzes the behavior of Internet network operators in response to the CIDR Report from 1997 to 2011 to determine whether the Report was effective in achieving this goal. While it is difficult to causally attribute aggregation behavior to appearance on the CIDR Report, there is a trend for networks to improve their prefix aggregation following an appearance on the Report compared to untreated networks. This suggests that the CIDR Report did affect network aggregation behavior, although the routing table continued to grow. This aggregation improvement is most prevalent early in the study period and becomes less apparent as time goes on. Potential causes of the apparent change in efficacy of the Report are discussed and examined using Ostrom's Common Pool Resource framework. The thesis then concludes with a discussion of options for mitigating routing table growth, including the continued use of community forces to better manage the Internet routing table.

    Analysis of collective action amongst Internet network operators

    Get PDF
    Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 157-163). This thesis analyzes and discusses the effectiveness of social efforts to achieve collective action amongst Internet network operators in order to manage the growth of the Internet routing table. The size and rate of growth of the Internet routing table is an acknowledged challenge impeding the scalability of our BGP interdomain routing architecture. While most of the work towards a solution to this problem has focused on architectural improvements, an effort launched in the 1990s called the CIDR Report attempts to incentivize route aggregation using social forces and norms in the Internet operator community. This thesis analyzes the behavior of Internet network operators in response to the CIDR Report from 1997 to 2011 to determine whether the Report was effective in achieving this goal. While it is difficult to causally attribute aggregation behavior to appearance on the CIDR Report, there is a trend for networks to improve their prefix aggregation following an appearance on the Report compared to untreated networks. This suggests that the CIDR Report did affect network aggregation behavior, although the routing table continued to grow. This aggregation improvement is most prevalent early in the study period and becomes less apparent as time goes on. Potential causes of the apparent change in efficacy of the Report are discussed and examined using Ostrom's Common Pool Resource framework. The thesis then concludes with a discussion of options for mitigating routing table growth, including the continued use of community forces to better manage the Internet routing table. by Stephen Robert Woodrow. S.M. in Technology and Policy.
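
    To make the notion of prefix aggregation concrete, the sketch below uses Python's standard ipaddress module to collapse four contiguous /24 announcements into a single covering /22, which is the kind of consolidation the CIDR Report encourages. The prefixes are invented for illustration, not taken from real announcements.

```python
import ipaddress

# Four contiguous /24s that a single origin AS might announce separately
# (private-range examples for illustration only).
announced = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# collapse_addresses merges adjacent/overlapping prefixes into the
# smallest equivalent set: here, a single covering /22.
aggregated = list(ipaddress.collapse_addresses(announced))

print(f"before: {len(announced)} routes, after: {len(aggregated)} route(s)")
print(aggregated)  # [IPv4Network('10.1.0.0/22')]
```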

    Sequential Aggregate Signatures with Lazy Verification from Trapdoor Permutations

    Get PDF
    Sequential aggregate signature schemes allow n signers, in order, to each sign a message, at a lower total cost than the cost of n individual signatures. We present a sequential aggregate signature scheme based on trapdoor permutations (e.g., RSA). Unlike prior such proposals, our scheme does not require a signer to retrieve the keys of other signers and verify the aggregate-so-far before adding its own signature. Indeed, we do not even require a signer to know the public keys of other signers! Moreover, for applications that require signers to verify the aggregate anyway, our schemes support lazy verification: a signer can add its own signature to an unverified aggregate and forward it along immediately, postponing verification until load permits or the necessary public keys are obtained. This is especially important for applications where signers must access a large, secure, and current cache of public keys in order to verify messages. The price we pay is that our signature grows slightly with the number of signers. We report a technical analysis of our scheme (which is provably secure in the random oracle model), a detailed implementation-level specification, and implementation results based on RSA and OpenSSL. To evaluate the performance of our scheme, we focus on the target application of BGPsec (formerly known as Secure BGP), a protocol designed for securing the global Internet routing system. There is a particular need for lazy verification with BGPsec, since it is run on routers that must process signatures extremely quickly, while being able to access tens of thousands of public keys. We compare our scheme to the algorithms currently proposed for use in BGPsec, and find that our signatures are considerably shorter than nonaggregate RSA (with the same sign and verify times) and have an order of magnitude faster verification than nonaggregate ECDSA, although ECDSA has shorter signatures when the number of signers is small.
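
    The scheme itself is not reproduced here, but the lazy-verification control flow it enables can be sketched: a signer adds its signature over the aggregate-so-far and forwards immediately, deferring verification until keys are available. The toy below uses HMACs as stand-in signatures (not the paper's trapdoor-permutation construction), and all names (Router, pending, the AS keys) are hypothetical.

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # Stand-in "signature": an HMAC, NOT the paper's RSA-based scheme.
    return hmac.new(key, payload, hashlib.sha256).digest()

class Router:
    def __init__(self, name: str, key: bytes):
        self.name, self.key = name, key
        self.pending = []  # unverified aggregates, checked when load permits

    def add_and_forward(self, msg: bytes, aggregate: list):
        # Lazy verification: sign over the aggregate-so-far and forward
        # immediately, *without* first fetching keys and verifying it.
        so_far = b"".join(sig for _, sig in aggregate)
        aggregate.append((self.name, sign(self.key, msg + so_far)))
        self.pending.append((msg, list(aggregate)))  # verify later
        return aggregate

    def verify_pending(self, keys: dict):
        # Deferred check, run once the public keys are available.
        for msg, agg in self.pending:
            so_far = b""
            for name, sig in agg:
                assert hmac.compare_digest(sig, sign(keys[name], msg + so_far))
                so_far += sig
        self.pending.clear()

# Usage: three routers append signatures in sequence, verifying only at the end.
keys = {n: n.encode() * 8 for n in ("AS1", "AS2", "AS3")}
routers = [Router(n, keys[n]) for n in keys]
agg, msg = [], b"BGP path announcement"
for r in routers:
    agg = r.add_and_forward(msg, agg)
routers[-1].verify_pending(keys)  # deferred verification succeeds
print("verified", len(agg), "signatures lazily")
```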

    Effective Wide-Area Network Performance Monitoring and Diagnosis from End Systems.

    Full text link
    The quality of all network application services running on today’s Internet heavily depends on the performance assurance offered by the Internet Service Providers (ISPs). Large network providers in the core of the Internet are instrumental in determining the network properties of their transit services due to their wide-area coverage, especially given the increasing deployment of real-time, latency-sensitive network applications. The end-to-end performance of distributed applications and network services is susceptible to network disruptions in ISP networks. Given the scale and complexity of the Internet, failures and performance problems can occur in different ISP networks. It is important to efficiently identify and proactively respond to potential problems to prevent large damage. Existing work on monitoring and diagnosing network disruptions is ISP-centric, relying on each ISP to set up monitors and perform diagnosis within its own network. This approach is limited, as ISPs are unwilling to reveal such data to the public. My dissertation research developed a lightweight active monitoring system to monitor, diagnose, and react to network disruptions purely from end hosts, which can help customers assess compliance with their service-level agreements (SLAs). This thesis studies research problems from three indispensable aspects: efficient monitoring, accurate diagnosis, and effective mitigation. This is an essential step towards accountability and fairness on the Internet. To fully understand the limitations of relying on ISP data, this thesis first studies and demonstrates the great impact of monitor selection on monitoring quality and the interpretation of results. Motivated by the limitations of the ISP-centric approach, this thesis demonstrates two techniques to diagnose two types of fine-grained causes accurately and scalably by exploring information across the routing and data planes, as well as sharing information among multiple locations collaboratively. Finally, we demonstrate the usefulness of the monitoring and diagnosis results with two mitigation applications. The first is short-term prevention: avoiding problematic routes by exploiting the predictability of historical performance. The second is to scalably compare multiple ISPs across four important performance metrics, namely reachability, loss rate, latency, and path diversity, entirely from end systems without any ISP cooperation. Ph.D. Computer Science & Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64770/1/wingying_1.pd
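
    As a rough illustration of what purely end-host monitoring can look like, the sketch below measures reachability, loss, and latency toward arbitrary targets by timing TCP handshakes, requiring no ISP cooperation or raw-socket privileges. The targets, ports, and probe counts are placeholders, not the dissertation's methodology.

```python
import socket
import statistics
import time

def probe(host: str, port: int = 443, attempts: int = 5, timeout: float = 2.0):
    """Return (loss_rate, median_rtt_ms) toward host:port via TCP connects."""
    rtts = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            # Time the TCP handshake as a latency sample (no ICMP needed).
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # timeout or refusal counts as a lost probe
    loss = 1.0 - len(rtts) / attempts
    return loss, (statistics.median(rtts) if rtts else None)

if __name__ == "__main__":
    for target in ("example.com", "example.org"):  # placeholder targets
        loss, rtt = probe(target)
        rtt_txt = f"{rtt:.1f} ms" if rtt is not None else "n/a"
        print(f"{target}: loss={loss:.0%}, median RTT={rtt_txt}")
```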

    A Deep Learning-based Approach to Identifying and Mitigating Network Attacks Within SDN Environments Using Non-standard Data Sources

    Get PDF
    Modern society is increasingly dependent on computer networks, which are essential to delivering a growing number of key services. With this increasing dependence comes a corresponding increase in global traffic and users. One of the tools administrators use to deal with this growth is Software Defined Networking (SDN). SDN changes the traditional distributed networking design into a more programmable, centralised solution built around the SDN controller. This allows administrators to respond more quickly to changing network conditions. However, this change in paradigm, along with the growing use of encryption, can cause other issues. For many years, security administrators have used techniques such as deep packet inspection and signature analysis to detect malicious activity. These methods are becoming less common as artificial intelligence (AI) and deep learning technologies mature. AI and deep learning have the advantage of being able to cope with 0-day attacks and to detect malicious activity despite the use of encryption and obfuscation techniques. However, SDN reduces the volume of data that is available for analysis with these machine learning techniques. Rather than packet information, SDN relies on flows, which are abstract representations of network activity. Security researchers have been slow to move to this new method of networking, in part because of this reduction in data; however, doing so could have advantages in responding quickly to malicious activity. This research project seeks to reconcile this apparent contradiction by building a deep learning model that achieves comparable results to other state-of-the-art models while using 70% fewer features. This is achieved through the creation of new data from logs, as well as a new risk-based sampling method that prioritises suspect flows for analysis and can successfully prioritise over 90% of malicious flows from leading datasets. Additionally, a mitigation method is provided that works with an SDN solution to automatically mitigate attacks after they are found, showcasing the advantages of closer integration with SDN.
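
    The abstract does not specify the risk-scoring heuristic, so the following is a hypothetical sketch of risk-based sampling over flow records: score each flow cheaply, then hand the top fraction to the expensive classifier first. The fields, weights, and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst_port: int
    packets: int
    bytes: int
    duration_s: float

def risk_score(f: Flow) -> float:
    # Cheap, illustrative heuristics; not the thesis's actual method.
    score = 0.0
    if f.dst_port in (23, 445, 3389):               # commonly attacked services
        score += 2.0
    if f.duration_s > 0 and f.packets / f.duration_s > 100:
        score += 1.5                                 # unusually high packet rate
    if f.packets > 0 and f.bytes / f.packets < 60:
        score += 1.0                                 # many tiny packets (scan-like)
    return score

def prioritise(flows, budget_fraction=0.3):
    """Return the top-scoring fraction of flows for immediate analysis."""
    ranked = sorted(flows, key=risk_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * budget_fraction))]

flows = [
    Flow("10.0.0.5", 443, 120, 150_000, 30.0),   # ordinary HTTPS
    Flow("10.0.0.9", 23, 5_000, 250_000, 10.0),  # telnet, high rate, tiny packets
    Flow("10.0.0.7", 80, 40, 38_000, 12.0),
]
for f in prioritise(flows):
    print("inspect first:", f)
```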

    Efficient Passive Clustering and Gateway Selection in MANETs

    Get PDF
    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid making frequent changes in cluster architecture due to repeated election and re-election of cluster heads and gateways. Our primary objective has been to make Passive Clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.
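
    A minimal sketch of the gateway-limiting idea, under assumed node roles and an invented topology: cluster heads always rebroadcast a flooded packet, while only one gateway candidate is kept per pair of neighbouring clusters, so redundant candidates stay silent and the rebroadcast count drops.

```python
# Hypothetical node roles: "head" (cluster head), "ordinary", or a gateway
# candidate ("gateway?") that hears the heads of two different clusters.
nodes = {
    "A": {"role": "head",     "clusters": {1}},
    "B": {"role": "ordinary", "clusters": {1}},
    "C": {"role": "gateway?", "clusters": {1, 2}},  # bridges clusters 1 and 2
    "D": {"role": "gateway?", "clusters": {1, 2}},  # redundant candidate
    "E": {"role": "head",     "clusters": {2}},
}

def select_rebroadcasters(nodes):
    """Cluster heads always rebroadcast; keep one gateway per cluster pair."""
    rebroadcast = {n for n, info in nodes.items() if info["role"] == "head"}
    seen_pairs = set()
    for n, info in sorted(nodes.items()):            # deterministic tie-break
        if info["role"] == "gateway?" and len(info["clusters"]) >= 2:
            pair = frozenset(info["clusters"])
            if pair not in seen_pairs:               # first candidate wins
                seen_pairs.add(pair)
                rebroadcast.add(n)
    return rebroadcast

print(select_rebroadcasters(nodes))  # {'A', 'C', 'E'} -- D stays silent
```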

    UMSL Bulletin 2017-2018

    Get PDF
    The University Bulletin/Course Catalog, 2017-2018 Edition. https://irl.umsl.edu/bulletin/1081/thumbnail.jp

    UMSL Bulletin 2015-2016

    Get PDF
    https://irl.umsl.edu/bulletin/1000/thumbnail.jp