120 research outputs found

    Analysis of Routing Worm Infection Rates on an IPV4 Network

    Malicious logic, specifically worms, has caused significant monetary losses to network users in the past. Worms such as Slammer and Code Red have infected thousands of systems and brought the Internet to a standstill. This research examines the ability of the original Slammer worm, the Slammer-based routing worm proposed by Zou et al., and a new Single Slash Eight (SSE) routing worm proposed by this research to infect vulnerable systems within a given address space. This research also investigates the Slammer worm's ability to generate uniform random IP addresses in a given address space. Finally, a comparison is performed of the speed of computing systems available today versus those in use during the original Slammer release. This research finds that both the Slammer-based routing worm and the SSE routing worm are faster than the original Slammer. The random number generator of the original Slammer worm does generate a statistically uniform distribution of addresses within the range under test. Further, this research shows that, despite previous research into the speed of worm propagation, there is a large void in testing worms on the systems available today that needs to be investigated. The computing systems the worms operated on in the past were more than three times slower than today's systems. As the speed of computer systems continues to grow, the speed of worm propagation should increase with it, since worms' scan rates directly relate to their infection rates. As such, the presumed immunity of the future IPv6 network to scanning worms may need to be reexamined.
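The uniformity test described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the LCG constants below are the classic MSVC `rand()` multiplier and increment, used only for illustration, and may differ from the generator any particular worm used.

```python
# Sketch: testing whether a worm-style LCG address scanner covers the
# 32-bit address space uniformly (illustrative constants, see lead-in).

def lcg_addresses(seed, n, mask=0xFFFFFFFF):
    """Yield n pseudo-random 32-bit values from a linear congruential generator."""
    x = seed
    for _ in range(n):
        x = (x * 214013 + 2531011) & mask
        yield x

def bucket_counts(values, buckets=16):
    """Count how many 32-bit values fall into each equal-width bucket."""
    counts = [0] * buckets
    width = (1 << 32) // buckets
    for v in values:
        counts[v // width] += 1
    return counts

def chi_squared(counts):
    """Chi-squared statistic against a uniform expectation."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

counts = bucket_counts(lcg_addresses(seed=12345, n=100_000))
stat = chi_squared(counts)
# For 16 buckets (15 degrees of freedom), a statistic far above ~25
# would suggest a non-uniform scanner.
print(f"chi-squared = {stat:.1f}")
```

A scanner whose statistic stays near the degrees of freedom scans the space evenly, which is exactly the property the abstract reports for Slammer's generator within the range under test.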

    Source-specific routing

    Source-specific routing (not to be confused with source routing) is a routing technique where routing decisions depend on both the source and the destination address of a packet. Source-specific routing solves some difficult problems related to multihoming, notably in edge networks, and is therefore a useful addition to the multihoming toolbox. In this paper, we describe the semantics of source-specific packet forwarding, and we describe the design and implementation of a source-specific extension to the Babel routing protocol - to our knowledge, the first complete implementation of a source-specific dynamic routing protocol - including a disambiguation algorithm that makes our implementation work over widely available networking APIs. We further discuss interoperability between ordinary next-hop and source-specific dynamic routing protocols. Our implementation has seen a moderate amount of deployment, notably as a testbed for the IETF Homenet working group.
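The forwarding semantics can be illustrated with a small sketch (not the Babel implementation; the prefixes and next-hop names are invented): a route matches when a packet's destination and source both fall inside the route's (destination, source) prefix pair, and the most-specific destination wins, with the source prefix as the tie-break.

```python
import ipaddress

# Illustrative source-specific routing table: each entry is
# (destination prefix, source prefix, next hop).
routes = [
    ("0.0.0.0/0",       "0.0.0.0/0",    "isp-a"),  # plain default via ISP A
    ("0.0.0.0/0",       "192.0.2.0/24", "isp-b"),  # ISP B addresses exit via ISP B
    ("198.51.100.0/24", "0.0.0.0/0",    "internal"),
]

def lookup(dst, src):
    """Select the matching route: longest destination prefix first,
    then longest source prefix (tuple comparison encodes that order)."""
    dst, src = ipaddress.ip_address(dst), ipaddress.ip_address(src)
    best = None
    for d, s, hop in routes:
        d, s = ipaddress.ip_network(d), ipaddress.ip_network(s)
        if dst in d and src in s:
            key = (d.prefixlen, s.prefixlen)
            if best is None or key > best[0]:
                best = (key, hop)
    return best[1] if best else None

print(lookup("203.0.113.7", "192.0.2.9"))   # isp-b (source-specific default)
print(lookup("203.0.113.7", "10.0.0.1"))    # isp-a (ordinary default)
print(lookup("198.51.100.5", "192.0.2.9"))  # internal (longer destination wins)
```

The third lookup shows why disambiguation matters: a more-specific destination route beats a more-specific source route, and an implementation over a destination-only forwarding API must install routes in an order that preserves this outcome.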

    Convergence optimization in EVPN VPLS

    The present disclosure relates to convergence optimization in EVPN VPLS. In particular, it optimizes convergence when a PE-CE link fails in a multi-homed site or when a PE in a multi-homed site fails. Conventional approaches to convergence take on the order of hundreds of milliseconds to seconds. The present disclosure moves the detection of the failure event closer to the point of failure and utilizes local repair mechanisms, so that the traffic outage is limited and substantially reduced. The present disclosure also uses an Anycast service label bound to the ES, which ensures direct forwarding of traffic to the destination CE without its being dropped.

    Automatic provisioning in multi-domain software defined networking

    Multi-domain Software Defined Networking (SDN) is the extension of the SDN paradigm to multi-domain networking and the interconnection of different administrative domains. By utilising SDN in core telecommunication networks, benefits are found including improved traffic flow control, fast route updates and the potential for routing centralisation across domains. The Border Gateway Protocol (BGP) was designed three decades ago, and efforts to redesign interdomain routing that would include a replacement or upgrade of the existing BGP have yet to be realised. For the near real-time flow control provided by SDN, the domain boundary presents a challenge that is difficult to overcome when utilising existing protocols. Replacing the existing gateway mechanism, which provides routing updates between the different administrative domains, with a multi-domain centralised SDN-based solution may not be supported by the network operators, so it is a challenge to identify an approach that works within this constraint. In this research, BGP was studied and selected as the inter-domain SDN communication protocol, and it was used as the baseline for a novel framework for automatic multi-domain SDN provisioning. The framework utilises the BGP UPDATE message, with Communities and Extended Communities as the attributes for message exchange. A new application called Inter-Domain Provisioning of Routing Policy in ONOS (INDOPRONOS) was developed and tested to implement the framework. This application was built as an ONOS controller application that collaborates with the existing ONOS SDN-IP application. The implementation was tested to verify the information exchange mechanism between domains, and it successfully carried out the provisioning actions triggered by the exchanged information. The information carried inside the two attributes can successfully be transferred between domains, and it can be used to trigger INDOPRONOS to create and install new alternative intents that override the default intents of the ONOS controller. The intents installed by INDOPRONOS immediately change the route of the existing connection, which demonstrates that a correct request sent from another domain can carry out a modification of network settings inside a domain. Finally, the framework was tested with a bandwidth-on-demand use case, in which a customer network administrator can immediately change the network service bandwidth provided by the service provider, without any intervention from the service provider's administrator, based on an agreed, predefined configuration setting. This ability benefits both customer and service provider, in terms of customer satisfaction and network operations efficiency.
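As a rough sketch of the exchange mechanism (the action codes and trusted ASNs below are hypothetical, not INDOPRONOS's actual encoding), a receiving domain parses the Communities attribute of a BGP UPDATE, written here in the standard `ASN:value` notation, and maps its value to a provisioning action:

```python
# Hypothetical mapping from community values to controller actions;
# the real INDOPRONOS encoding is not reproduced here.
ACTIONS = {
    100: "install-alternative-intent",
    200: "restore-default-intent",
    300: "set-bandwidth-profile",
}

def parse_community(community):
    """Split an 'ASN:value' community string into its two 16-bit halves."""
    asn, value = (int(part) for part in community.split(":"))
    if not (0 <= asn <= 0xFFFF and 0 <= value <= 0xFFFF):
        raise ValueError(f"not a valid standard community: {community}")
    return asn, value

def provisioning_action(community, trusted_asns=frozenset({64512, 64513})):
    """Return the action a BGP UPDATE's community requests, if any."""
    asn, value = parse_community(community)
    if asn not in trusted_asns:
        return None  # ignore communities from unknown domains
    return ACTIONS.get(value)

print(provisioning_action("64512:300"))  # set-bandwidth-profile
print(provisioning_action("65000:300"))  # None (untrusted ASN)
```

Filtering on the sending ASN reflects the framework's constraint that provisioning must only be triggered by requests matching an agreed, predefined configuration between the two domains.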

    ROVER: a DNS-based method to detect and prevent IP hijacks

    Fall 2013. Includes bibliographical references. The Border Gateway Protocol (BGP) is critical to the global internet infrastructure. Unfortunately, BGP routing was designed with limited regard for security. As a result, IP route hijacking has been observed for more than 16 years. Well known incidents include a 2008 hijack of YouTube, loss of connectivity for Australia in February 2012, and an event that partially crippled Google in November 2012. Concern has been escalating as critical national infrastructure is reliant on a secure foundation for the Internet. Disruptions to military, banking, utilities, industry, and commerce can be catastrophic. In this dissertation we propose ROVER (Route Origin VERification System), a novel and practical solution for detecting and preventing origin and sub-prefix hijacks. ROVER exploits the reverse DNS for storing route origin data and provides a fail-safe, best-effort approach to authentication. This approach can be used with a variety of operational models, including fully dynamic in-line BGP filtering, periodically updated authenticated route filters, and real-time notifications for network operators. Our thesis is that ROVER systems can be deployed by a small number of institutions in an incremental fashion and still effectively thwart origin and sub-prefix IP hijacking, despite non-participation by the majority of Autonomous System owners. We then present research results supporting this statement. We evaluate the effectiveness of ROVER using simulations on an Internet-scale topology as well as with tests on real operational systems. Analyses include a study of IP hijack propagation patterns, the effectiveness of various deployment models, critical mass requirements, and an examination of ROVER resilience and scalability.
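The reverse-DNS lookup at the heart of this approach can be sketched as follows. The DNS is stubbed with a dictionary and the record format is simplified (ROVER defines dedicated record types under in-addr.arpa, which are not reproduced here); only the key idea survives: the announced prefix determines a reverse-DNS name, and the record found there names the authorized origin AS.

```python
import ipaddress

def reverse_name(prefix):
    """Reverse-DNS-style name for an IPv4 prefix, one label per whole
    octet covered by the mask (e.g. 203.0.113.0/24 -> 113.0.203.in-addr.arpa)."""
    net = ipaddress.ip_network(prefix)
    octets = str(net.network_address).split(".")
    covered = net.prefixlen // 8
    return ".".join(reversed(octets[:covered])) + ".in-addr.arpa"

# Stub of published authorizations: reverse name -> authorized origin AS.
AUTHORIZED = {
    "113.0.203.in-addr.arpa": 64500,
}

def check_announcement(prefix, origin_as):
    """Classify a BGP announcement against the published origin records."""
    record = AUTHORIZED.get(reverse_name(prefix))
    if record is None:
        return "unknown"  # fail-safe: no record, no verdict
    return "valid" if record == origin_as else "possible-hijack"

print(check_announcement("203.0.113.0/24", 64500))  # valid
print(check_announcement("203.0.113.0/24", 64666))  # possible-hijack
```

The "unknown" branch is what makes the scheme fail-safe and incrementally deployable: announcements for prefixes whose owners do not participate are simply not judged.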

    k-dense Communities in the Internet AS-Level Topology

    Extracting a set of well connected subgraphs as communities from the Internet AS-level topology graph is crucially important for assessing the performance of protocols and routing algorithms, for designing efficient networks, and for evaluating the impact of failures. A huge number of community extraction methods have been proposed in the literature, among which are the k-core decomposition and the k-clique community extraction methods. The former is computationally efficient, but it only discovers coarse-grained and loosely connected communities. On the other hand, k-clique can extract fine-grained and tightly connected communities, but it is NP-hard and therefore impractical for analyzing the Internet AS-level topology graph. In this paper we investigate the Internet structure by exploiting an efficient algorithm for extracting k-dense communities, where a k-clique community implies a k-dense community, which in turn implies a k-core community. The paper provides two innovative contributions. The first is the application of the k-dense method to the Internet AS-level topology graph - obtained from the CAIDA, DIMES and IRL datasets - to identify well-connected communities and to analyze how these are connected to the rest of the graph. The second contribution relates to the study of the most well-connected communities with the support of two additional datasets: a geographical dataset (which lists, for each AS, the countries in which it has at least one geographical location) and the IXP dataset (which maintains, for each IXP, its geographical position and the list of its participants). We found that the k-max-dense community holds a central position in the Internet AS-level topology graph structure, since its 101 ASs (less than 0.3% of Internet ASs) are involved in more than 39% of all Internet connections. We also found that those ASs are connected to at least one IXP and have at least one geographical location in Europe (70.3% of them have at least one additional geographical location outside Europe).
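For intuition about the hierarchy this abstract relies on (k-clique implies k-dense implies k-core), here is a minimal sketch of the coarsest member, k-core decomposition by iterative peeling; the k-dense method peels edges whose endpoints share fewer than k-2 common neighbors, rather than vertices of low degree as below.

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the set of vertices in the k-core of an undirected graph:
    the maximal subgraph in which every vertex has degree >= k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Repeatedly remove vertices of degree < k until none remain.
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < k:
                for u in adj.pop(v):
                    adj[u].discard(v)
                changed = True
    return set(adj)

edges = [("a", "b"), ("b", "c"), ("a", "c"),  # triangle: survives at k=2
         ("c", "d")]                           # pendant vertex: peeled at k=2
print(sorted(k_core(edges, 2)))  # ['a', 'b', 'c']
```

Peeling runs in time roughly linear in the number of edges, which is why the coarse k-core (and the related k-dense refinement) scales to the full AS-level graph where k-clique extraction does not.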

    IP and ATM integration: A New paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many to support advanced data communication, in particular efficient Internet services provision. The expected challenges of multimedia communication, together with the increasing massive utilization of IP-based applications, urgently require a redesign of networking solutions in terms of both new functionalities and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with each other according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial topics/issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering Quality of Service (QoS) requirements for multi-service applications while exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have impacted the fundamental thinking in internetworking IP and ATM. These technologies, architectures, models and implementations will be reviewed in greater detail in addressing possible issues in integrating these architectures in a multi-service, enterprise network, the objective being to make recommendations as to the best means of interworking the two, exploiting the salient features of one another to provide a faster, reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM when a commercial worldwide ATM network is deployed is not addressed, as the details of such a network still remain in too great a state of flux to specify anything concrete. Our research findings culminated in a strong recommendation that the best model to adopt, in light of the impending integrated service requirements of future multi-service environments, is an ATM core with IP at the edges, to realize the best of both technologies in delivering QoS guarantees in a seamless manner to any node in the enterprise.

    Abstracting network policies

    Almost every human activity in recent years relies either directly or indirectly on the smooth and efficient operation of the Internet. The Internet is an interconnection of multiple autonomous networks that operate according to policies agreed upon between various institutions across the world. The network policies guiding an institution’s computer infrastructure both internally (such as firewall relationships) and externally (such as routing relationships) are developed by a diverse group of lawyers, accountants, network administrators and managers, amongst others. Network policies developed by this group of individuals are usually drawn up on a whiteboard in a graph-like format. It is, however, the responsibility of network administrators to translate and configure the various network policies that have been agreed upon. The configuration of these network policies is generally done on physical devices such as routers, domain name servers, firewalls and other middleboxes. The manual configuration process for such network policies is known to be tedious, time consuming and prone to human error, which can lead to various anomalies in the configuration commands. In recent years, many research projects and corporate organisations have to some level abstracted the network management process, with emphasis on network devices (such as Cisco VIRL) or individual network policies (such as Propane). [Continues.]