
    Seedemu: The Seed Internet Emulator

    I studied and experimented with the idea of building an emulator for the Internet. While various options already exist for such a task, none of them treats emulation of the entire Internet as an important design goal. Those emulators and simulators handle small-scale networks well but lack the ability to handle large networks, mainly because they (a) cannot run many nodes, or require very powerful hardware to do so, (b) lack convenient ways to build a large emulation, and (c) lack reusability: once something is built, it is very hard to reuse it in another emulation. I explored, in the context of for-education Internet emulators, different ways to overcome these limitations. I came up with a framework that enables one to create an emulation using code. The framework provides basic components of the Internet, such as routers, servers, networks, Internet exchanges, autonomous systems, and DNS infrastructure. Building an emulation with code makes it easy to construct complex topologies, since one can use common control structures like loops, subroutines, and functions. The framework exploits the idea of "layers." The idea of "layers" is analogous to layers in image-processing software: each layer contains part of the image (in this case, part of the emulation), and the layers must be "rendered" to obtain the final result. There are two types of layers, base layers and service layers. Base layers describe the "base" of the topologies: how routers, servers, and networks are connected, and how autonomous systems peer with each other. Service layers describe high-level services on the Internet; examples include web servers, DNS servers, Ethereum nodes, and botnet nodes. No layer is tied to any other layer, meaning each layer can be individually manipulated, exported, and reused in another emulation. One can build an entire DNS infrastructure, complete with root DNS and TLD DNS, and deploy it on any base layer, even ones with vastly different underlying topologies. The result of rendering the layers is a set of data structures that represent the objects in a network emulation, such as hosts, routers, and networks. These representations can then be "compiled" by a compiler into something one can execute; the main target platform of the framework is Docker. The source of the SEEDEMU project is publicly available on GitHub: https://github.com/seed-labs/seed-emulator
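    The emulation-as-code workflow described above can be illustrated with a short sketch. The class and method names below (Emulator, Base, Ebgp, Docker, createAutonomousSystem, and so on) follow the general style of the SEEDEMU Python framework but are assumptions here and may not match the published API exactly.

```python
# Hypothetical sketch of building a small emulation with code.
# Class and method names are illustrative; consult the SEEDEMU
# repository for the actual API.
from seedemu.core import Emulator
from seedemu.layers import Base, Routing, Ebgp
from seedemu.compiler import Docker

emu = Emulator()
base = Base()
routing = Routing()
ebgp = Ebgp()

# An Internet exchange where autonomous systems can peer.
base.createInternetExchange(100)

# Create several stub autonomous systems in a loop: the point of
# "emulation as code" is that ordinary control structures scale the topology.
for asn in (150, 151, 152):
    asys = base.createAutonomousSystem(asn)
    asys.createNetwork('net0')
    asys.createRouter('router0').joinNetwork('net0').joinNetwork('ix100')
    asys.createHost('host0').joinNetwork('net0')

# Peer the stub ASes with each other at the exchange.
ebgp.addPrivatePeering(100, 150, 151)
ebgp.addPrivatePeering(100, 151, 152)

# Layers are independent; they are combined only when rendered.
for layer in (base, routing, ebgp):
    emu.addLayer(layer)

emu.render()                       # resolve layers into concrete nodes and networks
emu.compile(Docker(), './output')  # emit a Docker-based emulation
```

    Because the base, routing, and peering layers stay independent until render time, the same loop-built topology could be reused under a completely different service layer, which is the reusability property the abstract emphasizes.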

    Toward Automated Network Management and Operations.

    Network management plays a fundamental role in the operation and well-being of today's networks. Despite the best efforts of existing support systems and tools, management operations in large service provider and enterprise networks remain mostly manual. Due to the larger scale of modern networks, more complex network functionalities, and higher network dynamics, human operators are increasingly short-handed. As a result, network misconfigurations are frequent and can result in violated service-level agreements and degraded user experience. In this dissertation, we develop various tools and systems to understand, automate, augment, and evaluate network management operations. Our thesis is that by introducing formal abstractions, like deterministic finite automata, Petri nets, and databases, we can build new support systems that systematically capture domain knowledge, automate network management operations, enforce network-wide properties to prevent misconfigurations, and simultaneously reduce manual effort. The theme of our systems is to build a knowledge plane based on the proposed abstractions, allowing network-wide reasoning and guidance for network operations. More importantly, the proposed systems require no modification to the existing Internet infrastructure and network devices, simplifying adoption. We show that our systems improve both timeliness and correctness in performing realistic and large-scale network operations. Finally, to address the current limitations and difficulty of evaluating novel network management systems, we have designed a distributed network testing platform that relies on network and device virtualization to provide realistic environments and isolation from production networks. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78837/1/chenxu_1.pd
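    As one illustration of the kind of formal abstraction mentioned above, the sketch below encodes a small maintenance workflow as a deterministic finite automaton; the states, events, and transitions are invented for illustration and are not taken from the dissertation.

```python
# Hypothetical example: a deterministic finite automaton capturing a
# simple "drain, upgrade, restore" maintenance workflow. States, events,
# and transitions are illustrative only.
TRANSITIONS = {
    ("in_service", "drain_traffic"): "drained",
    ("drained", "apply_upgrade"): "upgraded",
    ("upgraded", "run_health_check"): "verified",
    ("verified", "restore_traffic"): "in_service",
}

def run_workflow(events, start="in_service"):
    """Replay a sequence of operator events against the DFA.

    Raises ValueError if an event is not allowed in the current state,
    which is how a knowledge plane could flag an out-of-order operation.
    """
    state = start
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        state = TRANSITIONS[key]
    return state

# A valid order of operations ends back in service:
print(run_workflow(["drain_traffic", "apply_upgrade",
                    "run_health_check", "restore_traffic"]))
# Upgrading before draining would raise an error, catching the misstep early.
```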

    Implementing Soak Testing for an Access Network Solution

    The quality requirements for telecommunications software are extremely demanding. Operators usually have SLA agreements with their customers, and violations of those contracts may lead to substantial compensation payments. Furthermore, every moment that equipment or a service is not operating correctly means lost income for the operator. For these reasons, it is extremely important for telecommunications equipment to continue functioning properly without service-affecting breaks. The purpose of this thesis was to design and implement automated soak testing for the IP/MPLS-based Tellabs 8600 router series. The system under test is composed of several network elements and a graphical Tellabs 8000 Network Management System. The purpose of this testing environment is to reveal defects that do not show up immediately in functional or regression testing but may manifest when the system is used for longer periods or operations are executed many times. A framework for automatically operating the test network and detecting problems programmatically was implemented in this thesis. The testing environment was successfully implemented and satisfies the objectives initially set for it. It has been taken into use in system testing at Tellabs and, after deployment, has proven to be a useful and effective system. A second, fully identical environment was also implemented for the system testing group
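    A soak-testing harness of the kind described here typically repeats a set of operations against the system under test and checks its health after each iteration; the sketch below is a generic illustration and does not reflect the actual Tellabs framework or its interfaces.

```python
# Hypothetical soak-test skeleton: repeat operations for a long period and
# detect problems programmatically. The operations and health checks are
# placeholders, not the Tellabs framework's real interfaces.
import logging
import time

def create_and_delete_vpn():         # placeholder operation on the test network
    pass

def check_element_health() -> bool:  # placeholder check (e.g. via SNMP or CLI)
    return True

OPERATIONS = [create_and_delete_vpn]
DURATION_HOURS = 72                  # soak tests run far longer than functional tests
CHECK_INTERVAL_S = 60

def soak_test():
    logging.basicConfig(level=logging.INFO)
    deadline = time.time() + DURATION_HOURS * 3600
    iteration = 0
    while time.time() < deadline:
        iteration += 1
        for op in OPERATIONS:
            op()
        if not check_element_health():
            # Problems that only appear after many repetitions or long run
            # times are exactly what soak testing is meant to catch.
            logging.error("health check failed at iteration %d", iteration)
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    soak_test()
```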

    Ethical Hacking Using Penetration Testing

    This thesis provides details of the hardware architecture and the software scripting employed to demonstrate penetration testing in a laboratory setup. The architecture depicts an organizational computing asset or environment. With the increasing number of cyber-attacks throughout the world, network security is becoming an important issue. This has motivated a large number of "ethical hackers" to develop methodologies and scripts to defend against security attacks. As it is too onerous to maintain and monitor attacks on individual hardware and software in an organization, the demand for new ways to manage security systems gave rise to the idea of penetration testing. Many research groups have designed algorithms, depending on the size, type, and purpose of the application, to secure networks [55]. In this thesis, we create a laboratory setup replicating an organizational infrastructure to study penetration testing in a real-time server-client environment. To make this possible, we use the Border Gateway Protocol (BGP) as the routing protocol, since it is widely used in current networks. Moreover, BGP exhibits a few vulnerabilities of its own, which makes it a promising target for security assessment. Here, we propose (a) computer-based attacks and (b) actual network-based attacks, including defense mechanisms. The thesis thus describes how penetration testing is accomplished over a desired BGP network. The procedural generation of packets, exploits, and payloads involves internal and external network attacks. We start with the details of all sub-fields in the stream of penetration testing, including their requirements and outcomes. As an informative and learning-oriented piece of research, this thesis discusses the types of attacks on routers, switches, and physical client machines. Our work also deals with the limitations of implementing penetration testing, discussing the vulnerabilities of current standards in the technology. Furthermore, we consider the methodologies that require attention in order to accomplish the most efficient outcomes with penetration testing. Overall, this work has provided a great learning opportunity in the area of ethical hacking using penetration testing.
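    As a small illustration of the reconnaissance stage that typically precedes such BGP-focused attacks, the sketch below probes candidate routers for an open BGP port (TCP 179); the addresses are placeholders and this is not the tooling used in the thesis.

```python
# Hypothetical reconnaissance sketch: check which lab routers accept
# connections on the BGP port (TCP 179). Addresses are placeholders.
import socket

LAB_ROUTERS = ["10.0.0.1", "10.0.0.2", "10.0.1.1"]  # assumed lab addressing
BGP_PORT = 179

def bgp_port_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on port 179."""
    try:
        with socket.create_connection((host, BGP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for router in LAB_ROUTERS:
    state = "open" if bgp_port_open(router) else "closed/filtered"
    print(f"{router}: BGP port {state}")
```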

    Proactive techniques for correct and predictable Internet routing

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 185-193). The Internet is composed of thousands of autonomous, competing networks that exchange reachability information using an interdomain routing protocol. Network operators must continually reconfigure the routing protocols to realize various economic and performance goals. Unfortunately, there is no systematic way to predict how the configuration will affect the behavior of the routing protocol or to determine whether the routing protocol will operate correctly at all. This dissertation develops techniques to reason about the dynamic behavior of Internet routing, based on static analysis of the router configurations, before the protocol ever runs on a live network. Interdomain routing offers each independent network tremendous flexibility in configuring the routing protocols to accomplish various economic and performance tasks. Routing configurations are complex, and writing them is similar to writing a distributed program; the (unavoidable) consequence of configuration complexity is the potential for incorrect and unpredictable behavior. These mistakes and unintended interactions lead to routing faults, which disrupt end-to-end connectivity. Network operators writing configurations make mistakes; they may also specify policies that interact in unexpected ways with policies in other networks. To avoid disrupting network connectivity and degrading performance, operators would benefit from being able to determine the effects of configuration changes before deploying them on a live network; unfortunately, the status quo provides them no opportunity to do so. This dissertation develops the techniques to achieve this goal of proactively ensuring correct and predictable Internet routing. The first challenge in guaranteeing correct and predictable behavior from a routing protocol is defining a specification for correct behavior. We identify three important aspects of correctness (path visibility, route validity, and safety) and develop proactive techniques for guaranteeing that these properties hold. Path visibility states that the protocol disseminates information about paths in the topology; route validity says that this information actually corresponds to those paths; safety says that the protocol ultimately converges to a stable outcome, implying that routing updates actually correspond to topological changes. Armed with this correctness specification, we tackle the second challenge: analyzing routing protocol configurations that may be distributed across hundreds of routers. We develop techniques to check whether a routing protocol satisfies the correctness specification within a single independently operated network. We find that much of the specification can be checked with static configuration analysis alone. We present examples of real-world routing faults and propose a systematic framework to classify, detect, correct, and prevent them. We describe the design and implementation of rcc ("router configuration checker"), a tool that uses static configuration analysis to enable network operators to debug configurations before deploying them in an operational network. We have used rcc to detect faults in 17 different networks, including several nationwide Internet service providers (ISPs). To date, rcc has been downloaded by over seventy network operators. A critical aspect of guaranteeing correct and predictable Internet routing is ensuring that the interactions of the configurations across multiple networks do not violate the correctness specification. Guaranteeing safety is challenging because each network sets its policies independently, and these policies may conflict. Using a formal model of today's Internet routing protocol, we derive conditions to guarantee that unintended policy interactions will never cause the routing protocol to oscillate. This dissertation also takes steps to make Internet routing more predictable. We present algorithms that help network operators predict how a set of distributed router configurations within a single network will affect the flow of traffic through that network. We describe a tool based on these algorithms that exploits the unique characteristics of routing data to reduce computational overhead. Using data from a large ISP, we show that this tool correctly computes BGP routing decisions and has a running time that is acceptable for many tasks, such as traffic engineering and capacity planning. By Nicholas Greer Feamster. Ph.D.
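    To make the idea of static configuration analysis concrete, the sketch below checks one simple route-validity-style property (that every statically configured next hop refers to an interface address defined somewhere in the network) over already-parsed configurations; the data model and the check are invented for illustration and are far simpler than what rcc actually does.

```python
# Hypothetical illustration of static configuration analysis. The data
# model and the property checked are simplified inventions; rcc performs
# much richer checks over real router configurations.
from typing import Dict, List

# Parsed, per-router configuration: interface addresses and static routes.
configs: Dict[str, dict] = {
    "r1": {"interfaces": ["10.0.0.1", "10.0.12.1"],
           "static_routes": [("192.168.1.0/24", "10.0.12.2")]},
    "r2": {"interfaces": ["10.0.12.2"],
           "static_routes": [("192.168.2.0/24", "10.0.99.9")]},  # dangling next hop
}

def check_next_hops(configs: Dict[str, dict]) -> List[str]:
    """Flag static routes whose next hop is not any router's interface."""
    all_addresses = {addr
                     for cfg in configs.values()
                     for addr in cfg["interfaces"]}
    faults = []
    for router, cfg in configs.items():
        for prefix, next_hop in cfg["static_routes"]:
            if next_hop not in all_addresses:
                faults.append(f"{router}: route to {prefix} points at "
                              f"unknown next hop {next_hop}")
    return faults

for fault in check_next_hops(configs):
    print(fault)   # catches the misconfiguration before deployment
```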

    Deliverable DJRA1.3: Tool prototype for creating and stitching multiple network resources for virtual infrastructures

    This document describes the prototype FEDERICA Slice Tool developed for the virtualization of network elements in FEDERICA and for creating and stitching network resources over this virtual infrastructure. An SNMP-based resource discovery prototype is also introduced as a new functionality to be integrated into the tool. The deliverable also presents a viability study for the use of traffic prioritization in the FEDERICA infrastructure and some network performance measurements on a real slice within FEDERICA. This document reports the final results of the JRA1.2 Activity in the development of a tool prototype for creating sets of virtual resources in FEDERICA. The prototype's goal is to simplify and automate part of the work for the NOC. The tool may also serve, with different privileges, a FEDERICA user operating on his/her slice. The tool described here was designed with the objective of providing an interactive application with a graphical interface for the NOC and the end users (researchers) to operate on resources. The tool simplifies the creation and configuration of resources in a slice and is a mandatory step to ensure scalability of the NOC effort. It offers an interactive graphical user interface that translates the users' actions into commands on the substrate (network nodes and V-nodes) and slice elements (virtual machines). User accounts may be created for the NOC and for researchers, each with specific privileges that enable different sets of capabilities. The NOC account has full access to all the resources in the substrate, while each user's account has full access only to the virtual resources in his/her slice. The tool has been developed as open source code in the Java programming language and relies on the open source Globus® Toolkit. Testing has been performed in a laboratory environment and on some FEDERICA substrate equipment (1 switch, 2 VMware servers) in their standard configuration. For testing the router, web services, and GUI, an additional computer with a public IP address was used. Postprint (published version)

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control the digital content consumption services and earn most of the revenue by implementing an all-you-can-eat model via subscriptions or hyper-targeted advertisements. A recent trend, which revamps the existing Internet architecture and design, is vertical integration in which a content provider and an access ISP act as a single body in a sugarcane form. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. It is expected that current routing will need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues in a more secure way, and offering new services at lower cost. Considering that the prices of DRAM or TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution to manage the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models, and by exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of a future in terms of peering between the new emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering, from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and notifying of any violation. Meta-peering is one of the many outcroppings of the vertical integration procedure and could be offered to ISPs as a standalone service.
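    The kind of cost comparison discussed above can be made concrete with a back-of-the-envelope calculation; all prices and capacities below are invented placeholders, not figures from the thesis or from any cloud provider's actual price list.

```python
# Hypothetical cost comparison between holding the full routing table in
# legacy router memory (TCAM/DRAM line cards) and offloading route
# computation and storage to a cloud service. All numbers are placeholders.
ROUTES = 950_000                      # assumed size of a full IPv4 BGP table
BYTES_PER_ROUTE = 256                 # assumed per-route storage footprint

# Legacy option: amortized cost of memory-rich line cards.
LINE_CARD_COST = 40_000.0             # assumed purchase price (USD)
LINE_CARD_LIFETIME_MONTHS = 48
legacy_monthly = LINE_CARD_COST / LINE_CARD_LIFETIME_MONTHS

# Cloud option: storage plus compute for centralized route processing.
GB = ROUTES * BYTES_PER_ROUTE / 1e9
STORAGE_PRICE_PER_GB_MONTH = 0.10     # assumed
VM_PRICE_PER_HOUR = 0.20              # assumed
HOURS_PER_MONTH = 730
cloud_monthly = GB * STORAGE_PRICE_PER_GB_MONTH + VM_PRICE_PER_HOUR * HOURS_PER_MONTH

print(f"legacy line card : ${legacy_monthly:8.2f} / month")
print(f"cloud offloading : ${cloud_monthly:8.2f} / month")
print("cloud is cheaper" if cloud_monthly < legacy_monthly else "legacy is cheaper")
```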

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    International Mention in the doctoral degree. While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the service region of the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to an average of 49.7% worldwide, according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience a poor Quality of Service (QoS) provided at high costs. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of a methodology that could be adopted for similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructures for alleviating the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then unveils our timely and longitudinal inspection of African interdomain routing, using the enhanced RIPE Atlas measurement infrastructure, to fill the lack of knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques suitable for the treatment of any set of similar measurements to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, the currency area, or the geographic location of the country in which the ISP operates. They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on the efforts of stakeholders for traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used for collecting Time-Sequence Latency Probes (TSLP) measurements during a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links experienced noticeable indications of congestion, thus promoting peering. The causes of these events were identified during IXP operator interviews, showing how essential collaboration with stakeholders is to understanding the causes of performance degradations. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed and afterwards deployed and tested in the AfriNIC region. This open source web platform, titled the "African" Route-collectors Data Analyzer (ARDA), provides metrics which picture in real time the status of interconnection at different levels, using public routing information available at local route-collectors with a peering viewpoint of the Internet.
The results highlight that a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) are peering in the region, a fraction that remained static from April to September 2017 despite the significant growth of IXPs in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region. Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. Towards this end, a comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is, indeed, expanding; nevertheless, much of its web content is still served from the United States (US) and Europe, although it is the most popular content source in many African countries. Further, the same analysis is repeated across top global and regional websites, showing that even top African websites prefer to host their content abroad. Following that, the primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are explored to outline proposals for further ISP and CP deployments. Considering the above, an option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs), as well as RTTs to ASes hosting the top 10 global and top 10 regional Alexa websites. We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier. Programa Oficial de Doctorado en Ingeniería Telemática. Committee: President: David Fernández Cambronero; Secretary: Alberto García Martínez; Member: Cristel Pelsse
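    The ARDA-style metric mentioned above (the share of AfriNIC-assigned ASNs visible as peers at local route collectors) can be approximated with a small script over BGP table dumps; the file formats, parsing, and names below are illustrative assumptions, not the platform's actual implementation.

```python
# Hypothetical sketch of an ARDA-like metric: what fraction of ASNs
# assigned by the regional registry appear in AS paths seen at local
# route collectors. Inputs and parsing are simplified assumptions.
def load_assigned_asns(path: str) -> set[int]:
    """Read one ASN per line from a registry allocation dump (assumed format)."""
    with open(path) as fh:
        return {int(line.strip()) for line in fh if line.strip()}

def asns_seen_at_collector(path: str) -> set[int]:
    """Collect every ASN appearing in a route-collector dump exported as
    plain text, one space-separated AS path per line (assumed format)."""
    seen: set[int] = set()
    with open(path) as fh:
        for line in fh:
            seen.update(int(token) for token in line.split() if token.isdigit())
    return seen

def peering_share(assigned_file: str, collector_file: str) -> float:
    assigned = load_assigned_asns(assigned_file)
    visible = asns_seen_at_collector(collector_file)
    return 100.0 * len(assigned & visible) / len(assigned)

# Example usage with placeholder file names:
# print(f"{peering_share('afrinic_asns.txt', 'route_collector_paths.txt'):.1f}% "
#       "of assigned ASNs are visible at the collector")
```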

    Abstracting network policies

    Almost every human activity in recent years relies either directly or indirectly on the smooth and efficient operation of the Internet. The Internet is an interconnection of multiple autonomous networks that operate based on policies agreed upon between various institutions across the world. The network policies guiding an institution's computer infrastructure, both internally (such as firewall relationships) and externally (such as routing relationships), are developed by a diverse group of lawyers, accountants, network administrators, and managers, amongst others. Network policies developed by this group of individuals are usually sketched on a whiteboard in a graph-like format. It is, however, the responsibility of network administrators to translate and configure the various network policies that have been agreed upon. The configuration of these network policies is generally done on physical devices such as routers, domain name servers, firewalls, and other middleboxes. The manual configuration process for such network policies is known to be tedious, time consuming, and prone to human error, which can lead to various anomalies in the configuration commands. In recent years, many research projects and corporate organisations have to some degree abstracted the network management process, with emphasis on network devices (such as Cisco VIRL) or individual network policies (such as Propane). [Continues.]
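    Since the whiteboard policies described above are naturally graph-like, a minimal sketch of how such an abstraction could be captured in code is shown below; the node names, policy attributes, and checks are invented for illustration and are not drawn from the thesis.

```python
# Hypothetical policy-as-graph sketch: nodes are network zones or peers,
# edges carry the agreed policy (e.g. "allow", "deny", "peer", "provider").
# Names and rules are illustrative only.
POLICY_GRAPH = {
    ("internal", "dmz"):       {"firewall": "allow", "ports": [443]},
    ("dmz", "internet"):       {"firewall": "allow", "ports": [80, 443]},
    ("internal", "internet"):  {"firewall": "deny"},
    ("as65001", "as65002"):    {"bgp": "peer"},
    ("as65001", "as65010"):    {"bgp": "provider"},
}

def allowed(src: str, dst: str, port: int) -> bool:
    """Answer a simple reachability question from the abstract policy,
    before anything is translated into device configuration."""
    edge = POLICY_GRAPH.get((src, dst))
    if edge is None or edge.get("firewall") != "allow":
        return False
    return port in edge.get("ports", [])

print(allowed("internal", "dmz", 443))       # True
print(allowed("internal", "internet", 443))  # False: the policy denies it
```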