Prelude: Ensuring Inter-Domain Loop-Freedom in~SDN-Enabled Networks
Software-Defined-eXchanges (SDXes) promise to improve the inter-domain
routing ecosystem through SDN deployment.
Yet, the naive deployment of SDN on the Internet raises concerns about the
correctness of the inter-domain data plane. By allowing operators to deflect
traffic from the default BGP route, SDN policies can create permanent
forwarding loops that are invisible to the control plane.
In this paper, we propose a system, called Prelude, for detecting SDN-induced
forwarding loops between SDXes with high accuracy without leaking the private
routing information of network operators. To achieve this, we leverage Secure
Multi-Party Computation (SMPC) techniques to build a novel and general
privacy-preserving primitive that detects whether any subset of SDN rules might
affect the same portion of traffic without learning anything about those rules.
We then leverage that primitive as the main building block of a distributed
system tailored to detect forwarding loops among any set of SDXes. We leverage
the particular nature of SDXes to further improve the efficiency of our SMPC
solution.
Our solution rejects 100x fewer valid SDN rules (i.e., rules that do not
create loops) than previous privacy-preserving solutions, while also
providing better privacy guarantees. Furthermore, it naturally gives network
operators some insight into the cost of the deflected paths.
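The overlap primitive at the heart of Prelude can be illustrated without its SMPC layer. The sketch below assumes rules are encoded as fixed-width ternary bit strings; that encoding, and the absence of any secure computation, are simplifications for illustration, not Prelude's actual design:

```python
# Plain (non-private) sketch of the overlap test that Prelude computes
# under SMPC: two ternary match rules can affect the same packet iff
# every bit position is compatible (equal, or wildcarded in either rule).
# The bit-string-with-'*' encoding is an assumption for illustration.

def rules_overlap(rule_a: str, rule_b: str) -> bool:
    """Return True if some packet matches both ternary rules."""
    assert len(rule_a) == len(rule_b)
    for a, b in zip(rule_a, rule_b):
        if a != '*' and b != '*' and a != b:
            return False  # this bit position separates the two match spaces
    return True

# Two rules over a 4-bit match field:
print(rules_overlap("10**", "1***"))  # True: packets starting 10.. match both
print(rules_overlap("10**", "01**"))  # False: the first bit differs
```

In Prelude, this pairwise test is what the SMPC protocol evaluates obliviously, so that no participant learns the other's rules.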
Automated Formal Analysis of Internet Routing Configurations
Today's Internet interdomain routing protocol, the Border Gateway
Protocol (BGP), is increasingly complicated and fragile due to policy
misconfigurations by individual autonomous systems (ASes). To create
provably correct networks, the past twenty years have witnessed, among
many other efforts, advances in formal network modeling, system
verification and testing, and point solutions for network management
by formal reasoning. On the conceptual side, the formal models
usually abstract away low-level details, specifying what are the
correct functionalities but not how to achieve them. On the practical
side, system verification of existing networked systems is generally
hard, and system testing or simulation provide limited formal
guarantees. This is known as a long standing challenge in network
practice --- formal reasoning is decoupled from actual implementation.
This thesis seeks to bridge formal reasoning and actual network
implementation in the setting of the Border Gateway Protocol (BGP), by
developing the Formally Verifiable Routing (FVR) toolkit that
combines formal methods and programming language techniques. Starting
from the formal model, FVR automates verification of routing
models and the synthesis of faithful implementations that
carry the correctness property. Conversely, starting from large
real-world BGP systems with arbitrary policy configurations, FVR
automates the analysis of Internet routing configurations,
and also includes a novel network reduction technique that
scales up existing techniques for automated analysis. By
developing the above formal theories and tools, this thesis aims to
help network operators create and manage BGP systems with
correctness guarantees.
Analyzing BGP Instances in Maude
Analyzing Border Gateway Protocol (BGP) instances is a crucial step in the design and implementation of safe BGP systems. Today, the analysis is a manual and tedious process. Researchers study the instances by manually constructing execution sequences, hoping to either identify an oscillation or show that the instance is safe by exhaustively examining all possible sequences. We propose to automate the analysis by using Maude, a tool based on rewriting logic. We have developed a library specifying a generalized path vector protocol, and methods to instantiate the library with customized routing policies. Protocols can be analyzed automatically by Maude, once users provide specifications of the network topology and routing policies. Using our Maude library, protocols or policies can be easily specified and checked for problems. To validate our approach, we performed safety analysis of well-known BGP instances and actual routing configurations.
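The kind of exhaustive exploration the Maude library automates can be sketched in miniature. The toy checker below is a hypothetical stand-in for the rewriting-logic model: it repeatedly activates all nodes of the classic BAD GADGET instance (Griffin/Shepherd/Wilfong), in which node rankings are the only policy input, and stops when a global state recurs:

```python
# Toy exhaustive-exploration sketch: synchronously activate all nodes of
# a small path-vector instance and stop when a global state repeats.
# BAD GADGET has no stable solution, so the dynamics must cycle.
# This is an illustrative stand-in, not the Maude library's model.

# ranked[u] lists u's permitted paths to origin 0, most preferred first
ranked = {
    1: [(1, 3, 0), (1, 0)],
    2: [(2, 1, 0), (2, 0)],
    3: [(3, 2, 0), (3, 0)],
}

def step(state):
    """One synchronous round: each node selects its best permitted path
    whose tail is the path currently selected by its next hop."""
    new = {}
    for u, paths in ranked.items():
        new[u] = ()  # () means "no path selected"
        for p in paths:
            tail = p[1:]
            if tail == (0,) or state.get(tail[0]) == tail:
                new[u] = p
                break
    return new

state, seen = {1: (), 2: (), 3: ()}, []
while state not in seen:   # explore until a global state recurs
    seen.append(state)
    state = step(state)
print("oscillates" if state != step(state) else "stable")  # prints "oscillates"
```

The real library instead explores all interleavings of asynchronous activations, which is exactly the tedious case analysis that Maude's search commands mechanize.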
The BGP Visibility Toolkit: detecting anomalous internet routing behavior
In this paper, we propose the BGP Visibility Toolkit, a system for detecting and analyzing anomalous behavior in the Internet. We show that interdomain prefix visibility can be used to single out cases of erroneous behavior resulting from misconfiguration or bogus routing policies. The implementation of routing policies with BGP is a complicated process, involving fine-tuning operations and interactions with the policies of the other active ASes. Network operators might end up with faulty configurations or unintended routing policies that prevent the success of their strategies and impact their revenues. As part of the Visibility Toolkit, we propose the BGP Visibility Scanner, a tool which identifies limited-visibility prefixes in the Internet. The tool enables operators to provide feedback on the expected visibility status of prefixes. We build a unique set of ground-truth prefixes qualified by their ASes as intended or unintended to have limited visibility. Using a machine learning algorithm, we train on this unique dataset an alarm system that separates with 95% accuracy the prefixes with unintended limited visibility. Hence, we find that visibility features are generally powerful enough to detect prefixes which are suffering from inadvertent effects of routing policies. Limited visibility could render a whole prefix globally unreachable. This points towards a serious problem, as limited reachability of a non-negligible set of prefixes undermines the global connectivity of the Internet. We thus verify the correlation between global visibility and global connectivity of prefixes. This work was supported in part by the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant 317647 (Leone).
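The core visibility test behind the BGP Visibility Scanner can be sketched as follows; the monitor RIB contents and the 95% threshold below are illustrative assumptions, not the tool's actual feeds or rules:

```python
# Sketch of the "limited visibility" test: a prefix present in only a
# small fraction of full-feed monitors' routing tables is flagged.
# RIB contents and the 95% threshold are illustrative assumptions.

def visibility(prefix, rib_snapshots):
    """Fraction of monitors whose RIB contains the prefix."""
    seeing = sum(1 for rib in rib_snapshots if prefix in rib)
    return seeing / len(rib_snapshots)

ribs = [
    {"203.0.113.0/24", "198.51.100.0/22"},  # monitor A
    {"203.0.113.0/24", "198.51.100.0/22"},  # monitor B
    {"198.51.100.0/22"},                    # monitor C misses the /24
    {"203.0.113.0/24", "198.51.100.0/22"},  # monitor D
]

for pfx in ("203.0.113.0/24", "198.51.100.0/22"):
    v = visibility(pfx, ribs)
    label = "high" if v >= 0.95 else "limited"
    print(f"{pfx}: seen by {v:.0%} of monitors -> {label} visibility")
```

The toolkit's contribution is then to separate, with the trained classifier, which limited-visibility cases are intended traffic engineering and which are faults.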
Proactive techniques for correct and predictable Internet routing
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 185-193). The Internet is composed of thousands of autonomous, competing networks that exchange reachability information using an interdomain routing protocol. Network operators must continually reconfigure the routing protocols to realize various economic and performance goals. Unfortunately, there is no systematic way to predict how the configuration will affect the behavior of the routing protocol or to determine whether the routing protocol will operate correctly at all. This dissertation develops techniques to reason about the dynamic behavior of Internet routing, based on static analysis of the router configurations, before the protocol ever runs on a live network. Interdomain routing offers each independent network tremendous flexibility in configuring the routing protocols to accomplish various economic and performance tasks. Routing configurations are complex, and writing them is similar to writing a distributed program; the (unavoidable) consequence of configuration complexity is the potential for incorrect and unpredictable behavior. These mistakes and unintended interactions lead to routing faults, which disrupt end-to-end connectivity. Network operators writing configurations make mistakes; they may also specify policies that interact in unexpected ways with policies in other networks. To avoid disrupting network connectivity and degrading performance, operators would benefit from being able to determine the effects of configuration changes before deploying them on a live network; unfortunately, the status quo provides them no opportunity to do so.
This dissertation develops the techniques to achieve this goal of proactively ensuring correct and predictable Internet routing. The first challenge in guaranteeing correct and predictable behavior from a routing protocol is defining a specification for correct behavior. We identify three important aspects of correctness (path visibility, route validity, and safety) and develop proactive techniques for guaranteeing that these properties hold. Path visibility states that the protocol disseminates information about paths in the topology; route validity says that this information actually corresponds to those paths; safety says that the protocol ultimately converges to a stable outcome, implying that routing updates actually correspond to topological changes. Armed with this correctness specification, we tackle the second challenge: analyzing routing protocol configurations that may be distributed across hundreds of routers. We develop techniques to check whether a routing protocol satisfies the correctness specification within a single independently operated network. We find that much of the specification can be checked with static configuration analysis alone. We present examples of real-world routing faults and propose a systematic framework to classify, detect, correct, and prevent them. We describe the design and implementation of rcc ("router configuration checker"), a tool that uses static configuration analysis to enable network operators to debug configurations before deploying them in an operational network. We have used rcc to detect faults in 17 different networks, including several nationwide Internet service providers (ISPs). To date, rcc has been downloaded by over seventy network operators. A critical aspect of guaranteeing correct and predictable Internet routing is ensuring that the interactions of the configurations across multiple networks do not violate the correctness specification.
Guaranteeing safety is challenging because each network sets its policies independently, and these policies may conflict. Using a formal model of today's Internet routing protocol, we derive conditions to guarantee that unintended policy interactions will never cause the routing protocol to oscillate. This dissertation also takes steps to make Internet routing more predictable. We present algorithms that help network operators predict how a set of distributed router configurations within a single network will affect the flow of traffic through that network. We describe a tool based on these algorithms that exploits the unique characteristics of routing data to reduce computational overhead. Using data from a large ISP, we show that this tool correctly computes BGP routing decisions and has a running time that is acceptable for many tasks, such as traffic engineering and capacity planning. By Nicholas Greer Feamster. Ph.D.
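One of the path-visibility checks a tool like rcc performs can be sketched as a static analysis over extracted session data. The snippet below is a hypothetical simplification (rcc parses real router configurations; here the router names and iBGP session list are hard-coded inputs) that flags a non-full-mesh iBGP topology, which can leave some routers unable to learn certain routes when no route reflection is configured:

```python
# Hypothetical sketch of a path-visibility check: without route
# reflection, iBGP sessions must form a full mesh. Inputs are stand-ins
# for session data extracted from real router configurations.
from itertools import combinations

routers = ["r1", "r2", "r3", "r4"]
ibgp_sessions = {("r1", "r2"), ("r1", "r3"), ("r2", "r3"), ("r2", "r4")}

def missing_sessions(routers, sessions):
    """Return router pairs with no configured iBGP session."""
    norm = {tuple(sorted(s)) for s in sessions}  # sessions are undirected
    return [p for p in combinations(sorted(routers), 2) if p not in norm]

for a, b in missing_sessions(routers, ibgp_sessions):
    print(f"path-visibility fault: no iBGP session between {a} and {b}")
```

The value of such static checks is that they run on configurations alone, before any route is ever exchanged on the live network.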
Security analysis of network neighbors
Master's thesis in Information Security, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2010. This thesis addresses a problem common to many current Internet Service Providers (ISPs): efficient mitigation of malicious traffic flowing through their networks. This unwanted traffic wastes network resources, leading to a degradation of quality of service. It also creates an unsafe environment for users, thereby undermining the Internet's potential and opening the way for serious criminal activity. Some of the main constraints on creating systems that can tackle these problems are the enormous amount of traffic to be analyzed, the fact that the Internet is inherently untraceable, and the lack of incentive for transit networks to block this type of traffic.
Within the scope of a mid-scale ISP, this thesis focuses on three main areas: the origins of malicious traffic, the security classification of ISP neighbors, and intervention policies.
We collected network data on particular types of malicious traffic (address scans and flow floods) and network reachability information (BGP update messages from the RIPE Routing Information Service, RIS). We analyzed the malicious traffic looking for network patterns, which allowed us to understand that most of it originates from a very small subset of Internet ASes. We defined a correlation expression, which we named the Risk Score, to quantify the security risks of neighbor connections within an ISP's scope according to a set of security metrics. We finally proposed techniques to implement the network tasks required to mitigate malicious traffic efficiently, if possible in cooperation with other neighbors/ASes.
We are not aware of any existing work that correlates the malicious-traffic characteristics of
address scans and flow-flood attacks with the network reachability information of an ISP network, in order to classify the security of neighbor connections and decide whether to filter traffic from specific prefixes of an AS, or to block all traffic from an AS.
It is our belief that the findings presented in this thesis can be immediately applied to real-world scenarios, enabling more secure and scalable network environments and thereby opening the way for better deployment environments for new services.
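The thesis defines its own correlation expression for the Risk Score, which is not reproduced here. As a purely hypothetical illustration of the shape such a metric can take, a weighted sum of normalized per-neighbor metrics might look like this (the metric names and weights are assumptions, not the thesis's actual expression):

```python
# Hypothetical Risk Score stand-in: a weighted sum of normalized
# per-neighbor security metrics, each in [0, 1]. Metric names and
# weights are illustrative assumptions only.

def risk_score(metrics, weights):
    """Weighted combination of a neighbor's security metrics."""
    assert set(metrics) == set(weights)
    return sum(weights[m] * metrics[m] for m in metrics)

neighbor = {
    "scan_sources": 0.8,  # share of address-scan sources behind this AS
    "flood_flows": 0.6,   # share of observed flow-flood traffic
    "prefix_churn": 0.2,  # instability seen in its BGP updates
}
weights = {"scan_sources": 0.5, "flood_flows": 0.4, "prefix_churn": 0.1}

print(f"risk score: {risk_score(neighbor, weights):.2f}")
```

A score of this kind lets an operator rank neighbors and reserve the most drastic intervention (blocking all traffic from an AS) for the highest-risk connections.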
Patterns and Interactions in Network Security
Networks play a central role in cyber-security: networks deliver security
attacks, suffer from them, defend against them, and sometimes even cause them.
This article is a concise tutorial on the large subject of networks and
security, written for all those interested in networking, whether their
specialty is security or not. To achieve this goal, we derive our focus and
organization from two perspectives. The first perspective is that, although
mechanisms for network security are extremely diverse, they are all instances
of a few patterns. Consequently, after a pragmatic classification of security
attacks, the main sections of the tutorial cover the four patterns for
providing network security, of which the familiar three are cryptographic
protocols, packet filtering, and dynamic resource allocation. Although
cryptographic protocols hide the data contents of packets, they cannot hide
packet headers. When users need to hide packet headers from adversaries, which
may include the network from which they are receiving service, they must resort
to the pattern of compound sessions and overlays. The second perspective comes
from the observation that security mechanisms interact in important ways, with
each other and with other aspects of networking, so each pattern includes a
discussion of its interactions. Comment: 63 pages, 28 figures, 56 references.
Leveraging Conventional Internet Routing Protocol Behavior to Defeat DDoS and Adverse Networking Conditions
The Internet is a cornerstone of modern society. Yet increasingly devastating attacks against the Internet threaten to undermine the Internet's success at connecting the unconnected. Of all the adversarial campaigns waged against the Internet and the organizations that rely on it, distributed denial of service, or DDoS, tops the list of the most volatile attacks. In recent years, DDoS attacks have been responsible for large swaths of the Internet blacking out, while other attacks have completely overwhelmed key Internet services and websites. Core to the Internet's functionality is the way in which traffic on the Internet gets from one destination to another. The set of rules, or protocol, that defines the way traffic travels the Internet is known as the Border Gateway Protocol, or BGP, the de facto routing protocol on the Internet. Advanced adversaries often target the most used portions of the Internet by flooding the routes benign traffic takes with malicious traffic designed to cause widespread traffic loss to targeted end users and regions. This dissertation focuses on examining the following thesis statement. Rather than seek to redefine the way the Internet works to combat advanced DDoS attacks, we can leverage conventional Internet routing behavior to mitigate modern distributed denial of service attacks.
The research in this work breaks down into a single arc with three independent, but connected thrusts, which demonstrate that the aforementioned thesis is possible, practical, and useful. The first thrust demonstrates that this thesis is possible by building and evaluating Nyx, a system that can protect Internet networks from DDoS using BGP, without an Internet redesign and without cooperation from other networks. This work reveals that Nyx is effective in simulation for protecting Internet networks and end users from the impact of devastating DDoS. The second thrust examines the real-world practicality of Nyx, as well as other systems which rely on real-world BGP behavior. Through a comprehensive set of real-world Internet routing experiments, this second thrust confirms that Nyx works effectively in practice beyond simulation as well as revealing novel insights about the effectiveness of other Internet security defensive and offensive systems. We then follow these experiments by re-evaluating Nyx under the real-world routing constraints we discovered. The third thrust explores the usefulness of Nyx for mitigating DDoS against a crucial industry sector, power generation, by exposing the latent vulnerability of the U.S. power grid to DDoS and how a system such as Nyx can protect electric power utilities. This final thrust finds that the current set of exposed U.S. power facilities are widely vulnerable to DDoS that could induce blackouts, and that Nyx can be leveraged to reduce the impact of these targeted DDoS attacks
A system for the detection of limited visibility in BGP
International Mention in the doctoral degree (Mención Internacional en el título de doctor). The performance of the global routing system is vital to the thousands of entities operating
the Autonomous Systems (ASes) which make up the Internet. The Border Gateway
Protocol (BGP) is currently responsible for the exchange of reachability information and
the selection of paths according to their specified routing policies. BGP thus enables
traffic to flow from any point to any other connected to the Internet. The manner
in which traffic flows is often influenced by entities in the Internet according to
their preferences, which are implemented in the form of routing policies by tweaking BGP configurations.
Routing policies are usually complex and aim to achieve myriad goals, including technical,
economic and political purposes. Additionally, individual network managers need to
constantly adapt to interdomain routing changes and, by engineering the Internet
traffic, optimize the use of their network.
Despite the flexibility offered, the implementation of routing policies is a complicated
process in itself, involving fine-tuning operations. Thus, it is an error-prone task and
operators might end up with faulty configurations that impact the efficacy of their strategies
or, more importantly, their revenues. Moreover, even when legitimate routing
policies are correctly defined, unforeseen interactions between ASes have been
observed to cause important disruptions that affect the global routing system. The main reason behind this
resides in the fact that the actual inter-domain routing is the result of the interplay of
many routing policies from ASes across the Internet, possibly bringing about a different
outcome than the one expected.
In this thesis, we perform an extensive analysis of the intricacies emerging from the
complex netting of routing policies at the interdomain level, in the context of the current
operational status of the Internet. Abundant implications on the way traffic flows in
the Internet arise from the convolution of routing policies at a global scale, at times
resulting in ASes using suboptimal, ill-favored paths or in the undetected propagation of configuration errors in the routing system. We argue here that monitoring prefix visibility
at the interdomain level can be used to detect cases of faulty configurations or backfired
routing policies, which disrupt the functionality of the routing system. We show that the
lack of global prefix visibility can offer early warning signs for anomalous events which,
despite their impact, often remain hidden from state-of-the-art tools. Additionally, we show that such unintended Internet behavior not only degrades the efficacy of the routing
policies implemented by operators, causing their traffic to follow ill-favored paths, but
can also point out problems in the global connectivity of prefixes.
We further observe that the majority of prefixes suffering from limited visibility at the
interdomain level are more-specific prefixes, often used by network operators to
fulfill pressing traffic engineering needs. One important task achieved through the use
of routing policies for traffic engineering is the control and optimization of the routing
function in order to allow the ASes to engineer the incoming traffic. The advertisement
of more-specific prefixes, also known as prefix deaggregation, provides network operators
with a fine-grained method to control the interdomain ingress traffic, given that the
longest-prefix-match rule overrides any other routing policy applied to the covering less-specific
prefixes.
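The longest-prefix-match behavior described above can be demonstrated with the Python standard library; the prefixes and provider names are illustrative:

```python
# Longest-prefix match: an address covered by both a less-specific and a
# more-specific prefix follows the more-specific route, regardless of any
# policy attached to the covering prefix. Prefixes are illustrative.
import ipaddress

rib = {
    ipaddress.ip_network("198.51.100.0/22"): "transit A",
    ipaddress.ip_network("198.51.100.0/24"): "transit B (deaggregated)",
}

def lookup(addr):
    """Return the route of the longest matching prefix for addr."""
    dst = ipaddress.ip_address(addr)
    matches = [net for net in rib if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return rib[best]

print(lookup("198.51.100.7"))  # covered by both -> "transit B (deaggregated)"
print(lookup("198.51.102.9"))  # only the /22 covers it -> "transit A"
```

This is exactly why advertising a deaggregated more-specific prefix gives an operator fine-grained control over its ingress traffic.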
Nevertheless, however efficient, this traffic engineering tool comes with a cost, which
is usually externalized to the entire Internet community. Prefix deaggregation is a known
reason for the artificial inflation of the BGP routing table, which can further affect the
scalability of the global routing system. Looking past the main motivation for deploying
deaggregation in the first place, we identify and analyze here the economic impact of
this type of strategy. We propose a general Internet model to analyze the effect that
advertising more-specific prefixes has on the incoming transit traffic burstiness. We show
that deaggregation combined with selective advertisements (further defined as strategic
deaggregation) has a traffic stabilization side-effect, which translates into a decrease of the
transit traffic bill. Next, we develop a methodology for Internet Service Providers (ISPs)
to monitor general occurrences of deaggregation within their customer base. Furthermore,
the ISPs can detect selective advertisements of deaggregated prefixes, and thus identify customers which may impact the business of their providers. We apply the proposed
methodology on a complete set of data including routing, traffic, topological and billing
information provided by an operational ISP, and we discuss the obtained results. Official Doctoral Program in Telematics Engineering (Programa Oficial de Doctorado en Ingeniería Telemática). President: Arturo Azcorra Saloña. Secretary: Stefano Vissicchio. Member: kc claffy.