Optimization of BGP Convergence and Prefix Security in IP/MPLS Networks
Multi-Protocol Label Switching (MPLS) based networks form the backbone of the Internet, which communicates through the Border Gateway Protocol (BGP) to connect distinct networks, referred to as Autonomous Systems. As the technology matures, so do the challenges caused by the extreme growth rate of the Internet. The number of BGP prefixes required to facilitate such an increase in connectivity introduces multiple new critical issues, such as the scalability and security of BGP.
An implementation of an IP/MPLS core transmission network is illustrated through the four main pillars of an Autonomous System: Multi-Protocol Label Switching, the Border Gateway Protocol, Open Shortest Path First and the Resource Reservation Protocol. The symbiosis of these technologies introduces the practicalities of operating an IP/MPLS-based ISP network with traffic engineering and fault resilience at heart.
The first research objective of this thesis is to determine whether deploying a new BGP feature, BGP Prefix Independent Convergence (PIC), within AS16086 would be a worthwhile endeavour. This BGP extension aims to reduce the convergence delay of BGP prefixes inside an IP/MPLS core transmission network, thus improving the network's resilience against faults.
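The idea behind PIC can be sketched as a hierarchical FIB in which many prefixes share a single path-list object, so a next-hop failure is repaired with one pointer flip rather than one rewrite per prefix. The model below is a minimal illustration under that assumption; the class and next-hop names (PathList, PE1, PE2) are hypothetical, not a vendor API.

```python
# Toy hierarchical FIB illustrating BGP Prefix Independent Convergence:
# thousands of prefixes point at ONE shared path-list object, so a
# next-hop failure is repaired by flipping that single object (O(1))
# instead of rewriting every prefix entry (O(number of prefixes)).

class PathList:
    """Shared forwarding state: a primary next hop and a precomputed backup."""
    def __init__(self, primary: str, backup: str):
        self.primary, self.backup = primary, backup
        self.active = primary

    def repair(self) -> None:
        # The PIC "pointer flip": one write repairs every dependent prefix.
        self.active = self.backup


def build_fib(num_prefixes: int, pathlist: PathList) -> dict:
    # Every prefix entry references the same PathList instance.
    return {f"10.{i >> 8 & 255}.{i & 255}.0/24": pathlist
            for i in range(num_prefixes)}


pl = PathList(primary="PE1", backup="PE2")
fib = build_fib(50_000, pl)

assert fib["10.0.7.0/24"].active == "PE1"
pl.repair()                                  # primary next hop fails
assert fib["10.0.7.0/24"].active == "PE2"    # all prefixes converged at once
```

The convergence delay thus becomes independent of the number of prefixes, which is the property the feature's name advertises.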
Simultaneously, the second research objective is to survey the available mechanisms for protecting BGP prefixes, such as the Resource Public Key Infrastructure (RPKI) and the Artemis BGP Monitor, which provide proactive and reactive security for BGP prefixes within AS16086.
Finally, the prospective deployment of BGPsec is discussed to form an outlook on the future of IP/MPLS network design. The trust-based nature of BGP has become a distinct vulnerability, necessitating various technologies to secure the communications between the Autonomous Systems that form the network to end all networks, the Internet.
Sequential Aggregate Signatures with Lazy Verification from Trapdoor Permutations
Sequential aggregate signature schemes allow n signers, in order, to sign a message each, at a lower total cost than the cost of n individual signatures. We present a sequential aggregate signature scheme based on trapdoor permutations (e.g., RSA). Unlike prior such proposals, our scheme does not require a signer to retrieve the keys of other signers and verify the aggregate-so-far before adding its own signature. Indeed, we do not even require a signer to know the public keys of other signers!
Moreover, for applications that require signers to verify the aggregate anyway, our schemes support lazy verification: a signer can add its own signature to an unverified aggregate and forward it along immediately, postponing verification until load permits or the necessary public keys are obtained. This is especially important for applications where signers must access a large, secure, and current cache of public keys in order to verify messages. The price we pay is that our signature grows slightly with the number of signers.
We report a technical analysis of our scheme (which is provably secure in the random oracle model), a detailed implementation-level specification, and implementation results based on RSA and OpenSSL. To evaluate the performance of our scheme, we focus on the target application of BGPsec (formerly known as Secure BGP), a protocol designed for securing the global Internet routing system. There is a particular need for lazy verification with BGPsec, since it is run on routers that must process signatures extremely quickly, while being able to access tens of thousands of public keys. We compare our scheme to the algorithms currently proposed for use in BGPsec, and find that our signatures are considerably shorter than nonaggregate RSA (with the same sign and verify times) and have an order of magnitude faster verification than nonaggregate ECDSA, although ECDSA has shorter signatures when the number of signers is small.
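The general shape of a trapdoor-permutation aggregate chain can be sketched in a few lines: each signer folds the (possibly unverified) aggregate-so-far into the value it inverts, and verification peels signatures off in reverse. This is a toy illustration of the idea only, not the paper's exact scheme: the key sizes are far too small for real use, the hash is not a proper full-domain hash, and each hash covers only the current message and key rather than the whole signing history.

```python
import hashlib
import random
import secrets

def _is_probable_prime(n: int, rounds: int = 25) -> bool:
    # Miller-Rabin primality test (probabilistic).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def _gen_prime(bits: int) -> int:
    while True:
        c = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if _is_probable_prime(c):
            return c

def keygen(bits: int):
    e = 65537
    while True:
        p, q = _gen_prime(bits // 2), _gen_prime(bits // 2)
        try:
            return (pow(e, -1, (p - 1) * (q - 1)), e, p * q)  # (d, e, n)
        except ValueError:   # e not invertible mod phi: retry
            continue

def _h(msg: bytes, e: int, n: int) -> int:
    # Toy hash into Z_n binding the message to the signer's public key.
    data = msg + e.to_bytes(4, "big") + n.to_bytes((n.bit_length() + 7) // 8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def aggregate_sign(sigma_prev: int, msg: bytes, d: int, e: int, n: int) -> int:
    # Lazy verification: the signer folds its contribution into the
    # unverified aggregate-so-far and can forward it immediately.
    return pow((sigma_prev + _h(msg, e, n)) % n, d, n)

def aggregate_verify(sigma: int, signers) -> bool:
    # Peel signatures off in reverse signing order; chain must end at 0.
    for msg, e, n in reversed(signers):
        sigma = (pow(sigma, e, n) - _h(msg, e, n)) % n
    return sigma == 0

# Moduli grow along the chain so each aggregate fits in the next signer's group.
keys = [keygen(256 + 64 * i) for i in range(3)]
msgs = [b"AS65001", b"AS65002", b"AS65003"]
sigma, signers = 0, []
for (d, e, n), m in zip(keys, msgs):
    sigma = aggregate_sign(sigma, m, d, e, n)
    signers.append((m, e, n))

assert aggregate_verify(sigma, signers)
tampered = [(b"AS66666", signers[0][1], signers[0][2])] + signers[1:]
assert not aggregate_verify(sigma, tampered)
```

Note how the signer never touches the other signers' keys at signing time; they are needed only at verification, which is exactly what makes deferring verification ("lazy verification") possible.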
BGP Security in Partial Deployment: Is the Juice Worth the Squeeze?
As the rollout of secure route origin authentication with the RPKI slowly
gains traction among network operators, there is a push to standardize secure
path validation for BGP (i.e., S*BGP: S-BGP, soBGP, BGPSEC, etc.). Origin
authentication already does much to improve routing security. Moreover, the
transition to S*BGP is expected to be long and slow, with S*BGP coexisting in
"partial deployment" alongside BGP for a long time. We therefore use
theoretical and experimental approach to study the security benefits provided
by partially-deployed S*BGP, vis-a-vis those already provided by origin
authentication. Because routing policies have a profound impact on routing
security, we use a survey of 100 network operators to find the policies that
are likely to be most popular during partial S*BGP deployment. We find that
S*BGP provides only meagre benefits over origin authentication when these
popular policies are used. We also study the security benefits of other routing
policies, provide prescriptive guidelines for partially-deployed S*BGP, and
show how interactions between S*BGP and BGP can introduce new vulnerabilities
into the routing system.
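The route origin authentication that this abstract takes as its baseline can be sketched with RFC 6811-style validation of an announcement against a set of ROAs. This is a simplified model (an in-memory ROA list rather than a live RPKI cache, IPv4 only); the prefixes and AS numbers are illustrative.

```python
import ipaddress

def rov_state(announced: str, origin_as: int, roas) -> str:
    """Classify a BGP announcement as 'valid', 'invalid', or 'not-found'
    against ROAs given as (prefix, max_length, origin_as) tuples.
    Simplified RFC 6811 semantics."""
    pfx = ipaddress.ip_network(announced)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if pfx.version == roa.version and pfx.subnet_of(roa):
            covered = True            # some ROA covers this prefix
            if pfx.prefixlen <= max_len and origin_as == roa_as:
                return "valid"        # within length bound, right origin
    return "invalid" if covered else "not-found"

roas = [("203.0.113.0/24", 24, 64500)]
assert rov_state("203.0.113.0/24", 64500, roas) == "valid"
assert rov_state("203.0.113.0/25", 64500, roas) == "invalid"    # too specific
assert rov_state("203.0.113.0/24", 64666, roas) == "invalid"    # wrong origin
assert rov_state("198.51.100.0/24", 64500, roas) == "not-found" # no covering ROA
```

Origin validation of this kind stops forged-origin hijacks but says nothing about the rest of the AS path, which is the gap the S*BGP proposals above aim to close.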
A pragmatic approach toward securing inter-domain routing
Internet security poses complex challenges at different levels, where even the basic requirement of available Internet connectivity sometimes becomes a conundrum. Recent Internet service disruption events have made the vulnerability of the Internet apparent, and exposed the current limitations of Internet security measures as well. Usually, the main cause of such incidents, even in the presence of the security measures proposed so far, is the unintended or intended exploitation of loopholes in the protocols that govern the Internet. In this thesis, we focus on the security of two different protocols that were conceived with little or no security mechanisms but play a key role in both the present and the future of the Internet, namely the Border Gateway Protocol (BGP) and the Locator/Identifier Separation Protocol (LISP).
The BGP protocol, being the de facto inter-domain routing protocol of the Internet, plays a crucial role in current communications. Due to the lack of any intrinsic security mechanism, it is prone to a number of vulnerabilities that can result in partial paralysis of the Internet. In light of this, numerous security strategies were proposed, but none of them were pragmatic enough to be widely accepted, and only minor security tweaks have found the pathway to adoption. Even the recent IETF Secure Inter-Domain Routing (SIDR) Working Group (WG) efforts, including the Resource Public Key Infrastructure (RPKI), Route Origin Authorizations (ROAs), and BGP Security (BGPSEC), do not address policy-related security issues such as route leaks. Route leaks occur due to violation of the export routing policies among Autonomous Systems (ASes). Route leaks not only have the potential to cause large-scale Internet service disruptions but can result in traffic hijacking as well. In this part of the thesis, we examine the route leak problem and propose pragmatic security methodologies which a) require no changes to the BGP protocol, b) are neither dependent on third-party information nor on third-party security infrastructure, and c) are self-beneficial regardless of their adoption by other players. Our main contributions in this part of the thesis include a) a theoretical framework which, under realistic assumptions, enables a domain to autonomously determine if a particular received route advertisement corresponds to a route leak, and b) three incremental detection techniques, namely Cross-Path (CP), Benign Fool Back (BFB), and Reverse Benign Fool Back (R-BFB). Our strength resides in the fact that these detection techniques solely require the analytical usage of in-house control-plane and data-plane information, together with direct neighbor relationship information.
We evaluate the performance of the three proposed route leak detection techniques both through real-time experiments as well as using simulations at large scale. Our results show that the proposed detection techniques achieve high success rates for countering route leaks in different scenarios.
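The export-policy violations behind route leaks are commonly formalized with the Gao-Rexford "valley-free" rule, which the check below illustrates generically; it is not the thesis's CP/BFB/R-BFB techniques. A path is encoded as the sequence of link relationships traversed: 'c2p' (customer to provider), 'p2p' (peer), 'p2c' (provider to customer).

```python
def is_valley_free(path_rels) -> bool:
    """A valley-free AS path climbs customer->provider links first, crosses
    at most one peer link, then only descends provider->customer links.
    A route learned from a provider or peer that is re-exported upward or
    across is a leak."""
    descended = False   # crossed a peer link or gone downhill yet?
    for rel in path_rels:
        if rel == "c2p":
            if descended:
                return False      # climbing again after the peak: a valley
        elif rel == "p2p":
            if descended:
                return False      # second peer link, or peer after descent
            descended = True
        elif rel == "p2c":
            descended = True
        else:
            raise ValueError(f"unknown relationship {rel!r}")
    return True

assert is_valley_free(["c2p", "c2p", "p2p", "p2c", "p2c"])
assert is_valley_free(["c2p", "p2c"])
assert not is_valley_free(["p2c", "c2p"])   # provider route sent to a provider
assert not is_valley_free(["p2p", "p2p"])   # peer route sent to another peer
```

The difficulty the thesis addresses is that an AS cannot directly observe these relationships for remote links, which is why its detection techniques rely only on in-house and direct-neighbor information.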
The motivation behind the LISP protocol has shifted over time from solving routing scalability issues in the core Internet to a set of vital use cases for which LISP stands as a technology enabler. The IETF's LISP WG has recently started working toward securing LISP, but the protocol still lacks end-to-end mechanisms for securing the overall registration process on the mapping system and ensuring RLOC and EID authorization. As a result, LISP is unprotected against different attacks, such as RLOC spoofing, which can cripple even its basic functionality. In this part of the thesis we address the above-mentioned issues and propose practical solutions that counter them. Our solutions take advantage of the low technological inertia of the LISP protocol. The changes proposed for the LISP protocol and the utilization of existing security infrastructure in our solutions enable resource authorization and lay the foundation for the needed end-to-end security.
Leveraging Conventional Internet Routing Protocol Behavior to Defeat DDoS and Adverse Networking Conditions
The Internet is a cornerstone of modern society. Yet increasingly devastating attacks against the Internet threaten to undermine the Internet's success at connecting the unconnected. Of all the adversarial campaigns waged against the Internet and the organizations that rely on it, distributed denial of service, or DDoS, tops the list of the most volatile attacks. In recent years, DDoS attacks have been responsible for large swaths of the Internet blacking out, while other attacks have completely overwhelmed key Internet services and websites. Core to the Internet's functionality is the way in which traffic on the Internet gets from one destination to another. The set of rules, or protocol, that defines the way traffic travels the Internet is known as the Border Gateway Protocol, or BGP, the de facto routing protocol on the Internet. Advanced adversaries often target the most used portions of the Internet by flooding the routes benign traffic takes with malicious traffic designed to cause widespread traffic loss to targeted end users and regions. This dissertation examines the following thesis statement: rather than seek to redefine the way the Internet works to combat advanced DDoS attacks, we can leverage conventional Internet routing behavior to mitigate modern distributed denial of service attacks.
The research in this work breaks down into a single arc with three independent but connected thrusts, which demonstrate that the aforementioned thesis is possible, practical, and useful. The first thrust demonstrates that this thesis is possible by building and evaluating Nyx, a system that can protect Internet networks from DDoS using BGP, without an Internet redesign and without cooperation from other networks. This work reveals that Nyx is effective in simulation for protecting Internet networks and end users from the impact of devastating DDoS. The second thrust examines the real-world practicality of Nyx, as well as other systems that rely on real-world BGP behavior. Through a comprehensive set of real-world Internet routing experiments, this second thrust confirms that Nyx works effectively in practice beyond simulation and reveals novel insights about the effectiveness of other Internet security defensive and offensive systems. We then follow these experiments by re-evaluating Nyx under the real-world routing constraints we discovered. The third thrust explores the usefulness of Nyx for mitigating DDoS against a crucial industry sector, power generation, by exposing the latent vulnerability of the U.S. power grid to DDoS and how a system such as Nyx can protect electric power utilities. This final thrust finds that the currently exposed U.S. power facilities are widely vulnerable to DDoS attacks that could induce blackouts, and that Nyx can be leveraged to reduce the impact of these targeted DDoS attacks.
On the Adoption Dynamics of Internet Technologies: Models and Case Studies
Today, more than any time in history, our life-styles depend on networked systems,
ranging from power grids to the Internet and social networks. From shopping
online to attending a conference via P2P technologies, the Internet is changing the
way we perform certain tasks, which incentivizes more users to join the network.
This user population growth, as well as higher demand for better access to the
Internet, calls for its expansion and development, and therefore fuels the emergence of
new Internet technologies. However, many such technologies fail to get adopted by
their target user population due to various technical or socio-economical problems.
Understanding these (adoption) problems and the factors that play a significant role
in them, not only gives researchers a better insight into the dynamics of Internet
technology adoption, but also provides them with enhanced guidelines for designing
new Internet technologies. The primary motivation of this thesis is, therefore, to
provide researchers and network technology developers with an insight into what
factors are responsible for, or at least correlated with, the success or failure of an
Internet technology. We start by delving deeply into (arguably) the salient adoption problem the Internet has faced in its 40+ years of existence, and continues to face
for the foreseeable future, namely IPv6 adoption. The study is composed of
an extensive measurement component, in addition to models that capture the roles
of different Internet stakeholders in the adoption of IPv6. Then, we extend it to a
broad set of Internet protocols, and investigate the factors that affect their adoption.
The findings identify performance as the primary factor that not only affected
the adoption of IPv6, but also plays a role in the adoption of any other network data
plane protocol. Moreover, they show how backward compatibility as well as other
factors can affect the adoption of various protocols. The study provides a number
of models and methodologies that can be extended to other similar problems in
various research areas, such as network technology adoption and design, two-sided
markets, and network economics.
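Adoption dynamics of the kind modeled here are often benchmarked against simple diffusion models. The sketch below is a toy discrete-time Bass model, an illustration of how innovation pressure (p) and imitation pressure (q) produce an S-shaped adoption curve; it is not the thesis's stakeholder model, and the parameter values are arbitrary.

```python
def bass_adoption(p: float, q: float, market: float, steps: int):
    """Discrete-time Bass diffusion: new adopters per step come from
    spontaneous innovation (rate p) plus imitation of the installed
    base (rate q scaled by current penetration)."""
    adopters, trajectory = 0.0, []
    for _ in range(steps):
        new = (p + q * adopters / market) * (market - adopters)
        adopters += new
        trajectory.append(adopters)
    return trajectory

curve = bass_adoption(p=0.03, q=0.38, market=1.0, steps=30)
assert all(b >= a for a, b in zip(curve, curve[1:]))   # monotone S-curve
assert 0.95 < curve[-1] <= 1.0                         # saturates near the market
```

Factors like the performance advantage or backward compatibility discussed above enter such models by raising or lowering p and q, which shifts how quickly (or whether) the curve takes off.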
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, which is broadly known as Internet \emph{censorship}. As most countries improve their Internet infrastructure, they become able to implement more advanced censoring techniques. Moreover, advances in the application of machine learning to network traffic analysis have enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic, as they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call \emph{compressive traffic analysis}. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called \deepcorr, that outperforms the state of the art by significant margins in correlating network connections. \deepcorr leverages an advanced deep learning architecture to \emph{learn} a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable \emph{adversarial perturbations} to the patterns of live network traffic.
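The flow-correlation task described above can be illustrated with the classical statistical baseline that learned correlators improve on: comparing two flows' inter-packet delay vectors. This is a toy stand-in, not the \deepcorr architecture, and the timing values are fabricated for illustration.

```python
def pearson(x, y) -> float:
    """Pearson correlation of two equal-length inter-packet delay vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A flow and its network-perturbed copy correlate strongly;
# an unrelated flow does not.
entry = [0.10, 0.32, 0.05, 0.71, 0.22, 0.41]
exit_ = [d + 0.01 * i for i, d in enumerate(entry)]   # jittered copy
other = [0.55, 0.09, 0.48, 0.12, 0.66, 0.03]
assert pearson(entry, exit_) > 0.9
assert pearson(entry, other) < 0.5
```

A learned correlation function replaces this fixed statistic with one trained on real network perturbations, which is what yields the large accuracy gains the abstract reports.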
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than \emph{all} previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it \emph{downstream-only} decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools, which allows us to analyze the effect of different parameters and censoring behaviors on the performance of circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we design a new censorship circumvention tool called \name. \name aims at increasing the collateral damage of censorship by employing a ``mass'' of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.
Securitisation and the Role of the State in Delivering UK Cyber Security in a New-Medieval Cyberspace
Both the 2010 and the 2015 UK National Security Strategies identified threats from cyberspace as being among the most significant ‘Tier One’ threats to UK national security. These threats have been constructed as a threat to the state, a threat to the country’s Critical National Infrastructure (CNI), a threat to future economic success and a threat to businesses and individual citizens. As a result, the response to this threat has historically been seen as being a shared responsibility with most potential victims of cyber-attack responsible for their own security and the UK state agencies operating as a source of advice and guidance to promote best practice in the private sector. A range of government departments, including the Cabinet Office, MI5 and GCHQ among others, have been responsible for the government’s own cyber security. However, despite a budget allocation of £860 million for the 2010 – 2015 period, progress on reducing the frequency and cost of cyber-attacks was limited and the 2010 strategy for dealing with cyber security was widely seen as having failed.
This led to a new National Cyber Security Strategy (NCSS) in 2016 which indicated a significant change in approach, in particular with a more proactive role for the state through the formation of the National Cyber Security Centre (NCSC) and a £1.6 billion budget for cyber security between 2016 and 2021. However, cyber-attacks remain a significant issue for many organisations in both the public and private sector, and attacks such as the Wannacry ransomware/wiper attack, UK specific data breaches such as those witnessed in 2017 at Debenhams, Three, Wonga and ABTA, and breaches outside the UK that impacted UK citizens such as Equifax show that the frequency and impact of cyber security issues remain significant.
The underlying cause of the insecurity of cyberspace is reflected in the metaphorical description of cyberspace as the wild-west or as an ungoverned space. This is a result of cyberspace features such as anonymity, problematic attribution and a transnational nature that can limit the effective reach of law enforcement agencies. When these features are combined with an increasing societal and economic dependence on information technology and mediated data, this increases the potential economic impact of disruption to these systems and enhances the value of the data for both legitimate and illegitimate purposes.
This thesis argues that cyberspace is not ungoverned, and that it is more accurate to consider cyberspace to be a New Medieval environment with multiple overlapping authorities. In fact, cyberspace has always been far from ungoverned, it is just differently governed from a realspace Westphalian nation state system. The thesis also argues that cyberspace is currently experiencing a ‘Westphalian transformation’ with the UK state (among many others) engaged in a process designed to assert its authority and impose state primacy in cyberspace. This assertion of state authority is being driven by an identifiable process of securitisation in response to the constructed existential threat posed by unchecked cyberattacks by nation states and criminal enterprises. The Copenhagen School’s securitisation theory has been used to inform an original analysis of key speech acts by state securitising actors that has highlighted the key elements of the securitisation processes at work. This has clearly shown the development of the securitisation discourse, and the importance of referent objects and audience in asserting the state’s authority through the securitisation process.
Original qualitative data collected through in-depth semi-structured interviews with elite members of the cyber security community has provided insights to the key issues in cyber security that support the view that cyberspace has New Medieval characteristics. The interview data has also allowed for the construction of a view of the complexities of the cyberspace environment, the overlapping authorities of state and private sector organisations and some of the key issues that arise.
These issues are identified as being characteristic of a particularly complex form of policy problem referred to as a 'wicked problem'. An understanding of cyber security as a wicked problem may aid in the identification of possible future policy approaches for cyber security policy in the UK.