45 research outputs found

    Securing The Root: A Proposal For Distributing Signing Authority

    Management of the Domain Name System (DNS) root zone file is a uniquely global policy problem. For the Internet to connect everyone, the root must be coordinated and compatible. While authority over the legacy root zone file has at times been contentious and divisive, everyone agrees that the Internet should be made more secure. A newly standardized protocol, the DNS Security Extensions (DNSSEC), would make the Internet's infrastructure more secure. To fully implement DNSSEC, the procedures for managing the DNS root must be revised. Therein lies an opportunity: in revising the root zone management procedures, we can develop a new solution that diminishes the impact of the legacy monopoly held by the U.S. government and avoids another contentious debate over unilateral U.S. control. In this paper we describe the outlines of a new system for the management of a DNSSEC-enabled root. Unlike another recently suggested method, our proposal distributes authority over securing the root while avoiding the risks and pitfalls of an intergovernmental power-sharing scheme.
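
    The abstract does not spell out the paper's concrete mechanism, but one standard cryptographic building block for distributing signing authority of this kind is threshold secret sharing, in which no single party holds the complete key. Below is a purely illustrative Python sketch using Shamir secret sharing; the parameters (5 parties, threshold 3) and the prime field are arbitrary choices for illustration, not anything specified by the paper.

```python
# Hypothetical sketch (not the paper's concrete scheme): distributing control
# of a signing key via Shamir secret sharing, so that any k of n parties
# must cooperate to reconstruct (and thus use) the key.
import secrets

PRIME = 2**127 - 1  # field modulus; a real deployment would use a proper group

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = split(key, n=5, k=3)
assert reconstruct(shares[:3]) == key       # any 3 of 5 shares suffice
```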

    Search for Trust: An Analysis and Comparison of CA System Alternatives and Enhancements

    The security of the Public Key Infrastructure has been reevaluated in response to Certification Authority (CA) compromises that resulted in the circulation of fraudulent certificates. These rogue certificates can be, and have been, used to execute Man-in-the-Middle (MITM) attacks and gain access to users’ sensitive information. In the wake of these events, there has been a call for change, to the extent of either securing the current system or replacing it altogether with an alternative design. This paper explores the following proposals, which have been put forth to replace or improve the CA system with the goal of aiding in the prevention and detection of MITM attacks and improving the trust infrastructure: Convergence, Perspectives, Mutually Endorsed Certification Authority Infrastructure (MECAI), DNS-Based Authentication of Named Entities (DANE), DNS Certification Authority Authorization (CAA) Resource Records, Public Key Pinning, Sovereign Keys, and Certificate Transparency. Brief descriptions of each proposal are provided, along with the pros and cons of each system. A new metric is then applied which, according to a set of criteria, ranks each proposal and gives readers an idea of the costs and benefits of implementing the proposed system and the potential strengths and weaknesses of the design. We conclude with recommendations for further research and remark on the proposals with the most potential going forward.
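
    As a concrete illustration of two of the DNS-based proposals surveyed (DANE and CAA), the Python sketch below uses the dnspython library to fetch the TLSA and CAA records a client or CA would consult; the domain name is a placeholder.

```python
# Illustrative sketch (domain name is a placeholder): looking up the DANE TLSA
# and CAA records that several of the surveyed proposals build on, using the
# dnspython library (pip install dnspython).
import dns.resolver

domain = "example.com"  # placeholder

# CAA: which CAs are authorized to issue certificates for this domain?
try:
    for rr in dns.resolver.resolve(domain, "CAA"):
        print(f"CAA flags={rr.flags} tag={rr.tag.decode()} value={rr.value.decode()}")
except dns.resolver.NoAnswer:
    print("no CAA record: any CA may issue")

# DANE: TLSA record binding the TLS certificate for port 443 to DNS(SEC)
try:
    for rr in dns.resolver.resolve(f"_443._tcp.{domain}", "TLSA"):
        print(f"TLSA usage={rr.usage} selector={rr.selector} "
              f"mtype={rr.mtype} cert={rr.cert.hex()}")
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print("no TLSA record published")
```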

    TLS/PKI Challenges and certificate pinning techniques for IoT and M2M secure communications

    Transport Layer Security (TLS) is becoming the de facto standard for providing end-to-end security on the current Internet. IoT and M2M scenarios are no exception, since TLS is being adopted there as well. TLS's ability to negotiate any security parameter, together with its flexibility and extensibility, is responsible for its wide adoption, but also for several attacks. Moreover, since it relies on a Public Key Infrastructure (PKI) for authentication, it is also affected by PKI problems. Considering the advent of IoT/M2M scenarios and their particularities, it is necessary to take a closer look at the history of TLS to evaluate the potential challenges of using TLS and PKI in these scenarios. Accordingly, this article provides an in-depth review of several security aspects of TLS and PKI, with a particular focus on current certificate pinning solutions, in order to illustrate the potential problems that should be addressed.
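
    As an illustration of the certificate pinning techniques the article focuses on, here is a minimal Python sketch of leaf-certificate pinning; the endpoint and the pinned hash are placeholders, and a real IoT deployment would also need a pin-rotation strategy.

```python
# Minimal sketch of leaf-certificate pinning. The host and pin are
# placeholders; a real IoT client would provision PINNED_SHA256 at
# manufacture or on first use.
import hashlib
import socket
import ssl

HOST = "device-backend.example.com"        # placeholder endpoint
PINNED_SHA256 = bytes.fromhex("00" * 32)   # placeholder pin value

ctx = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)   # leaf certificate, DER bytes
        fingerprint = hashlib.sha256(der).digest()
        if fingerprint != PINNED_SHA256:
            raise ssl.SSLError("certificate pin mismatch; possible MITM")
        # pin matched: proceed with application traffic
```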

    Making DNSSEC Future Proof

    Deploying DNSSEC in islands of security

    The Domain Name System (DNS), a name resolution protocol, is one of the more vulnerable network protocols and has been subjected to many security attacks, such as cache poisoning, denial of service, and the 'Kaminsky' spoofing attack. Security was not incorporated into the original design of DNS. The DNS Security Extensions (DNSSEC) secure the name resolution process using public-key cryptosystems. Although DNSSEC is backward compatible with unsecured zones, it only offers security to clients when they communicate with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top-Level Domains (TLDs) have deployed DNSSEC, which inherently makes it difficult for their sub-domains to implement the security extensions. This study analyses mechanisms that domains in islands of security can use to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar cannot support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers, and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers; however, they do not provide a way of showing the user whether a domain name has been correctly validated. Based on the results of the tests conducted, it is recommended that local validators be used together with browser validators for visibility and improved security. On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organizations in islands of security, such as most countries in Africa, where only two country-code Top-Level Domains (ccTLDs) have deployed DNSSEC. This research recommends that corporates use DLV to provide DNS security to both internal and external users accessing their web-based services.
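
    A small Python sketch (using dnspython) of the client-side check that the local-validator setups in this study rely on: send a query with the DNSSEC-OK bit set and inspect the AD (Authenticated Data) flag set by a validating resolver. The resolver address and domain name are placeholders.

```python
# Sketch of a DNSSEC validation check: query with the DNSSEC-OK (DO) bit set
# and inspect the AD flag in the response. Placeholders throughout.
import dns.flags
import dns.message
import dns.query

RESOLVER = "127.0.0.1"      # placeholder: a local validating resolver
NAME = "example.com"        # placeholder signed zone

query = dns.message.make_query(NAME, "A", want_dnssec=True)
response = dns.query.udp(query, RESOLVER, timeout=3)

if response.flags & dns.flags.AD:
    print(f"{NAME}: answer validated by the resolver (AD bit set)")
else:
    print(f"{NAME}: answer NOT validated; treat with caution")
```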

    Lowering Legal Barriers to RPKI Adoption

    Across the Internet, mistaken and malicious routing announcements impose significant costs on users and network operators. To make routing announcements more reliable and secure, Internet coordination bodies have encouraged network operators to adopt the Resource Public Key Infrastructure (“RPKI”) framework. Despite this encouragement, RPKI’s adoption rates are low, especially in North America. This report presents the results of a year-long investigation into the hypothesis, widespread within the network operator community, that legal issues pose barriers to RPKI adoption and are one cause of the disparities between North America and other regions of the world. On the basis of interviews and analysis of the legal framework governing RPKI, the report evaluates the issues raised by community members and proposes a number of strategies to reduce or circumvent the barriers that are material. The report also describes substantial action taken this year by the American Registry for Internet Numbers (“ARIN”) and other private organizations in light of public dialogue about RPKI.
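
    For readers unfamiliar with what RPKI actually does, the sketch below illustrates the route origin validation logic it enables (per RFC 6811), classifying a BGP announcement as valid, invalid, or not-found against a set of ROAs; the ROA data here is invented for illustration.

```python
# Hedged sketch of RPKI route origin validation (RFC 6811 logic); the ROA
# entries are invented for illustration.
import ipaddress

# (prefix, max length, authorized ASN) tuples, as would come from validated ROAs
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),    # example values
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_asn: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if net.subnet_of(roa_net):            # a ROA covers this announcement
            covered = True
            if net.prefixlen <= max_len and origin_asn == asn:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))    # valid
print(validate("192.0.2.0/24", 64999))    # invalid: wrong origin AS
print(validate("203.0.113.0/24", 64500))  # not-found: no covering ROA
```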

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabit-per-second range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing, and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are better suited to high-bandwidth DDoS attacks than others. For example, using a local filtering appliance means that all the attack traffic still passes through the owner's network, which inherently limits the maximum capacity of such a device to the bandwidth that is available. BGP blackholing does not have such limitations but can, as a side effect, cause service disruptions for end users.

    A different strategy, which has not attracted much attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate their service across different physical locations while keeping that service addressable with just a single IP address. It relies on BGP to load-balance users, and in practice it is combined with other mitigation strategies to allow those to scale up: operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two case studies. In the first, we focus on an anycast service during normal operations, namely Google Public DNS, and show that the routing within this service is far from optimal, for example in terms of distance between client and server. In the second case study, we observe the root DNS while it is under attack and show that, even though in aggregate the bandwidth available to this service exceeded the attack we observed, clients still experienced service degradation. This degradation occurred because some sites of the anycast service received a much higher share of the traffic than others.

    For operators to improve their anycast networks and optimize them for resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in their distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of the client-to-Point-of-Presence (PoP) mapping, i.e. the anycast catchment. This method does not rely on external vantage points, is free of bias, and offers a much higher resolution than any previous method. We validated the methodology by deploying it on a locally developed testbed as well as on the B root DNS, and showed that its increased resolution improved our ability to assess the impact of changes in the network configuration compared to previous methodologies. As a final validation, we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 Points-of-Presence worldwide and an aggregate bandwidth of 30 Tbit/s.

    Through three real-world use cases, we demonstrate the benefits of our methodology. Firstly, we show that the changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements. Secondly, we show that Verfploeter largely reinstates the ping to its former glory, demonstrating how it can be used to troubleshoot network connectivity issues in an anycast context. Thirdly, we demonstrate how accurate anycast catchment maps offer operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make the datasets collected over the course of this research available as open-access data; the two best (open) dataset awards these datasets received confirm that they are a valued contribution.

    In summary, we have investigated two large anycast services and shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and obtains highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering, and mitigating DDoS attacks.
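
    As a rough illustration of the catchment mapping idea behind Verfploeter (not the thesis's actual implementation), the Python sketch below derives a catchment map and a simple spoofed-traffic check from invented probe-reply records.

```python
# Sketch (with invented sample data) of catchment mapping: probes are sent
# from the anycast prefix, and the PoP at which each reply arrives reveals
# the client's catchment.
from collections import Counter

# (responding client /24 prefix, PoP that captured the reply) -- placeholders
replies = [
    ("192.0.2.0/24", "AMS"),
    ("198.51.100.0/24", "AMS"),
    ("203.0.113.0/24", "LHR"),
]

# Catchment map: which PoP serves each client prefix
catchment = {prefix: pop for prefix, pop in replies}

# Aggregate load distribution across PoPs
share = Counter(pop for _, pop in replies)
total = sum(share.values())
for pop, n in share.items():
    print(f"{pop}: {n / total:.0%} of measured prefixes")

# Spoofing filter: traffic arriving at a PoP other than the one the catchment
# map predicts for its source prefix is likely spoofed.
def likely_spoofed(src_prefix: str, arriving_pop: str) -> bool:
    expected = catchment.get(src_prefix)
    return expected is not None and expected != arriving_pop

print(likely_spoofed("192.0.2.0/24", "LHR"))  # True: expected at AMS
```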

    Attacking and securing Network Time Protocol

    Network Time Protocol (NTP) is used to synchronize time between computer systems communicating over unreliable, variable-latency, and untrusted network paths. Time is critical for many applications; in particular, it is heavily utilized by cryptographic protocols. Despite its importance, the community still lacks visibility into the robustness of the NTP ecosystem itself, the integrity of the timing information transmitted by NTP, and the impact that any error in NTP might have upon the security of other protocols that rely on timing information. In this thesis, we seek to accomplish the following broad goals:

    1. Demonstrate that the current design presents a security risk, by showing that network attackers can exploit NTP and then use it to attack other core Internet protocols that rely on time.
    2. Improve NTP to make it more robust, and rigorously analyze the security of the improved protocol.
    3. Establish formal and precise security requirements that should be satisfied by a network time-synchronization protocol, and prove that these are sufficient for the security of other protocols that rely on time.

    We take the following approach to achieve our goals incrementally:

    1. We begin by (a) scrutinizing NTP's core protocol (RFC 5905) and (b) statically analyzing the code of its reference implementation, to identify vulnerabilities in the protocol design, ambiguities in the specifications, and flaws in the reference implementation. We then leverage these observations to show several off- and on-path denial-of-service and time-shifting attacks on NTP clients, as well as cache-flushing and cache-sticking attacks on DNS(SEC) that leverage NTP. We quantify the attack surface using Internet measurements and suggest simple countermeasures that can improve the security of NTP and DNS(SEC).
    2. Next, we move beyond identifying attacks and leverage ideas from the Universal Composability (UC) security framework to develop a cryptographic model for attacks on NTP's datagram protocol. We use this model to prove the security of a new backwards-compatible protocol that correctly synchronizes time in the face of both off- and on-path network attackers.
    3. Finally, we propose general security notions for network time-synchronization protocols within the UC framework and formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols (including but not limited to NTP), and how they can be used to assert the security of time-reliant applications, specifically cryptographic certificates with revocation and expiration times.

    Our security framework allows for a clear and modular treatment of the use of time in security-sensitive systems. Our work makes the core NTP protocol and its implementations more robust and secure, thus improving the security of applications and protocols that rely on time.
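
    As background for the attacks and models discussed, the sketch below shows a minimal SNTP-style (mode 3) client and the classic RFC 5905 offset and delay equations; the server name is a placeholder and error handling is omitted.

```python
# Minimal SNTP (mode 3) client sketch illustrating the timestamps NTP security
# analyses revolve around. NTP_DELTA converts the 1900-based NTP era to the
# Unix epoch.
import socket
import struct
import time

SERVER = "pool.ntp.org"     # placeholder NTP server
NTP_DELTA = 2208988800      # seconds between 1900-01-01 and 1970-01-01

packet = bytearray(48)
packet[0] = 0x23            # LI=0, version 4, mode 3 (client)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    t1 = time.time()                        # origin timestamp (client send)
    sock.sendto(packet, (SERVER, 123))
    data, _ = sock.recvfrom(512)
    t4 = time.time()                        # destination timestamp (client recv)

# Server receive (t2) and transmit (t3) timestamps, at byte offsets 32 and 40
def ts(buf, off):
    sec, frac = struct.unpack("!II", buf[off:off + 8])
    return sec - NTP_DELTA + frac / 2**32

t2, t3 = ts(data, 32), ts(data, 40)

# The classic NTP clock equations (RFC 5905):
offset = ((t2 - t1) + (t3 - t4)) / 2        # estimated clock offset
delay = (t4 - t1) - (t3 - t2)               # round-trip network delay
print(f"offset: {offset:+.4f} s, delay: {delay:.4f} s")
```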