
    Addressing the challenges of modern DNS: a comprehensive tutorial

    The Domain Name System (DNS) plays a crucial role in connecting services and users on the Internet. Since its first specification, DNS has been extended in numerous documents to keep it fit for today's challenges and demands. And these challenges are many. Revelations of snooping on DNS traffic led to changes to guarantee the confidentiality of DNS queries. Attacks to forge DNS traffic led to changes to shore up the integrity of the DNS. Finally, denial-of-service attacks on DNS operations have led to new DNS operational architectures. All of these developments make DNS a highly interesting, but also highly challenging, research topic. This tutorial, aimed at graduate students and early-career researchers, provides an overview of the modern DNS, its ongoing development, and its open challenges. This tutorial has four major contributions. We first provide a comprehensive overview of the DNS protocol. Then, we explain how DNS is deployed in practice. This lays the foundation for the third contribution: a review of the biggest challenges the modern DNS faces today and how they can be addressed. These challenges are (i) protecting the confidentiality and (ii) guaranteeing the integrity of the information provided in the DNS, (iii) ensuring the availability of the DNS infrastructure, and (iv) detecting and preventing attacks that make use of the DNS. Last, we discuss which challenges remain open, pointing the reader towards new research areas.
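    To make the contrast between cleartext and confidential DNS concrete, here is a minimal sketch using the third-party dnspython library (an assumption; the tutorial itself is library-agnostic). It issues the same query once over plain UDP and once over DNS-over-TLS; the resolver addresses are illustrative public resolvers.

```python
# Minimal sketch: cleartext DNS vs. encrypted DNS-over-TLS (RFC 7858).
# Assumes the third-party dnspython library (pip install dnspython);
# the resolver addresses are illustrative public resolvers.
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")

# Traditional DNS: a cleartext UDP datagram to port 53 -- any on-path
# observer can read (and potentially forge) the query and response.
plain_response = dns.query.udp(query, "8.8.8.8", timeout=5)

# DNS-over-TLS: the same query inside a TLS session on port 853,
# hiding its contents from on-path observers.
encrypted_response = dns.query.tls(query, "1.1.1.1", timeout=5)

for response in (plain_response, encrypted_response):
    for rrset in response.answer:
        print(rrset)
```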

    The Impact of DNSSEC on the Internet Landscape

    In this dissertation, we investigate the security deficiencies of the Domain Name System (DNS) and assess the impact of the DNSSEC security extensions. DNS spoofing attacks divert an application to the wrong server, but are also used routinely for blocking access to websites. We provide evidence for systematic DNS spoofing in China and Iran with measurement-based analyses, which allow us to examine the DNS spoofing filters from vantage points outside of the affected networks. Third parties in other countries can be inadvertently affected by spoofing-based domain filtering, which could be averted with DNSSEC. The security goals of DNSSEC are data integrity and authenticity. A point solution called NSEC3 adds a privacy assertion to DNSSEC, which is supposed to prevent disclosure of the domain namespace as a whole. We present GPU-based attacks on the NSEC3 privacy assertion, which allow efficient recovery of the namespace contents. We demonstrate with active measurements that DNSSEC has found wide adoption after initial hesitation. On the server side, there are more than five million domains signed with DNSSEC. A portion of them are insecure due to insufficient cryptographic key lengths or broken due to maintenance failures. On the client side, we have observed a worldwide increase of DNSSEC validation over the last three years, though not necessarily on the last mile. Deployment of DNSSEC validation on end hosts is impaired by intermediate caching components, which degrade the availability of DNSSEC. However, intermediate caches contribute to the performance and scalability of the Domain Name System, as we show with trace-driven simulations. We suggest that validating end hosts utilize intermediate caches by default but fall back to autonomous name resolution in case of DNSSEC failures.
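    The NSEC3 attack surface mentioned above stems from the fact that NSEC3 hashes are cheap to compute offline. The following sketch implements the NSEC3 hash function from RFC 5155 (iterated SHA-1 over the wire-format name plus salt), which a dictionary attack evaluates in bulk against candidate names; the salt, iteration count, and candidate names below are illustrative, not taken from the dissertation.

```python
# Sketch of the RFC 5155 NSEC3 hash, the operation a dictionary attack
# evaluates in bulk (e.g., on a GPU) to recover hashed owner names.
# Salt, iteration count, and candidate names are illustrative.
import base64
import hashlib

def wire_format(name: str) -> bytes:
    """Canonical (lowercase) DNS wire format of a domain name."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # terminating root label

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    """H(x) = SHA1(x || salt), iterated; output base32hex (Python 3.10+)."""
    digest = hashlib.sha1(wire_format(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32hexencode(digest).decode("ascii")

# A dictionary attack hashes candidate names and compares the results
# against the hashed owner names appearing in a zone's NSEC3 records.
salt = bytes.fromhex("aabbccdd")
for candidate in ["www.example.com", "mail.example.com"]:
    print(candidate, nsec3_hash(candidate, salt, iterations=10))
```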

    Attacking and securing Network Time Protocol

    Network Time Protocol (NTP) is used to synchronize time between computer systems communicating over unreliable, variable-latency, and untrusted network paths. Time is critical for many applications; in particular, it is heavily utilized by cryptographic protocols. Despite its importance, the community still lacks visibility into the robustness of the NTP ecosystem itself, the integrity of the timing information transmitted by NTP, and the impact that any error in NTP might have upon the security of other protocols that rely on timing information. In this thesis, we seek to accomplish the following broad goals: 1. Demonstrate that the current design presents a security risk, by showing that network attackers can exploit NTP and then use it to attack other core Internet protocols that rely on time. 2. Improve NTP to make it more robust, and rigorously analyze the security of the improved protocol. 3. Establish formal and precise security requirements that should be satisfied by a network time-synchronization protocol, and prove that these are sufficient for the security of other protocols that rely on time. We take the following approach to achieve our goals incrementally. 1. We begin by (a) scrutinizing NTP's core protocol (RFC 5905) and (b) statically analyzing the code of its reference implementation to identify vulnerabilities in protocol design, ambiguities in specifications, and flaws in reference implementations. We then leverage these observations to show several off- and on-path denial-of-service and time-shifting attacks on NTP clients. We then show cache-flushing and cache-sticking attacks on DNS(SEC) that leverage NTP. We quantify the attack surface using Internet measurements, and suggest simple countermeasures that can improve the security of NTP and DNS(SEC). 2. Next, we move beyond identifying attacks and leverage ideas from the Universal Composability (UC) security framework to develop a cryptographic model for attacks on NTP's datagram protocol. We use this model to prove the security of a new backwards-compatible protocol that correctly synchronizes time in the face of both off- and on-path network attackers. 3. Next, we propose general security notions for network time-synchronization protocols within the UC framework and formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols (including but not limited to NTP), and how they can be used to assert security of time-reliant applications, specifically cryptographic certificates with revocation and expiration times. Our security framework allows for a clear and modular treatment of the use of time in security-sensitive systems. Our work makes the core NTP protocol and its implementations more robust and secure, thus improving the security of applications and protocols that rely on time.
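    For context, the on-wire arithmetic that time-shifting attacks target is small: from the four RFC 5905 timestamps, a client computes its clock offset and the round-trip delay. A worked sketch with illustrative timestamps:

```python
# Sketch of the RFC 5905 on-wire calculations an NTP client performs.
# T1: client transmit, T2: server receive, T3: server transmit,
# T4: client receive. A time-shifting attacker manipulates what the
# client believes about these values (e.g., via asymmetric delays).
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)         # round-trip network delay
    return offset, delay

# Illustrative timestamps (seconds): client is ~0.05 s behind server,
# with 0.01 s one-way latency and 0.001 s server processing time.
offset, delay = ntp_offset_delay(t1=1000.000, t2=1000.060,
                                 t3=1000.061, t4=1000.021)
print(f"offset={offset:+.3f}s delay={delay:.3f}s")  # +0.050s, 0.020s
```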

    Recent Trends on Privacy-Preserving Technologies under Standardization at the IETF

    End-users are concerned about protecting the privacy of the sensitive personal data generated while they work with information systems. This extends both to the data they actively provide, such as personal identification in exchange for products and services, and to related metadata, such as unnecessary access to their location. This is where privacy-preserving technologies come into play, and the Internet Engineering Task Force (IETF) plays a major role in incorporating such technologies at the fundamental level. Thus, this paper offers an overview of the privacy-preserving mechanisms for layer 3 (i.e., IP) and above that are currently under standardization at the IETF. This includes encrypted DNS at layer 5, classified as DNS-over-TLS (DoT), DNS-over-HTTPS (DoH), and DNS-over-QUIC (DoQ), where underlying technologies like QUIC belong to layer 4. Following that, we discuss the Privacy Pass protocol and its application in generating Private Access Tokens, as well as Passkeys to replace passwords for authentication at the application layer (i.e., on end-user devices). Lastly, to protect user privacy at the IP level, Private Relays and MASQUE are discussed. This paper aims to make designers, implementers, and users of the Internet aware of privacy-related design choices.
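    As a concrete taste of one of these mechanisms, the sketch below performs a DoH lookup against Cloudflare's public resolver using its JSON interface and the third-party requests library. The endpoint and resolver choice are illustrative; standard RFC 8484 DoH carries binary application/dns-message payloads rather than JSON.

```python
# Minimal DNS-over-HTTPS (DoH) lookup sketch using Cloudflare's public
# resolver and its JSON interface. Endpoint and domain are illustrative;
# RFC 8484 DoH proper uses the binary application/dns-message format.
import requests

response = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
response.raise_for_status()

for answer in response.json().get("Answer", []):
    # Each answer looks like {"name": ..., "type": 1, "TTL": ..., "data": ...}
    print(answer["name"], answer["TTL"], answer["data"])
```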

    Upgrading HTTPS in mid-air: An Empirical Study of Strict Transport Security and Key Pinning


    Scalable Techniques for Anomaly Detection

    Computer networks are constantly being attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-site Scripting (XSS). Such attacks have exploited either network protocol or end-host software vulnerabilities for perpetration. Current network traffic analysis techniques employed for detection and/or prevention of these anomalies suffer from significant delay or have only limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection. We propose using DNS analysis for detecting a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources. Our motivation additionally follows from the observation that almost any Internet activity (legitimate or otherwise) is marked by the use of DNS. We propose several techniques for DNS traffic analysis to distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks. First, we present MiND, a system to detect misdirected DNS packets arising due to poisoned name server records or due to local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets using an externally collected database of authoritative name servers for second- or third-level domains. We deploy this tool at the edge of a university campus network for evaluation. Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used for locating the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve the latency of detection. Alternatively, we propose a system which uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing botnet detection. We also present an approach which computes the reputation of domains in a bipartite graph of hosts within a network and the domains accessed by them. The inference technique utilizes belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded through a small fraction of domains found in blacklists and whitelists. An application of this technique on an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false positive rates.
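    To illustrate the intuition behind the entropy-based detector, here is a toy sketch of two of the metrics named above: Kullback-Leibler divergence of a group's character distribution from a benign baseline, and pairwise edit distance within the group. The domain lists are invented for illustration, not the dissertation's datasets.

```python
# Toy sketch of two domain-fluxing detection metrics: KL divergence of
# a group's character distribution from a benign baseline, and mean
# pairwise edit distance. Domain lists are illustrative only.
import math
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def char_dist(domains):
    counts = Counter(c for d in domains for c in d if c in ALPHABET)
    total = sum(counts.values())
    # Laplace smoothing keeps the KL divergence finite.
    return {c: (counts[c] + 1) / (total + len(ALPHABET)) for c in ALPHABET}

def kl_divergence(p, q):
    return sum(p[c] * math.log(p[c] / q[c]) for c in p)

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

benign = ["google", "facebook", "wikipedia", "amazon", "twitter"]
suspect = ["xk2xbq9f", "q8zj1mno", "p0v7trwa", "m3ynd6cu"]  # fluxing-like

print("KL(suspect || benign) =",
      round(kl_divergence(char_dist(suspect), char_dist(benign)), 3))
dists = [edit_distance(a, b) for a in suspect for b in suspect if a < b]
print("mean pairwise edit distance =", sum(dists) / len(dists))
```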

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabit/s range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing, and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are more suited towards high-bandwidth DDoS attacks than others. For example, using a local filtering appliance means that all the attack traffic will still pass through the owner's network. This inherently limits the maximum capacity of such a device to the bandwidth that is available. BGP blackholing does not have such limitations, but can, as a side effect, cause service disruptions to end-users. A different strategy, which has not attracted much attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate their service across different physical locations, while keeping that service addressable with just a single IP address. It relies on BGP to effectively load-balance users. In practice, it is combined with other mitigation strategies to allow those to scale up. Operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two different case studies. In the first, we focus on an anycast service during normal operations, namely the Google Public DNS, and show that the routing within this service is far from optimal, for example in terms of distance between the client and the server. In the second case study, we observe the root DNS while it is under attack, and show that even though in aggregate the bandwidth available to this service exceeds the attack we observed, clients still experienced service degradation. This degradation was caused by the fact that some sites of the anycast service received a much higher share of traffic than others. In order for operators to improve their anycast networks, and optimize them in terms of resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in terms of distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of client to Point-of-Presence (PoP) mapping, i.e. the anycast catchment. This method does not rely on external vantage points, is free of bias, and offers a much higher resolution than any previous method. We validated this methodology by deploying it on a locally developed testbed, as well as on the B root DNS. We showed that the increased resolution of this methodology improved our ability to assess the impact of changes in the network configuration, when compared to previous methodologies. As final validation, we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 global Points-of-Presence and an aggregate bandwidth of 30 Tbit/s. Through three real-world use cases, we demonstrate the benefits of our methodology: Firstly, we show that changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements. Secondly, we show that Verfploeter largely reinstates the ping to its former glory, showing how it can be used to troubleshoot network connectivity issues in an anycast context. Thirdly, we demonstrate how accurate anycast catchment maps offer operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make datasets collected over the course of the research in this thesis available as open access data. The two best (open) dataset awards received for these datasets confirm that they are a valued contribution. In summary, we have investigated two large anycast services and have shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and is able to obtain highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering, and mitigating DDoS attacks.
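    To illustrate the kind of catchment analysis this methodology enables, a toy sketch in the spirit of Verfploeter: each reply to a probe sent from the anycast prefix is recorded at the PoP that captured it, and aggregating these observations yields per-PoP load shares. The prefixes, PoP names, and imbalance threshold below are invented for illustration.

```python
# Toy sketch of anycast catchment analysis: aggregate (responder
# prefix -> receiving PoP) observations into per-PoP load shares and
# flag imbalance. All data and the threshold are illustrative only.
from collections import Counter

# Replies captured per PoP for probes answered by these prefixes.
observations = [
    ("192.0.2.0/25", "AMS"), ("192.0.2.128/25", "AMS"),
    ("198.51.100.0/25", "AMS"), ("198.51.100.128/25", "AMS"),
    ("203.0.113.0/25", "IAD"), ("203.0.113.128/25", "SYD"),
]

catchment = Counter(pop for _, pop in observations)
total = sum(catchment.values())

for pop, count in catchment.most_common():
    share = count / total
    flag = "  <-- absorbs most traffic under attack" if share > 0.5 else ""
    print(f"{pop}: {count} prefixes ({share:.0%}){flag}")
```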