The Impact of IPv6 on Penetration Testing
In this paper we discuss the impact that the use of IPv6 has on remote penetration testing of servers and web applications. Several modifications to the penetration testing process are proposed to accommodate IPv6. Among these modifications are ways of performing fragmentation attacks, host discovery, and brute-force protection. We also propose new checks for IPv6-specific vulnerabilities, such as bypassing firewalls using extension headers and reaching internal hosts through available transition mechanisms. The changes to the penetration testing process proposed in this paper can be used by security companies to make their penetration testing process applicable to IPv6 targets.
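The host-discovery changes alluded to above can be illustrated with a small sketch (the function and conventions below are illustrative assumptions, not the paper's method): since brute-forcing a /64 is infeasible, testers probe addresses that follow common assignment conventions, such as low-byte addresses and modified EUI-64 addresses derived from known MAC addresses.

```python
import ipaddress

def candidate_hosts(prefix: str, macs=(), limit_low=5):
    """Generate likely-active IPv6 addresses inside a /64 for host
    discovery: low-byte addresses (::1, ::2, ...) and modified
    EUI-64 addresses derived from known MAC addresses."""
    net = ipaddress.IPv6Network(prefix)
    candidates = []
    # Low-byte convention: administrators often assign ::1, ::2, ...
    for i in range(1, limit_low + 1):
        candidates.append(net.network_address + i)
    # Modified EUI-64: flip the universal/local bit of the MAC's
    # first octet and insert ff:fe in the middle.
    for mac in macs:
        b = bytes(int(x, 16) for x in mac.split(":"))
        eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
        iid = int.from_bytes(eui, "big")
        candidates.append(net.network_address + iid)
    return [str(a) for a in candidates]
```

A scanner would then probe only these candidates instead of iterating the 2^64 addresses of the subnet.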
A Brave New World: Studies on the Deployment and Security of the Emerging IPv6 Internet.
Recent IPv4 address exhaustion events are ushering in a new era of
rapid transition to the next generation Internet protocol---IPv6. Via
Internet-scale experiments and data analysis, this dissertation
characterizes the adoption and security of the emerging IPv6 network.
The work includes three studies, each the largest of its kind,
examining various facets of the new network protocol's deployment,
routing maturity, and security.
The first study provides an analysis of ten years of IPv6 deployment
data, including quantifying twelve metrics across ten global-scale
datasets, and affording a holistic understanding of the state and
recent progress of the IPv6 transition. Based on cross-dataset
analysis of relative global adoption rates and across features of the
protocol, we find evidence of a marked shift in the pace and nature
of adoption in recent years and observe that higher-level metrics of
adoption lag lower-level metrics.
Next, a network telescope study covering the IPv6 address space of the
majority of allocated networks provides insight into the early state
of IPv6 routing. Our analyses suggest that routing of average IPv6
prefixes is less stable than that of IPv4. This instability is
responsible for the majority of the captured misdirected IPv6 traffic.
Observed dark (unallocated destination) IPv6 traffic shows substantial
differences from the unwanted traffic seen in IPv4---in both character
and scale.
Finally, a third study examines the state of IPv6 network security
policy. We tested a sample of 25 thousand routers and 520 thousand
servers against sets of TCP and UDP ports commonly targeted by
attackers. We found systemic discrepancies between intended
security policy---as codified in IPv4---and deployed IPv6 policy.
Such lapses in ensuring that the IPv6 network is properly managed and
secured are leaving thousands of important devices more vulnerable to
attack than before IPv6 was enabled.
Taken together, findings from our three studies suggest that IPv6 has
reached a level and pace of adoption, and shows patterns of use, that
indicate serious production use of the protocol on a broad scale.
However, weaker IPv6 routing and security are evident, and these are
leaving early dual-stack networks less robust than the IPv4 networks
they augment.
PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120689/1/jczyz_1.pd
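The third study's comparison between intended (IPv4) and deployed (IPv6) policy can be sketched as a per-host set difference; the data and function below are an illustration, not the dissertation's actual tooling.

```python
def audit(hosts: dict) -> dict:
    """Map host -> ports open over IPv6 but not over IPv4.

    A non-empty difference is a policy gap: the port is filtered in
    the intended (IPv4) policy yet reachable over IPv6."""
    return {h: sets["v6"] - sets["v4"]
            for h, sets in hosts.items()
            if sets["v6"] - sets["v4"]}

# Hypothetical scan results for two dual-stacked hosts.
hosts = {
    "router1": {"v4": {22, 443}, "v6": {22, 23, 443}},  # telnet leaks on v6
    "server1": {"v4": {80, 443}, "v6": {80, 443}},      # consistent policy
}
gaps = audit(hosts)
```

Hosts absent from the result have consistent policy across both address families.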
The Closed Resolver Project: Measuring the Deployment of Source Address Validation of Inbound Traffic
Source Address Validation (SAV) is a standard aimed at discarding packets
with spoofed source IP addresses. The absence of SAV for outgoing traffic has
long been recognized as a root cause of Distributed Denial-of-Service (DDoS)
attacks and has received widespread attention. While less obvious, the absence of inbound
filtering enables an attacker to appear as an internal host of a network and
may reveal valuable information about the network infrastructure. Inbound IP
spoofing may amplify other attack vectors such as DNS cache poisoning or the
recently discovered NXNSAttack. In this paper, we present the preliminary
results of the Closed Resolver Project that aims at mitigating the problem of
inbound IP spoofing. We perform the first Internet-wide active measurement
study to enumerate networks that filter or do not filter incoming packets by
their source address, for both the IPv4 and IPv6 address spaces. To achieve
this, we identify closed and open DNS resolvers that accept spoofed requests
coming from the outside of their network. The proposed method provides the most
complete picture of inbound SAV deployment by network providers. Our
measurements cover over 55% of IPv4 and 27% of IPv6 Autonomous Systems (ASes) and
reveal that the great majority of them are fully or partially vulnerable to
inbound spoofing. By identifying dual-stacked DNS resolvers, we additionally
show that inbound filtering is less often deployed for IPv6 than it is for
IPv4. Overall, we discover 13.9 K IPv6 open resolvers that can be exploited for
amplification DDoS attacks, 13 times more than found in previous work. Furthermore, we
uncover 4.25 M IPv4 and 103 K IPv6 vulnerable closed resolvers that
could only be detected thanks to our spoofing technique, and that pose a
significant threat when combined with the NXNSAttack.
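The probe underlying such a measurement can be sketched as a minimal DNS query which, when carried in a packet whose source address is spoofed to look internal to the target network, reveals whether that network filters inbound traffic by source. The serialization below is a self-contained sketch of the DNS wire format (RFC 1035), not the project's measurement code.

```python
import struct

def build_dns_query(qname: str, txid: int = 0x1234) -> bytes:
    """Serialize a minimal DNS query: A record, recursion desired,
    one question, no other sections."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qsection = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.split(".")
    ) + b"\x00"
    qsection += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + qsection
```

Sending such a payload from outside with a spoofed internal source and observing whether a resolver answers is, in essence, the inbound-SAV test.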
Passive Observations of a Large DNS Service: 2.5 Years in the Life of Google
In 2009 Google launched its Public DNS service, with its characteristic IP address 8.8.8.8. Since then, this service has grown to be the largest and most well-known DNS service in existence. The popularity of public DNS services has been disruptive for Content Delivery Networks (CDNs). CDNs rely on IP information to geo-locate clients. This no longer works in the presence of public resolvers, which led to the introduction of the EDNS0 Client Subnet (ECS) extension. ECS allows resolvers to reveal part of a client's IP address to authoritative name servers and helps CDNs pinpoint client origin. A useful side effect of ECS is that it can be used to study the workings of public DNS resolvers. In this paper, we leverage this side effect of ECS to study Google Public DNS (GPDNS). From a dataset of 3.7 billion DNS queries spanning 2.5 years, we extract ECS information and perform a longitudinal analysis of which clients are served from which Point-of-Presence (PoP). Our study focuses on two aspects of GPDNS. First, we show that while GPDNS has PoPs in many countries, traffic is frequently routed out of country, even when this is not necessary. Often this reduces performance and, perhaps more importantly, exposes DNS requests to state-level surveillance. Second, we study how GPDNS is used by clients. We show that end-users switch to GPDNS en masse when their ISP's DNS service is unresponsive, and do not switch back. We also find that many e-mail providers configure GPDNS as the resolver for their servers. This raises serious privacy concerns, as DNS queries from mail servers reveal information about the hosts they exchange mail with. Because of GPDNS's use of ECS, this sensitive information is not only revealed to Google, but also to any operator of an authoritative name server that receives ECS-enabled queries from GPDNS during the lookup process.
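The ECS mechanism the study leverages can be illustrated by encoding the option as specified in RFC 7871; the resolver truncates the client address to a source prefix length and forwards only those bytes. This is a sketch of the wire format, not code from the paper.

```python
import ipaddress
import struct

def ecs_option(client_ip: str, source_prefix_len: int) -> bytes:
    """Encode an EDNS0 Client Subnet option (RFC 7871).

    Only ceil(source_prefix_len / 8) bytes of the client address are
    included, which is how ECS reveals *part* of the client's IP."""
    ip = ipaddress.ip_address(client_ip)
    family = 1 if ip.version == 4 else 2  # IANA address family
    nbytes = (source_prefix_len + 7) // 8
    addr = ip.packed[:nbytes]
    # FAMILY, SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH (0 in queries).
    data = struct.pack(">HBB", family, source_prefix_len, 0) + addr
    # OPTION-CODE 8 (ECS), OPTION-LENGTH, then the payload.
    return struct.pack(">HH", 8, len(data)) + data
```

A /24 source prefix, the common choice for IPv4, thus leaks only the first three bytes of the client address to the authoritative server.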
Identifying and Preventing Large-scale Internet Abuse
The widespread access to the Internet and the ubiquity of web-based services make it easy to communicate and interact globally. Unfortunately, the software and protocols implementing the functionality of these services are often vulnerable to attacks. In turn, an attacker can exploit them to compromise, take over, and abuse the services for her own nefarious purposes. In this dissertation, we aim to better understand such attacks, and we develop methods and algorithms to detect and prevent them, which we evaluate on large-scale datasets.
First, we detail Meerkat, a system to detect a visible way in which websites are being compromised, namely website defacements. Defacements can inflict significant harm on the websites' operators through the loss of sales, the loss in reputation, or because of legal ramifications. Meerkat requires no prior knowledge about the websites' content or their structure, but only the Uniform Resource Identifier (URI) at which they can be reached. By design, Meerkat mimics how a human analyst decides if a website was defaced when viewing it in a browser, by using computer vision techniques. Thus, it tackles the problem of detecting website defacements through their attention-seeking nature, their goal and purpose, rather than code or data artifacts that they might exhibit. In turn, it is much harder for an attacker to evade our system, as she needs to change her modus operandi. When Meerkat detects a website as defaced, the website can automatically be put into maintenance mode or restored to a known good state.
An attacker, however, is not limited to abusing a compromised website in a way that is visible to the website's visitors. Instead, she can misuse the website to infect its visitors with malicious software (malware). Although malware is well studied, identifying malicious websites remains a major challenge in today's Internet.
Second, we introduce Delta, a novel, purely static analysis approach that extracts change-related features between two versions of the same website, uses machine learning to derive a model of website changes, detects if an introduced change was malicious or benign, identifies the underlying infection vector based on clustering, and generates an identifying signature. Furthermore, due to the way Delta clusters campaigns, it can uncover infection campaigns that leverage specific vulnerable applications as a distribution channel, and it can greatly reduce the human labor necessary to uncover the application responsible for a service's compromise.
Third, we investigate the practicality and impact of domain takeover attacks, which an attacker can similarly abuse to spread misinformation or malware, and we present a defense that renders such takeover attacks toothless. Specifically, the new elasticity of Internet resources, in particular Internet Protocol (IP) addresses in the context of Infrastructure-as-a-Service cloud service providers, combined with previously made protocol assumptions, can lead to security issues. In Cloud Strife, we show that this dynamic component, paired with recent developments in trust-based ecosystems (e.g., Transport Layer Security (TLS) certificates), creates previously unknown attack vectors. For example, a substantial number of stale Domain Name System (DNS) records point to readily available IP addresses in clouds, yet clients still actively attempt to access them. Often, these records belong to discontinued services that were previously hosted in the cloud. We demonstrate that it is practical, time-efficient, and cost-efficient for attackers to allocate the IP addresses to which stale DNS records point.
Further considering the ubiquity of domain validation in trust ecosystems, an attacker can impersonate the service by obtaining and using a valid certificate that is trusted by all major operating systems and browsers, which severely increases the attacker's capabilities. The attacker can then also exploit residual trust in the domain name for phishing, receiving and sending emails, or possibly distributing code to clients that load remote code from the domain (e.g., loading of native code by mobile apps, or JavaScript libraries by websites). To prevent such attacks, we introduce a new authentication method for trust-based domain validation that mitigates staleness issues without incurring additional effort for the certificate requester, by incorporating existing trust into the validation process.
Finally, the analyses of Delta, Meerkat, and Cloud Strife have made use of large-scale measurements to assess our approaches' impact and viability. Indeed, security research in general has made extensive use of exhaustive Internet-wide scans over recent years, as they can provide significant insights into the state of security of the Internet (e.g., whether classes of devices are behaving maliciously, or whether they might be insecure and could turn malicious in an instant). However, the address space of the Internet's core addressing protocol (Internet Protocol version 4; IPv4) is exhausted, and a migration to its successor (Internet Protocol version 6; IPv6), the only accepted long-term solution, is inevitable. In turn, to better understand the security of devices connected to the Internet, in particular Internet of Things devices, it is imperative to include IPv6 addresses in security evaluations and scans. Unfortunately, it is practically infeasible to iterate through the entire IPv6 address space, as it is 2^96 times larger than the IPv4 address space.
Without enumerating hosts prior to scanning, we will be unable to retain visibility into the overall security of Internet-connected devices in the future, and we will be unable to detect and prevent their abuse or compromise. To mitigate this blind spot, we introduce a novel technique to enumerate part of the IPv6 address space by walking DNSSEC-signed IPv6 reverse zones. We show (i) that, contrary to common belief, enumerating active IPv6 hosts is practical without a preferential network position, (ii) that the security of active IPv6 hosts currently still lags behind the security state of IPv4 hosts, and (iii) that unintended default IPv6 connectivity is a major security issue.
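The reverse zones being walked consist of the ip6.arpa names that DNS derives from addresses: one label per nibble, least-significant first. A minimal sketch of that mapping, using Python's standard library rather than the dissertation's enumeration code:

```python
import ipaddress

def reverse_name(addr: str) -> str:
    """Map an IPv6 address to its ip6.arpa PTR name. Walking the
    NSEC chain of a DNSSEC-signed reverse zone enumerates exactly
    these names, and each name decodes back to a live address."""
    return ipaddress.IPv6Address(addr).reverse_pointer
```

Because every nibble becomes a label, a signed reverse zone leaks its full address inventory to anyone willing to walk its NSEC records.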
DNS Lame delegations: A case-study of public reverse DNS records in the African Region
The DNS, as one of the oldest components of the modern Internet, has been studied many times. Operational issues such as misconfigured name servers are known to affect the responsiveness of the DNS service, which can lead to delayed responses or failed queries. One such misconfiguration is lame delegation. This article explains how lame delegation can be detected and provides guidance to the African Internet community as to whether a policy on lame reverse DNS should be enforced. It also gives an overview of the degree of lameness of the AFRINIC reverse domains, where it was found that 45% of all reverse domains are lame.
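The detection described above can be sketched as a simple classifier over per-name-server query results; the response fields below are assumptions for illustration, not the article's measurement pipeline.

```python
from dataclasses import dataclass

@dataclass
class NSResponse:
    """Result of querying one delegated name server for the zone's
    SOA record (hypothetical fields for this sketch)."""
    answered: bool       # did the server respond at all?
    authoritative: bool  # was the AA flag set in the response?

def classify_delegation(responses: list) -> str:
    """A delegation is lame when no listed name server answers
    authoritatively, and partially lame when only some do."""
    ok = sum(1 for r in responses if r.answered and r.authoritative)
    if ok == 0:
        return "lame"
    if ok < len(responses):
        return "partially lame"
    return "healthy"
```

Running this check across all reverse domains of a registry yields the kind of lameness statistics the article reports.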
Using Configuration Management Data to Manage Network Infrastructure
Configuration management software running on nodes solves problems such as configuration drift on the nodes themselves, but the necessary node configuration data can also be utilized in managing network infrastructure, for example to reduce configuration errors by facilitating node life-cycle management. Many configuration management systems depend on a working network, but we can use their data to create large parts of the network infrastructure configuration itself: node data from the configuration management system can generate configuration before the nodes themselves are provisioned, and obsolete configuration can be removed as nodes are decommissioned.
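A minimal sketch of the idea, with a hypothetical node schema: DNS records are rendered from the configuration management inventory, so records appear before a node is provisioned and vanish when it is decommissioned.

```python
def render_zone(nodes: list) -> list:
    """Render DNS A records from a configuration-management node
    inventory, skipping nodes marked as decommissioned."""
    return [f"{n['name']} IN A {n['ip']}"
            for n in nodes if n.get("state") != "decommissioned"]

# Hypothetical inventory as exported from a configuration
# management system's data store.
nodes = [
    {"name": "web1", "ip": "192.0.2.10", "state": "active"},
    {"name": "db1", "ip": "192.0.2.20", "state": "decommissioned"},
]
records = render_zone(nodes)
```

The same inventory could drive DHCP reservations or firewall rules, keeping network configuration in lockstep with node life cycles.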
D3.6.1: Cookbook for IPv6 Renumbering in SOHO and Backbone Networks
In this text we present the results of a set of experiments designed as a first step in analysing how effective network renumbering procedures may be in the context of IPv6. An IPv6 site will need to get provider-assigned (PA) address space from its upstream ISP. Because provider-independent (PI) address space is not available for IPv6, a site wishing to change provider will need to renumber from its old network prefix to the new one. We look at the scenarios, issues, and enablers for such renumbering, and present results and initial conclusions and recommendations in the context of SOHO and backbone networking. A subsequent deliverable (D3.6.2) will refine these findings, adding additional results and context from enterprise and ISP renumbering scenarios.
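The core renumbering operation, rewriting host addresses from the old PA prefix to the new one while preserving the interface identifier, can be sketched with Python's ipaddress module (an illustration of the address arithmetic, not the deliverable's procedure):

```python
import ipaddress

def renumber(addr: str, old_prefix: str, new_prefix: str) -> str:
    """Rewrite a host address from the old provider-assigned prefix
    to the new one, preserving the bits below the prefix length
    (the interface identifier and any subnet bits)."""
    old = ipaddress.IPv6Network(old_prefix)
    new = ipaddress.IPv6Network(new_prefix)
    assert old.prefixlen == new.prefixlen, "prefix lengths must match"
    host_bits = int(ipaddress.IPv6Address(addr)) & int(old.hostmask)
    return str(ipaddress.IPv6Address(int(new.network_address) | host_bits))
```

Applying this mapping to every configured address, DNS record, and filter rule is exactly where the operational difficulty of renumbering lies.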
ZDNS: A Fast DNS Toolkit for Internet Measurement
Active DNS measurement is fundamental to understanding and improving the DNS
ecosystem. However, the absence of an extensible, high-performance, and
easy-to-use DNS toolkit has limited both the reproducibility and coverage of
DNS research. In this paper, we introduce ZDNS, a modular and open-source
active DNS measurement framework optimized for large-scale research studies of
DNS on the public Internet. We describe ZDNS' architecture, evaluate its
performance, and present two case studies that highlight how the tool can be
used to shed light on the operational complexities of DNS. We hope that ZDNS
will enable researchers to better -- and in a more reproducible manner --
understand Internet behavior.
Comment: Proceedings of the 22nd ACM Internet Measurement Conference, 202
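A bulk DNS tool of this kind typically emits one JSON result per line, so a downstream analysis can be sketched as below; the field names used here are assumptions, so check the tool's actual output schema before relying on them.

```python
import json

def successful_lookups(lines: list) -> list:
    """Extract the names that resolved successfully from JSON-lines
    output (assumed fields: "name" and "status")."""
    results = (json.loads(line) for line in lines if line.strip())
    return [r["name"] for r in results if r.get("status") == "NOERROR"]

# Hypothetical two-line output from a bulk lookup run.
sample = [
    '{"name": "example.com", "status": "NOERROR"}',
    '{"name": "bad.invalid", "status": "NXDOMAIN"}',
]
```

Streaming the output line by line like this keeps memory flat even for Internet-scale result sets.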