
    Clusters in the Expanse: Understanding and Unbiasing IPv6 Hitlists

    Network measurements are an important tool in understanding the Internet. Due to the expanse of the IPv6 address space, exhaustive scans as in IPv4 are not possible for IPv6. In recent years, several studies have proposed the use of target lists of IPv6 addresses, called IPv6 hitlists. In this paper, we show that addresses in IPv6 hitlists are heavily clustered. We present novel techniques that allow IPv6 hitlists to be pushed from quantity to quality. We perform a longitudinal active measurement study over 6 months, targeting more than 50 M addresses. We develop a rigorous method to detect aliased prefixes, which identifies 1.5 % of our prefixes as aliased, pertaining to about half of our target addresses. Using entropy clustering, we group the entire hitlist into just 6 distinct addressing schemes. Furthermore, we perform client measurements by leveraging crowdsourcing. To encourage reproducibility in network measurement research and to serve as a starting point for future IPv6 studies, we publish source code, analysis tools, and data. Comment: See https://ipv6hitlist.github.io for daily IPv6 hitlists, historical data, and additional analyses.
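    A minimal sketch of the entropy-fingerprint idea behind the paper's entropy clustering: each IPv6 address is expanded to 32 hex nybbles, and the per-position Shannon entropy over a set of addresses forms a feature vector that could then be clustered into addressing schemes. The function name and toy sample below are illustrative, not the authors' published implementation.

```python
# Hedged sketch: per-nybble entropy fingerprints for IPv6 addresses.
import ipaddress
import math
from collections import Counter

def nybble_entropy_fingerprint(addresses):
    """Return a 32-element vector: normalised Shannon entropy of each hex
    nybble position across the given IPv6 addresses."""
    expanded = [ipaddress.IPv6Address(a).exploded.replace(":", "") for a in addresses]
    fingerprint = []
    for pos in range(32):
        counts = Counter(addr[pos] for addr in expanded)
        total = sum(counts.values())
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        fingerprint.append(entropy / 4.0)  # a nybble carries at most 4 bits
    return fingerprint

# Addresses sharing a prefix and low-entropy interface identifiers yield
# near-zero entropy in most positions.
sample = ["2001:db8::1", "2001:db8::2", "2001:db8::53", "2001:db8::80"]
print(nybble_entropy_fingerprint(sample))
```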

    Investigating human-perceptual properties of "shapes" using 3D shapes and 2D fonts

    Shapes are generally used to convey meaning. They are used in video games, films and other multimedia, in diverse ways. 3D shapes may be destined for virtual scenes or represent objects to be constructed in the real world. Fonts add character to an otherwise plain block of text, allowing the writer to make important points more visually prominent or distinct from other text, and they can indicate the structure of a document at a glance. Rather than studying shapes through traditional geometric shape descriptors, we provide alternative methods to describe and analyse shapes through the lens of human perception. This is done via the concepts of Schelling Points and Image Specificity. Schelling Points are the choices people make when they aim to match what they expect others to choose, but cannot communicate with others to determine an answer. We study whole-mesh selections in this setting, where Schelling Meshes are the most frequently selected shapes. The key idea behind Image Specificity is that different images evoke different descriptions, but ‘specific’ images yield more consistent descriptions than others. We apply Specificity to 2D fonts. We show that each concept can be learned and predicted for fonts and 3D shapes, respectively, using a depth-image-based convolutional neural network. Results are shown for a range of fonts and 3D shapes, and we demonstrate that font Specificity and the Schelling Meshes concept are useful for visualisation, clustering, and search applications. Overall, we find that each concept captures similarities between shapes of its respective type, even when there are discontinuities between the shape geometries themselves. The ‘context’ of these similarities lies in some kind of abstract or subjective meaning that is consistent among different people.
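    A hedged sketch of the Specificity idea described above: score a font (or shape) by how consistent its free-text descriptions are. Plain Jaccard word overlap stands in here for the richer sentence-similarity measures used in the Specificity literature, and the example descriptions are made up.

```python
# Hedged sketch: Specificity as mean pairwise similarity of descriptions.
from itertools import combinations

def specificity(descriptions):
    """Mean pairwise Jaccard word overlap over free-text descriptions;
    higher values mean the descriptions agree more with each other."""
    token_sets = [set(d.lower().split()) for d in descriptions]
    pairs = list(combinations(token_sets, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs if a | b) / len(pairs)

# A 'specific' font evokes consistent words, so it scores higher.
print(specificity(["bold blocky sans serif", "thick blocky sans serif"]))
print(specificity(["elegant flowing script", "futuristic robotic letters"]))
```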

    Internet-Wide Evaluations of Security and Vulnerabilities

    The Internet significantly impacts world culture. Since its beginning, it has been a multilayered system, and it gains more protocols year by year. At its core remains the Internet protocol suite, in which fundamental protocols such as IPv4, TCP/UDP, and DNS were originally introduced. Recently, more and more cross-layer attacks involving features in multiple layers have been reported. To better understand these attacks, e.g. how practical they are and how many users are vulnerable, Internet-wide evaluations are essential. In this cumulative thesis, we summarise our findings from various Internet-wide evaluations in recent years, with a main focus on DNS. Our evaluations showed that IP fragmentation poses a practical threat to DNS security, regardless of the transport protocol (TCP or UDP). Defence mechanisms such as DNS Response Rate Limiting can facilitate attacks on DNS, even though they are designed to protect it. We also extended the evaluations to a fundamental system which heavily relies on DNS: the web PKI. We found that Certificate Authorities are severely affected by the previously described DNS vulnerabilities. We demonstrated that off-path attackers could hijack accounts at major CAs and manipulate resources there using various DNS cache poisoning attacks. The Domain Validation procedure faces similar vulnerabilities; even the latest Multiple-Validation-Agent DV could be downgraded and poisoned. On the other hand, we also performed Internet-wide evaluations of two important defence mechanisms. One is the cryptographic protocol for DNS security, DNSSEC. We found that fewer than 2% of popular domains were signed, and about 20% of those were misconfigured, another example of how poorly deployed defence mechanisms worsen security. The other is ingress filtering, which stops spoofed traffic from entering a network. We presented the most complete Internet-wide evaluation of ingress filtering to date, covering over 90% of all Autonomous Systems, and found that over 80% of them were vulnerable to inbound spoofing.
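    As a rough illustration of the DNSSEC deployment measurement mentioned above, the sketch below checks whether a domain publishes DNSKEY records using the third-party dnspython package. This is only a coarse proxy (it does not validate signatures or check DS records at the parent) and is not the thesis' measurement pipeline.

```python
# Hedged sketch: does a zone serve DNSKEY records at all?
import dns.exception
import dns.resolver  # pip install dnspython

def publishes_dnskey(domain: str) -> bool:
    """Rough proxy for DNSSEC deployment: the zone answers a DNSKEY query."""
    try:
        answer = dns.resolver.resolve(domain, "DNSKEY")
        return answer.rrset is not None and len(answer.rrset) > 0
    except dns.exception.DNSException:
        return False

for name in ["example.com", "example.org"]:
    print(name, "DNSKEY present:", publishes_dnskey(name))
```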

    Experience in using MTurk for Network Measurement

    Conducting sound measurement studies of the global Internet is inherently difficult. The collected data depend significantly on vantage point(s), sampling strategies, security policies, and measurement populations, and conclusions drawn from the data can be sensitive to these biases. Crowdsourcing is a promising approach to address these challenges, although its epistemological implications have not yet received substantial attention from the research community. We share our findings from leveraging Amazon's Mechanical Turk (MTurk) system for three distinct network measurement tasks. We describe our failure to outsource the execution of a security measurement tool to MTurk, our subsequent successful integration of a simple yet meaningful measurement within a HIT, and the successful use of MTurk to quickly provide focused small sample sets that could not easily be obtained by alternate means. Finally, we discuss the implications of our experiences for other crowdsourced measurement research.
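    A hedged sketch of the "measurement within a HIT" pattern: publish an MTurk HIT whose ExternalQuestion points at a page that runs a small client-side measurement. The endpoint URL, reward, and counts below are placeholders, the boto3-based snippet is illustrative rather than the authors' setup, and AWS credentials are assumed to be configured elsewhere.

```python
# Hedged sketch: create a sandbox HIT that loads an external measurement page.
import boto3  # pip install boto3

# Placeholder measurement page; a real study would host its own instrumented page.
EXTERNAL_QUESTION = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://measurement.example.org/hit</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

hit = mturk.create_hit(
    Title="Run a short in-browser network test",
    Description="Load a page that measures connectivity from your vantage point.",
    Reward="0.10",                      # USD, as a string per the MTurk API
    MaxAssignments=50,                  # number of distinct workers (vantage points)
    AssignmentDurationInSeconds=300,
    LifetimeInSeconds=86400,
    Question=EXTERNAL_QUESTION,
)
print("HITId:", hit["HIT"]["HITId"])
```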
