
    The End of the Canonical IoT Botnet: A Measurement Study of Mirai's Descendants

    Since the burgeoning days of IoT, Mirai has been established as the canonical IoT botnet. Not long after the public release of its code, researchers found many Mirai variants competing with one another for many of the same vulnerable hosts. Over time, the myriad Mirai variants evolved to incorporate unique vulnerabilities, defenses, and regional concentrations. In this paper, we ask: have Mirai variants evolved to the point that they are fundamentally distinct? We answer this question by measuring two of the most popular Mirai descendants: Hajime and Mozi. To actively scan both botnets simultaneously, we developed a robust measurement infrastructure, BMS, and ran it for more than eight months. The resulting datasets show that these two popular botnets have diverged from their common ancestor in multiple ways: they have virtually no overlapping IP addresses, they respond differently to network events such as diurnal rate limiting in China, and more. Collectively, our results show that there is no longer one canonical IoT botnet. We discuss the implications of this finding for researchers and practitioners.
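    As a rough illustration of the population comparison described above, the sketch below computes the IP-address overlap between two botnet scans. The file names and the one-address-per-line format are assumptions made for the example, not the layout of the BMS datasets.

```python
# Hypothetical sketch: measure IP-address overlap between two botnet scans.
# The input files (one IPv4 address per line) are assumed, not the BMS format.

def load_ips(path):
    """Read one IP address per line, skipping blank lines."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def overlap_stats(ips_a, ips_b):
    """Return the number of shared addresses and the Jaccard similarity."""
    shared = ips_a & ips_b
    union = ips_a | ips_b
    jaccard = len(shared) / len(union) if union else 0.0
    return len(shared), jaccard

if __name__ == "__main__":
    hajime = load_ips("hajime_ips.txt")  # assumed file name
    mozi = load_ips("mozi_ips.txt")      # assumed file name
    shared, jaccard = overlap_stats(hajime, mozi)
    print(f"shared addresses: {shared}, Jaccard similarity: {jaccard:.4f}")
```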

    Measuring and Evading Turkmenistan's Internet Censorship: A Case Study in Large-Scale Measurements of a Low-Penetration Country

    Since 2006, Turkmenistan has been listed as one of the few Internet enemies by Reporters Without Borders due to its extensively censored Internet and strictly regulated information control policies. Existing reports of filtering in Turkmenistan rely on a small number of vantage points or test a small number of websites. Yet the country's poor Internet adoption rate and small population make more comprehensive measurement challenging: with a population of only six million people and an Internet penetration rate of only 38%, it is difficult either to recruit in-country volunteers or to obtain vantage points for remote network measurements at scale. We present the largest measurement study to date of Turkmenistan's Web censorship. To do so, we developed TMC, which tests the blocking status of millions of domains across the three foundational protocols of the Web (DNS, HTTP, and HTTPS). Importantly, TMC does not require access to vantage points in the country. Applying TMC to 15.5M domains, we find that Turkmenistan censors more than 122K domains, using different blocklists for each protocol. We also reverse-engineer these censored domains, identifying 6K over-blocking rules that cause incidental filtering of more than 5.4M domains. Finally, we use Geneva, an open-source censorship evasion tool, to discover five new censorship evasion strategies that can defeat Turkmenistan's censorship at both the transport and application layers. We will publicly release both the data collected by TMC and the code for censorship evasion. Comment: To appear in Proceedings of The 2023 ACM Web Conference (WWW 2023).
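    To make the per-protocol testing idea concrete, here is a minimal sketch of checking one domain's reachability over DNS, HTTP, and HTTPS against a chosen remote endpoint. The resolver and web-server IP addresses, the timeouts, and the use of dnspython are placeholders for illustration; this is not the TMC tool itself.

```python
# Minimal per-protocol reachability checks for a single domain (illustrative
# only; the remote IPs are caller-supplied placeholders, NOT the TMC tool).
import socket
import ssl

import dns.resolver  # dnspython, assumed to be installed

def check_dns(domain, resolver_ip, timeout=5):
    """Ask a specific resolver for the domain's A record."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    resolver.lifetime = timeout
    try:
        return [rr.to_text() for rr in resolver.resolve(domain, "A")]
    except Exception as exc:
        return f"DNS failure: {exc}"

def check_http(domain, server_ip, timeout=5):
    """Send a plain HTTP GET with the domain in the Host header."""
    try:
        with socket.create_connection((server_ip, 80), timeout=timeout) as s:
            s.sendall(f"GET / HTTP/1.1\r\nHost: {domain}\r\n\r\n".encode())
            return s.recv(1024).decode(errors="replace").splitlines()[0]
    except Exception as exc:
        return f"HTTP failure: {exc}"

def check_https(domain, server_ip, timeout=5):
    """Attempt a TLS handshake that carries the domain in the SNI field."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((server_ip, 443), timeout=timeout) as s:
            with ctx.wrap_socket(s, server_hostname=domain) as tls:
                return f"handshake completed ({tls.version()})"
    except Exception as exc:
        return f"HTTPS failure: {exc}"
```

    Comparing the three outcomes for the same domain hints at which protocol-specific blocklist, if any, the domain falls under.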

    Understanding the Role of Registrars in DNSSEC Deployment

    The Domain Name System (DNS) provides a scalable, flexible name resolution service. Unfortunately, its unauthenticated architecture has become the basis for many security attacks. To address this, DNS Security Extensions (DNSSEC) were introduced in 1997. DNSSEC’s deployment requires support from the top-level domain (TLD) registries and registrars, as well as participation by the organization that serves as the DNS operator. Unfortunately, DNSSEC has seen poor deployment thus far: despite being proposed nearly two decades ago, only 1% of .com, .net, and .org domains are properly signed. In this paper, we investigate the underlying reasons why DNSSEC adoption has been remarkably slow. We focus on registrars, as most TLD registries already support DNSSEC and registrars often serve as DNS operators for their customers. Our study uses large-scale, longitudinal DNS measurements to study DNSSEC adoption, coupled with experience gained from deploying DNSSEC on domains we purchased from leading domain name registrars and resellers. Overall, we find that a select few registrars are responsible for the (small) DNSSEC deployment today, and that many leading registrars do not support DNSSEC at all, or require customers to take cumbersome steps to deploy DNSSEC. Further frustrating deployment, many of the mechanisms for conveying DNSSEC information to registrars are error-prone or present security vulnerabilities. Finally, we find that using DNSSEC with third-party DNS operators such as Cloudflare requires the domain owner to take a number of steps that 40% of domain owners do not complete. Having identified several operational challenges for full DNSSEC deployment, we make recommendations to improve adoption.
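    As a quick, hedged illustration of what "properly signed" involves at the record level, the sketch below checks whether a domain publishes a DNSKEY (set up by its DNS operator) and a DS record in the parent zone (conveyed through the registrar). It uses dnspython, only approximates signing status, and is not the measurement pipeline used in the paper.

```python
# Rough check for DNSSEC record presence: a DNSKEY in the zone and a DS record
# at the parent. Uses dnspython; this approximates "signed" and does not
# validate signatures. Not the paper's measurement methodology.
import dns.resolver

def has_record(name, rdtype):
    """Return True if the name has at least one record of the given type."""
    try:
        dns.resolver.resolve(name, rdtype)
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

def dnssec_status(domain):
    ds = has_record(domain, "DS")          # conveyed via registrar/registry
    dnskey = has_record(domain, "DNSKEY")  # published by the DNS operator
    if ds and dnskey:
        return "DS and DNSKEY present (chain of trust can be built)"
    if dnskey:
        return "DNSKEY present but no DS: chain of trust incomplete"
    return "no DNSSEC records found"

if __name__ == "__main__":
    print(dnssec_status("example.com"))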

    BitTorrent is an Auction: Analyzing and Improving BitTorrent’s Incentives

    Incentives play a crucial role in BitTorrent, motivating users to upload to others to achieve fast download times for all peers. Though BitTorrent has long been believed to be robust to strategic manipulation, recent work has empirically shown that it does not provide its users an incentive to follow the protocol. We propose an auction-based model to study and improve upon BitTorrent's incentives. The insight behind our model is that BitTorrent uses not tit-for-tat, as widely believed, but an auction to decide which peers to serve. Our model not only captures known, performance-improving strategies, but also shapes our thinking toward new, effective strategies. For example, our analysis demonstrates, counter-intuitively, that BitTorrent peers have an incentive to intelligently under-report which pieces of the file they have to their neighbors. We implement and evaluate a modification to BitTorrent in which peers reward one another with proportional shares of bandwidth. Within our game-theoretic model, we prove that a proportional-share client is strategy-proof. With experiments on PlanetLab, a local cluster, and live downloads, we show that a proportional-share unchoker yields faster downloads against BitTorrent and BitTyrant clients, and that under-reporting pieces yields prolonged neighbor interest.
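    The proportional-share idea lends itself to a small worked example: each neighbor receives upload bandwidth in proportion to what it contributed in the previous round. The sketch below is an illustrative allocator under that assumption, not the modified client evaluated in the paper.

```python
# Toy proportional-share allocator: split upload capacity among neighbors in
# proportion to the bytes each contributed last round. Illustrative only; not
# the paper's modified BitTorrent client.

def proportional_share(upload_capacity, contributions):
    """Map each peer to its share of upload_capacity (same units as capacity)."""
    total = sum(contributions.values())
    if total == 0:
        # Nobody contributed yet; split evenly as one simple bootstrapping rule.
        even = upload_capacity / len(contributions) if contributions else 0.0
        return {peer: even for peer in contributions}
    return {peer: upload_capacity * received / total
            for peer, received in contributions.items()}

# Example: B contributed twice as much as A, so B receives twice the bandwidth.
print(proportional_share(100.0, {"A": 10_000, "B": 20_000, "C": 0}))
# -> {'A': 33.33..., 'B': 66.66..., 'C': 0.0}
```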

    Evaluating the promise and pitfalls of a potential climate change–tolerant sea urchin fishery in southern California

    Marine fishery stakeholders are beginning to consider and implement adaptation strategies in the face of growing consumer demand and potential deleterious climate change impacts such as ocean warming, ocean acidification, and deoxygenation. This study investigates the potential for development of a novel climate change-tolerant sea urchin fishery in southern California based on Strongylocentrotus fragilis (pink sea urchin), a deep-sea species whose peak density was found to coincide with a current trap-based spot prawn fishery (Pandalus platyceros) in the 200–300 m depth range. Here we outline potential criteria for a climate change-tolerant fishery by examining the distribution, life-history attributes, and marketable qualities of S. fragilis in southern California. We provide evidence of seasonality in gonad production and demonstrate that peak gonad production occurs in the winter season. S. fragilis likely spawns in the spring, as evidenced by consistent minimum gonad indices in the spring/summer seasons across four years of sampling (2012–2016). The resilience of S. fragilis to predicted future increases in acidity and decreases in oxygen was supported by high species abundance, albeit with reduced relative growth rate estimates, at water depths (485–510 m) subject to low oxygen (11.7–16.9 µmol kg⁻¹) and low pH (total scale, <7.44), which may provide assurance to stakeholders and managers regarding the suitability of this species for commercial exploitation. Some food quality properties of S. fragilis roe (e.g. colour, texture) were comparable with those of the commercially exploited shallow-water red sea urchin (Mesocentrotus franciscanus), while other qualities (e.g. 80% reduced gonad size by weight) limit the potential future marketability of S. fragilis. This case study highlights the potential future challenges and drawbacks of climate-tolerant fishery development in an attempt to inform future urchin fishery stakeholders.

    A Secure DHT via the Pigeonhole Principle

    The standard Byzantine attack model assumes no more than some fixed fraction of the participants are faulty. This assumption does not accurately apply to peer-to-peer settings, where Sybil attacks and botnets are realistic threats. We propose an attack model that permits an arbitrary number of malicious nodes under the assumption that each node can be classified based on some of its attributes, such as autonomous system number or operating system, and that the number of classes with malicious nodes is bounded (e.g., an attacker may exploit at most a few operating systems at a time). In this model, we present a secure DHT, evilTwin, which replaces a single, large DHT with sufficiently many smaller instances such that it is impossible for an adversary to corrupt every instance. Our system ensures high availability and low-latency lookups, is easy to implement, does not require a complex Byzantine agreement protocol, and its proof of security is a straightforward application of the pigeonhole principle. The cost of security comes in the form of increased storage and bandwidth overhead; we show how to reduce these costs by replicating data and adaptively querying participants who historically perform well. We use implementation and simulation to show that evilTwin imposes a relatively small additional cost compared to conventional DHTs.
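    To illustrate the flavour of redundant lookups across multiple small DHT instances, the toy sketch below queries every instance for a key and combines the answers by majority vote. The voting rule and the dict-based instances are illustrative assumptions for the example, not the evilTwin protocol itself.

```python
# Toy model of redundant lookups across several small DHT instances. Each
# instance is modelled as a dict; answers are combined by majority vote. This
# illustrates the general idea of cross-checking instances, not evilTwin.
from collections import Counter

def lookup(instances, key):
    """Query every instance for key and return the most common non-None answer."""
    answers = [inst.get(key) for inst in instances]
    answers = [a for a in answers if a is not None]
    if not answers:
        return None
    value, _count = Counter(answers).most_common(1)[0]
    return value

# Example: two honest instances agree; one corrupted instance (say, nodes from
# a single exploited operating-system class) returns a bogus value.
instances = [
    {"key1": "correct"},  # honest
    {"key1": "correct"},  # honest
    {"key1": "bogus"},    # corrupted
]
print(lookup(instances, "key1"))  # -> "correct"
```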