191 research outputs found

    Who’s Got Your Mail?: Characterizing Mail Service Provider Usage

    E-mail has long been a critical component of daily communication and the core medium for modern business correspondence. While traditionally e-mail service was provisioned and implemented independently by each Internet-connected organization, increasingly this function has been outsourced to third-party services. As with many pieces of key communications infrastructure, such centralization can bring both economies of scale and shared failure risk. In this paper, we investigate this issue empirically, providing a large-scale measurement and analysis of modern Internet e-mail service provisioning. We develop a reliable methodology to better map domains to mail service providers. We then use this approach to document the dominant and increasing role played by a handful of mail service providers and hosting companies over the past four years. Finally, we briefly explore the extent to which nationality (and hence legal jurisdiction) plays a role in such mail provisioning decisions.
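The domain-to-provider mapping described above can be sketched as a classification of a domain's MX hostnames. This is a hypothetical illustration; the provider suffix list and matching rules here are assumptions, not the paper's actual methodology.

```python
# Hypothetical sketch: classify a domain's MX hostnames to a mail provider.
# The suffix table below is illustrative, not the paper's ruleset.
PROVIDER_SUFFIXES = {
    ".l.google.com": "Google",
    ".mail.protection.outlook.com": "Microsoft",
    ".pphosted.com": "Proofpoint",
    ".mimecast.com": "Mimecast",
}

def classify_mx(mx_host: str) -> str:
    """Map a single MX hostname to a known provider, else 'self-hosted/other'."""
    host = mx_host.lower().rstrip(".")
    for suffix, provider in PROVIDER_SUFFIXES.items():
        if host.endswith(suffix):
            return provider
    return "self-hosted/other"

def classify_domain(mx_hosts: list[str]) -> set[str]:
    """A domain may list several MX records; collect every matched provider."""
    return {classify_mx(h) for h in mx_hosts}
```

For example, a domain whose MX records are `aspmx.l.google.com` and `alt1.aspmx.l.google.com` would classify to the single provider "Google", while an MX of `mail.example.com` falls through to "self-hosted/other".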

    Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy

    The critical role played by email has led to a range of extension protocols (e.g., SPF, DKIM, DMARC) designed to protect against the spoofing of email sender domains. These protocols are complex as is, but are further complicated by automated email forwarding, used by individual users to manage multiple accounts and by mailing lists to redistribute messages. In this paper, we explore how such email forwarding and its implementations can break the implicit assumptions in widely deployed anti-spoofing protocols. Using large-scale empirical measurements of 20 email forwarding services (16 leading email providers and four popular mailing list services), we identify a range of security issues rooted in forwarding behavior and show how they can be combined to reliably evade existing anti-spoofing controls. We show how this allows attackers to not only deliver spoofed email messages to prominent email providers (e.g., Gmail, Microsoft Outlook, and Zoho), but also reliably spoof email on behalf of tens of thousands of popular domains, including sensitive domains used by organizations in government (e.g., state.gov), finance (e.g., transunion.com), law (e.g., perkinscoie.com), and news (e.g., washingtonpost.com), among others.
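One of the implicit assumptions the abstract refers to can be illustrated with a minimal model of an SPF-style check: SPF validates the IP address of the connecting server against the sender domain's authorized list, so a forwarder that re-sends a message from its own IP (without envelope rewriting) fails that check. The IPs, domain, and data structure below are illustrative assumptions, not the paper's measurement apparatus.

```python
# Minimal model of an SPF-style check. The domain and IPs are illustrative.
AUTHORIZED = {
    "example.com": {"203.0.113.10"},  # IPs the domain's SPF record permits
}

def spf_pass(mail_from_domain: str, connecting_ip: str) -> bool:
    """True iff the connecting server's IP is authorized by the sender domain."""
    return connecting_ip in AUTHORIZED.get(mail_from_domain, set())

# Direct delivery: the sender's own server connects, so SPF passes.
direct = spf_pass("example.com", "203.0.113.10")

# Forwarded delivery: the forwarder connects with its own IP, which
# example.com never authorized, so SPF fails unless the forwarder
# rewrites the envelope sender (e.g., via SRS).
forwarded = spf_pass("example.com", "198.51.100.7")
```

The gap between these two cases is what forwarding-aware workarounds (sender rewriting, ARC) try to paper over, and what the paper shows can be abused when implementations diverge.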

    Characterization of Anycast Adoption in the DNS Authoritative Infrastructure

    Anycast has proven to be an effective mechanism to enhance resilience in the DNS ecosystem and to scale DNS nameserver capacity, in both the authoritative and recursive resolver infrastructure. Since its adoption for root servers, anycast has mitigated the impact of failures and DDoS attacks on the DNS ecosystem. In this work, we quantify the adoption of anycast to support authoritative domain name service for top-level and second-level domains (TLDs and SLDs). Comparing two comprehensive anycast census datasets from 2017 and 2021, with DNS measurements captured over the same period, reveals that anycast adoption is increasing, driven by a few large operators. While anycast offers compelling resilience advantages, it also shifts some resilience risk to other aspects of the infrastructure. We discuss these aspects, and how the pervasive use of anycast merits a re-evaluation of how to measure DNS resilience.
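The concentration finding ("driven by a few large operators") suggests a simple aggregation step such a study might perform: attribute each domain's nameservers to an operator, then compute each operator's share of domains. The suffix-to-operator table and matching heuristic below are illustrative assumptions, not the census's actual classification rules.

```python
from collections import Counter

# Illustrative nameserver-substring-to-operator table; not the census's rules.
OPERATOR_KEYS = {
    "awsdns": "Amazon Route 53",
    "cloudflare.com": "Cloudflare",
    "nsone.net": "NS1",
}

def operator_of(ns_host: str) -> str:
    """Attribute one nameserver hostname to an operator, else 'other'."""
    host = ns_host.lower().rstrip(".")
    for key, op in OPERATOR_KEYS.items():
        if key in host:
            return op
    return "other"

def concentration(domain_to_ns: dict) -> dict:
    """Fraction of domains whose nameserver set includes each operator."""
    counts = Counter()
    for ns_set in domain_to_ns.values():
        for op in {operator_of(ns) for ns in ns_set}:
            counts[op] += 1
    total = len(domain_to_ns)
    return {op: c / total for op, c in counts.items()}
```

Note that this only measures operator concentration, not anycast itself; detecting anycast requires active measurement from many vantage points, which is what the census datasets in the paper provide.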

    MPIWiz: subgroup reproducible replay of MPI applications

    Message Passing Interface (MPI) is a widely used standard for managing coarse-grained concurrency on distributed computers. Debugging parallel MPI applications, however, has always been a particularly challenging task due to their high degree of concurrent execution and non-deterministic behavior. Deterministic replay is a potentially powerful technique for addressing these challenges, with existing MPI replay tools adopting either data-replay or order-replay approaches. Unfortunately, each approach has its tradeoffs. Data-replay generates substantial log sizes by recording every communication message. Order-replay generates small logs, but requires all processes to be replayed together. We believe that these drawbacks are the primary reasons that inhibit the wide adoption of deterministic replay as the critical enabler of cyclic debugging of MPI applications. This paper describes subgroup reproducible replay (SRR), a hybrid deterministic replay method that provides the benefits of both data-replay and order-replay while balancing their trade-offs. SRR divides all processes into disjoint groups. It records the contents of messages crossing group boundaries as in data-replay, but records just message orderings for communication within a group as in order-replay. In this way, SRR can exploit the communication locality of traffic patterns in MPI applications. During replay, developers can then replay each group individually. SRR reduces recording overhead by not recording intra-group communication, and at the same time reduces replay overhead by limiting the size of each replay group. Exposing these tradeoffs gives the user the necessary control for making deterministic replay practical for MPI applications. We have implemented a prototype, MPIWiz, to demonstrate and evaluate SRR. MPIWiz employs a replay framework that allows transparent binary instrumentation of both library and system calls. As a result, MPIWiz replays MPI applications without source code modification or relinking, and handles non-determinism in both MPI and OS system calls. Our preliminary results show that MPIWiz can reduce recording overhead by over a factor of four relative to data-replay, yet without requiring the entire application to be replayed as in order-replay. Recording increases execution time by 27% while the application can be replayed in just 53% of its base execution time.
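The SRR recording rule described above reduces to a per-message decision based on the group assignment of the two communicating ranks. The sketch below is a minimal illustration of that decision, with a hypothetical group assignment; the real system records this transparently via binary instrumentation.

```python
# Sketch of SRR's per-message recording decision.
# `group` maps each MPI rank to its replay group (assignment is illustrative).
def record_action(sender: int, receiver: int, group: dict) -> str:
    """Inter-group messages log full contents (as in data-replay);
    intra-group messages log only their ordering (as in order-replay)."""
    if group[sender] != group[receiver]:
        return "log-contents"
    return "log-order"

# Four ranks split into two disjoint groups.
groups = {0: "A", 1: "A", 2: "B", 3: "B"}
```

With this assignment, a message from rank 0 to rank 1 stays inside group A and only its ordering is logged, while a message from rank 0 to rank 2 crosses the A/B boundary and is logged in full; during replay, group A can then be re-executed without group B, because every message it received from outside was captured verbatim.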

    The GEOTRACES Intermediate Data Product 2014

    The GEOTRACES Intermediate Data Product 2014 (IDP2014) is the first publicly available data product of the international GEOTRACES programme, and contains data measured and quality controlled before the end of 2013. It consists of two parts: (1) a compilation of digital data for more than 200 trace elements and isotopes (TEIs) as well as classical hydrographic parameters, and (2) the eGEOTRACES Electronic Atlas, a strongly inter-linked on-line atlas including more than 300 section plots and 90 animated 3D scenes. The IDP2014 covers the Atlantic, Arctic, and Indian oceans, with the highest data density in the Atlantic. The TEI data in the IDP2014 are quality controlled by careful assessment of intercalibration results and multi-laboratory data comparisons at cross-over stations. The digital data are provided in several formats, including ASCII spreadsheet, Excel spreadsheet, netCDF, and Ocean Data View collection. In addition to the actual data values, the IDP2014 also contains data quality flags and 1-σ data error values where available. Quality flags and error values are useful for data filtering. Metadata about data originators, analytical methods, and original publications related to the data are linked to the data in an easily accessible way. The eGEOTRACES Electronic Atlas is the visual representation of the IDP2014 data, providing section plots and a new kind of animated 3D scene. The basin-wide 3D scenes allow for viewing of data from many cruises at the same time, thereby providing quick overviews of large-scale tracer distributions. In addition, the 3D scenes provide geographical and bathymetric context that is crucial for the interpretation and assessment of observed tracer plumes, as well as for making inferences about controlling processes.