
    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol, the Approximate Transmission Protocol (ATP), for datacenter approximate applications. ATP opportunistically exploits as much available network bandwidth as possible, while performing a loss-based rate-control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves accurate applications' performance by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
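
    The abstract does not spell out ATP's rate-control rule, but a loss-based controller of the kind it describes, probing for bandwidth while keeping loss within the application's accuracy budget, might look like the sketch below. All thresholds and names are assumptions for illustration, not ATP's published algorithm.

```python
# Hypothetical sketch of a loss-based rate controller in the spirit of ATP.
# TARGET_LOSS, ADDITIVE_STEP, BACKOFF, and update_rate are invented here;
# the paper's actual parameters and rule may differ.

TARGET_LOSS = 0.02   # assumed acceptable loss fraction for approximate flows
ADDITIVE_STEP = 1.0  # Mbps added per interval when loss is acceptable
BACKOFF = 0.8        # multiplicative decrease when loss exceeds the target

def update_rate(current_rate_mbps: float, sent: int, lost: int) -> float:
    """Adjust the sending rate from one interval of loss feedback."""
    loss = lost / sent if sent else 0.0
    if loss <= TARGET_LOSS:
        # Loss is within the accuracy budget: probe additively for
        # spare bandwidth.
        return current_rate_mbps + ADDITIVE_STEP
    # Loss above target wastes bandwidth on packets the receiver will
    # never use: back off multiplicatively, TCP-style.
    return max(1.0, current_rate_mbps * BACKOFF)

# Example: 2000 packets sent, 90 lost (4.5% loss) -> back off 100 -> 80 Mbps
rate = update_rate(100.0, sent=2000, lost=90)
```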

    Mathematics and the Internet: A Source of Enormous Confusion and Great Potential

    Graph theory models the Internet mathematically, and a number of plausible, mathematically interesting network models for the Internet have been developed and studied. Simultaneously, Internet researchers have developed methodology to use real data to validate, or invalidate, proposed Internet models. The authors look at these parallel developments, particularly as they apply to scale-free network models of the preferential attachment type.
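
    The preferential-attachment models at issue grow a graph by attaching each new node to existing nodes with probability proportional to their degree, which produces the scale-free (power-law) degree distributions the article scrutinizes. Below is a minimal sketch of the classic Barabási–Albert rule, one standard instance of this model family (not the specific models the authors evaluate).

```python
import random

def preferential_attachment(n: int, m: int, seed: int = 0) -> list[tuple[int, int]]:
    """Grow a graph where each new node attaches to m existing nodes with
    probability proportional to current degree (Barabasi-Albert model)."""
    rng = random.Random(seed)
    edges = []
    # Repeated node ids in this pool encode degree: sampling uniformly
    # from it is sampling proportional to degree.
    degree_pool = list(range(m))  # seed the pool with m initial nodes
    for new_node in range(m, n):
        targets = set()
        while len(targets) < m:  # draw m distinct attachment targets
            targets.add(rng.choice(degree_pool))
        for t in targets:
            edges.append((new_node, t))
            degree_pool.extend([new_node, t])  # both endpoints gain degree
    return edges

# A 1000-node instance; high-degree hubs emerge, unlike in random graphs.
edges = preferential_attachment(n=1000, m=2)
```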

    Network strategies for the new economy

    In this paper we argue that the pace and scale of development in the information and communication technology (ICT) industries has had, and continues to have, major effects on industry economics and competitive dynamics generally. We maintain that the magnitude of changes in demand and supply conditions is forcing companies to make significant changes in the way they conceive and implement their strategies. We decompose the ICT industries into four levels: technology standards, supply chains, physical platforms, and consumer networks. The nature of these technologies and their cost characteristics, coupled with higher degrees of knowledge specialisation, is impelling companies toward radical revisions of their attitudes to cooperation and co-evolution with suppliers and customers. Where interdependencies between customers are particularly strong, we anticipate the possibility of winner-takes-all strategies. In these circumstances industry risks become very high, and there will be significant consequences for competitive markets.

    Deconstructing the Blockchain to Approach Physical Limits

    Transaction throughput, confirmation latency, and confirmation reliability are fundamental performance measures of any blockchain system, in addition to its security. In a decentralized setting, these measures are limited by two underlying physical network attributes: communication capacity and speed-of-light propagation delay. Existing systems operate far from these physical limits. In this work we introduce Prism, a new proof-of-work blockchain protocol, which achieves: 1) security against up to 50% adversarial hashing power; 2) optimal throughput up to the capacity C of the network; 3) confirmation latency for honest transactions proportional to the propagation delay D, with confirmation error probability exponentially small in CD; and 4) eventual total ordering of all transactions. Our approach to the design of this protocol is based on deconstructing the blockchain into its basic functionalities and systematically scaling up these functionalities to approach their physical limits. Comment: Computer and Communications Security, 201
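
    Claim 3) is the distinctive quantitative statement here: latency tracks the propagation delay D, while the confirmation error probability shrinks exponentially in the bandwidth-delay product CD. The sketch below only illustrates what that scaling means numerically; the decay constant `a` is an invented placeholder, not a value from the paper.

```python
import math

# Hedged illustration of claim 3): error probability ~ exp(-a * C * D).
# The constant `a` is assumed for demonstration; the real constant
# depends on protocol parameters the abstract does not give.
def error_probability(capacity_C: float, delay_D: float, a: float = 0.05) -> float:
    return math.exp(-a * capacity_C * delay_D)

# Doubling the bandwidth-delay product C*D squares the (sub-1) error bound:
p1 = error_probability(capacity_C=1000.0, delay_D=0.2)  # exp(-10) ~ 4.5e-5
p2 = error_probability(capacity_C=2000.0, delay_D=0.2)  # exp(-20) ~ 2.1e-9
```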

    Assessing the Privacy Benefits of Domain Name Encryption

    As Internet users have become more savvy about the potential for their Internet communication to be observed, the use of network traffic encryption technologies (e.g., HTTPS/TLS) is on the rise. However, even when encryption is enabled, users leak information about the domains they visit via DNS queries and via the Server Name Indication (SNI) extension of TLS. Two recent proposals to ameliorate this issue are DNS over HTTPS/TLS (DoH/DoT) and Encrypted SNI (ESNI). In this paper we aim to assess the privacy benefits of these proposals by considering the relationship between hostnames and IP addresses, the latter of which are still exposed. We perform DNS queries from nine vantage points around the globe to characterize this relationship. We quantify the privacy gain offered by ESNI for different hosting and CDN providers using two different metrics: the k-anonymity degree due to co-hosting and the dynamics of IP address changes. We find that 20% of the domains studied will not gain any privacy benefit, since they have a one-to-one mapping between their hostname and IP address. On the other hand, 30% will gain a significant privacy benefit, with a k value greater than 100, since these domains are co-hosted with more than 100 other domains. Domains whose visitors' privacy will meaningfully improve are far less popular, while for popular domains the benefit is not significant. Analyzing the dynamics of IP addresses of long-lived domains, we find that only 7.7% of them change their hosting IP addresses on a daily basis. We conclude by discussing potential approaches for website owners and hosting/CDN providers for maximizing the privacy benefits of ESNI. Comment: In Proceedings of the 15th ACM Asia Conference on Computer and Communications Security (ASIA CCS '20), October 5-9, 2020, Taipei, Taiwan
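
    The first metric, the k-anonymity degree due to co-hosting, follows directly from hostname-to-IP observations: a domain hidden behind ESNI is only as anonymous as the number of domains sharing its IP. A minimal sketch, simplified to a single observed IP per hostname (the study itself uses nine vantage points and multi-IP domains); all data and names below are illustrative.

```python
from collections import defaultdict

def k_anonymity(dns_records: dict[str, str]) -> dict[str, int]:
    """Map each hostname to its k-anonymity degree: the number of hostnames
    (itself included) resolving to the same IP. k == 1 means ESNI hides
    nothing, since the IP alone identifies the site."""
    cohosted = defaultdict(set)
    for hostname, ip in dns_records.items():
        cohosted[ip].add(hostname)
    return {host: len(cohosted[ip]) for host, ip in dns_records.items()}

records = {
    "a.example": "203.0.113.7",   # co-hosted pair -> k = 2
    "b.example": "203.0.113.7",
    "c.example": "198.51.100.9",  # dedicated IP -> k = 1, no ESNI benefit
}
print(k_anonymity(records))  # {'a.example': 2, 'b.example': 2, 'c.example': 1}
```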

    UKAIRO: internet-scale bandwidth detouring

    The performance of content distribution on the Internet is crucial for many services. While popular content can be delivered efficiently to users by caching it using content delivery networks, the distribution of less popular content is often constrained by the bandwidth of the Internet path between the content server and the client. Neither the server nor the client can influence the selected path, and therefore clients may have to download content along a path that is congested or has limited capacity. We describe UKAIRO, a system that reduces Internet download times by using detour paths with higher TCP throughput. UKAIRO first discovers detour paths among an overlay network of potential detour hosts and then transparently diverts HTTP connections via these hosts to improve the throughput of clients downloading from content servers. Our evaluation shows that by performing infrequent bandwidth measurements between 50 randomly selected PlanetLab hosts, UKAIRO can identify and exploit potential detour paths that increase the median bandwidth to public Internet web servers by up to 80%.
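
    The selection logic the abstract implies, picking a detour whose two-hop path outperforms the direct one, can be approximated by a bottleneck rule: a detour path's throughput is at most the slower of its two hops. A minimal sketch under that assumption; UKAIRO's actual measurement and selection machinery is more involved, and all names here are hypothetical.

```python
def best_detour(direct_mbps: float,
                to_detour: dict[str, float],
                from_detour: dict[str, float]) -> tuple[str | None, float]:
    """Pick the overlay host maximizing detour throughput, approximated as
    the bottleneck of its two hops. Returns (None, direct_mbps) if no
    detour beats the direct path."""
    best_host, best_bw = None, direct_mbps
    for host in to_detour.keys() & from_detour.keys():
        bw = min(to_detour[host], from_detour[host])  # bottleneck hop wins
        if bw > best_bw:
            best_host, best_bw = host, bw
    return best_host, best_bw

host, bw = best_detour(
    direct_mbps=10.0,
    to_detour={"detour1": 40.0, "detour2": 15.0},    # client -> detour hop
    from_detour={"detour1": 25.0, "detour2": 50.0},  # detour -> server hop
)
# -> ("detour1", 25.0): the detour more than doubles the direct throughput
```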

    A Comparison of the Ecology of Resident and Translocated Beavers Used for Passive Restoration in Degraded Desert Rivers

    Ecosystem engineers are species that create, destroy, modify, or maintain habitat. As ecosystem engineers, beavers have the potential to assist in stream restoration. Translocation is the capture and relocation of an animal to another area. Translocation of nuisance beavers has become a popular method to reduce human-wildlife conflict and restore waterways. However, few projects monitor beavers after release and compare their behavior to that of naturally occurring resident beavers. Translocations to desert rivers are also rare. We captured, tagged, and monitored 47 beavers that we translocated to desert river restoration sites on the Price and San Rafael Rivers, Utah, USA. We compared translocated beaver behavior and activity to that of 24 resident beavers we also captured and tagged for monitoring. We found high survival rates for resident adult beavers and lower survival rates for resident subadult, translocated adult, and translocated subadult beavers. There were many more river reaches with dams after beaver translocations than before, although we were unable to determine which beavers were responsible for dam building. In general, resident subadult and translocated adult and subadult beavers used stretches of river ten times longer than those used by resident adult beavers. Translocated and resident subadult beavers moved farther from release sites, and faster, than resident adult beavers in the first six months after release. In contrast, all beavers had similar short-term activity levels, indicating that day-to-day activities such as searching for food and resting may not be changed by translocation. Our findings suggest translocated beavers exhibited survival rates, dam-building behavior, and movement patterns most similar to those of resident subadult beavers during dispersal, the movement away from the location where a beaver was born. Many translocated beavers left the study sites in search of a suitable area in which to settle, but even those that left the restoration areas may still be benefiting other degraded stretches of river. Further, translocations led to additional beaver dams in the restoration sites, the common goal of beaver-assisted restoration. A low probability of staying near release sites, a high death rate, and wide-ranging movement patterns should be anticipated when translocating beavers. Multiple beaver releases at targeted restoration sites may eventually result in some settlement and dam building. Resident beavers did not appear to be negatively affected by translocated beavers introduced into the rivers, indicating that translocations can be used to increase low beaver populations and potentially help reach restoration goals more quickly. Improving methods of restoring healthy ecosystems, such as beaver-assisted restoration, is important to maintaining diverse, abundant life globally.