
    Shortcuts through Colocation Facilities

    Network overlays, running on top of the existing Internet substrate, are of perennial value to Internet end-users in the context of, e.g., real-time applications. Such overlays can employ traffic relays to yield path latencies lower than those of the direct paths, a phenomenon known as Triangle Inequality Violation (TIV). Past studies identify opportunities for reducing latency using TIVs. However, they do not investigate the gains of strategically selecting relays in Colocation Facilities (Colos). In this work, we answer the following questions: (i) how do Colo-hosted relays compare with other relays, as well as with the direct Internet, in terms of latency (RTT) reductions; (ii) what are the best locations for placing the relays to yield these reductions. To this end, we conduct a large-scale, one-month measurement of inter-domain paths between RIPE Atlas (RA) nodes as endpoints, located in eyeball networks. We employ as relays PlanetLab nodes, other RA nodes, and machines in Colos. We examine the RTTs of the overlay paths obtained via the selected relays, as well as of the direct paths. We find that Colo-based relays perform the best and can achieve latency reductions against direct paths, ranging from a few to hundreds of milliseconds, in 76% of the total cases; 75% of these reductions (58% of total cases) require only 10 relays in 6 large Colos. Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 2017
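    A TIV-based shortcut exists whenever relaying through some node beats the direct path. The following minimal sketch (illustrative RTT values and hypothetical relay names, not the paper's measurement data) shows the relay-selection test underlying this comparison:

```python
# Minimal sketch: a relay r yields a Triangle Inequality Violation for the
# pair (a, b) whenever RTT(a, r) + RTT(r, b) < RTT(a, b). All values are
# illustrative; "colo1" and "pl1" are hypothetical relay names.
from itertools import permutations

def find_tiv_shortcuts(rtt, endpoints, relays):
    """For each ordered endpoint pair, return the best relay (if any) whose
    two-hop overlay path beats the direct RTT, plus the latency gain in ms."""
    shortcuts = {}
    for a, b in permutations(endpoints, 2):
        direct = rtt.get((a, b))
        if direct is None:
            continue
        candidates = [(rtt[(a, r)] + rtt[(r, b)], r) for r in relays
                      if (a, r) in rtt and (r, b) in rtt]
        if not candidates:
            continue
        best_rtt, best_relay = min(candidates)
        if best_rtt < direct:
            shortcuts[(a, b)] = {"relay": best_relay, "gain_ms": direct - best_rtt}
    return shortcuts

# Toy example: the Colo-hosted relay beats the direct A->B path by 45 ms.
rtt = {("A", "B"): 120.0,
       ("A", "colo1"): 40.0, ("colo1", "B"): 35.0,
       ("A", "pl1"): 80.0, ("pl1", "B"): 70.0}
print(find_tiv_shortcuts(rtt, ["A", "B"], ["colo1", "pl1"]))
# -> {('A', 'B'): {'relay': 'colo1', 'gain_ms': 45.0}}
```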

    Measuring the Relationships between Internet Geography and RTT

    When designing distributed systems and Internet protocols, designers can benefit from statistical models of the Internet that can be used to estimate their performance. However, it is frequently impossible for these models to include every property of interest. In these cases, model builders have to select a reduced subset of network properties, and the rest will have to be estimated from those available. In this paper we present a technique for the analysis of Internet round-trip times (RTTs) and their relationship with other geographic and network properties. This technique is applied to a novel dataset comprising ∼19 million RTT measurements derived from ∼200 million RTT samples between ∼54 thousand DNS servers. Our main contribution is an information-theoretical analysis that allows us to determine the amount of information that a given subset of geographic or network variables (such as RTT or great-circle distance between geolocated hosts) gives about other variables of interest. We then provide bounds on the error that can be expected when using statistical estimators for the variables of interest based on subsets of other variables.
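    As a rough illustration of the information-theoretic approach (not the paper's estimator or dataset), the sketch below bins paired samples of two variables, here synthetic great-circle distance and RTT, and estimates their mutual information in bits; the more bits one variable carries about another, the tighter the achievable error bounds for estimators based on it.

```python
# Minimal sketch: histogram-based estimate of I(X;Y) = H(X) + H(Y) - H(X,Y)
# in bits, applied to synthetic distance/RTT samples. Data are illustrative.
import numpy as np

def entropy_bits(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information_bits(x, y, bins=32):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    return (entropy_bits(joint.sum(axis=1))    # H(X)
            + entropy_bits(joint.sum(axis=0))  # H(Y)
            - entropy_bits(joint.ravel()))     # H(X, Y)

rng = np.random.default_rng(0)
dist_km = rng.uniform(0, 10_000, 50_000)               # great-circle distance
rtt_ms = dist_km / 100 + rng.exponential(20, 50_000)   # distance-correlated RTT
print(f"I(distance; RTT) ~ {mutual_information_bits(dist_km, rtt_ms):.2f} bits")
```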

    Smartphone-based geolocation of Internet hosts

    The location of Internet hosts is frequently used in distributed applications and networking services. Examples include customized advertising, distribution of content, and position-based security. Unfortunately, the relationship between an IP address and its position is in general very weak. This motivates the study of measurement-based IP geolocation techniques, where the position of the target host is actively estimated using the delays between a number of landmarks and the target itself. This paper discusses an IP geolocation method based on crowdsourcing, where the smartphones of users operate as landmarks. Since smartphones rely on wireless connections, a specific delay-distance model was derived to capture the characteristics of this novel operating scenario.
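    The sketch below illustrates the general multilateration idea behind such measurement-based geolocation (with a made-up linear delay-distance model and hypothetical landmark coordinates, not the model derived in the paper): each smartphone landmark's RTT is converted into a distance upper bound, and the target is placed at the grid point with the smallest total constraint violation.

```python
# Minimal multilateration sketch. The delay-distance model (fixed access
# offset plus a linear km-per-ms factor) and the landmarks are illustrative.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def max_distance_km(rtt_ms, access_offset_ms=15.0, km_per_ms=100.0):
    # Hypothetical model: subtract a wireless access-link offset, then scale.
    return max(rtt_ms - access_offset_ms, 0.0) * km_per_ms

def locate(landmarks, step_deg=2):
    """landmarks: [(lat, lon, rtt_ms), ...]. Return the grid point with the
    smallest total violation of the per-landmark distance bounds."""
    best_point, best_excess = None, float("inf")
    for lat in range(-90, 91, step_deg):
        for lon in range(-180, 180, step_deg):
            excess = sum(max(haversine_km(lat, lon, la, lo) - max_distance_km(rtt), 0.0)
                         for la, lo, rtt in landmarks)
            if excess < best_excess:
                best_point, best_excess = (lat, lon), excess
    return best_point

# Hypothetical smartphone landmarks in Paris, Rome, and Berlin pinging a
# target that is actually near Rome; the result is a nearby grid point.
print(locate([(48.9, 2.4, 28.0), (41.9, 12.5, 17.0), (52.5, 13.4, 30.0)]))
```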

    How to Catch when Proxies Lie: Verifying the Physical Locations of Network Proxies with Active Geolocation

    Internet users worldwide rely on commercial network proxies both to conceal their true location and identity, and to control their apparent location. Their reasons range from mundane to security-critical. Proxy operators offer no proof that their advertised server locations are accurate. IP-to-location databases tend to agree with the advertised locations, but there have been many reports of serious errors in such databases. In this study, we estimate the locations of 2269 proxy servers from ping-time measurements to hosts in known locations, combined with AS and network information. These servers are operated by seven proxy services and, according to the operators, spread over 222 countries and territories. Our measurements show that one-third of them are definitely not located in the advertised countries, and another third might not be. Instead, they are concentrated in countries where server hosting is cheap and reliable (e.g. Czech Republic, Germany, Netherlands, UK, USA). In the process, we address a number of technical challenges in applying active geolocation to proxy servers, which may not be directly pingable and may restrict the types of packets that can be sent through them, e.g. forbidding traceroute. We also test three geolocation algorithms from previous literature, plus two variations of our own design, at the scale of the whole world.
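    The core feasibility test in active geolocation boils down to a speed-of-light argument: signals in fibre cover at most roughly 200 km per millisecond, so measured ping times bound how far a proxy can be from each landmark. The sketch below (hypothetical coordinates and RTTs, not the paper's algorithms) shows how an advertised location is ruled out when it violates any such bound:

```python
# Minimal sketch of the feasibility check: a round-trip time of t ms between
# a proxy and a landmark bounds their distance by roughly 100*t km (light in
# fibre, halved for the round trip). Coordinates and RTTs are illustrative.
import math

FIBRE_KM_PER_MS_RTT = 100.0   # ~(2/3)c one way, halved for the round trip

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def advertised_location_feasible(advertised, landmark_rtts):
    """advertised: (lat, lon); landmark_rtts: [(lat, lon, min_rtt_ms), ...]."""
    ad_lat, ad_lon = advertised
    for lm_lat, lm_lon, rtt_ms in landmark_rtts:
        if haversine_km(ad_lat, ad_lon, lm_lat, lm_lon) > rtt_ms * FIBRE_KM_PER_MS_RTT:
            return False   # the claimed location is physically impossible
    return True

# A proxy advertised in Tehran (35.7, 51.4) but measured only 4 ms away from
# a Frankfurt landmark (50.1, 8.7) cannot really be in Tehran.
print(advertised_location_feasible((35.7, 51.4), [(50.1, 8.7, 4.0)]))  # False
```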

    Meeting Real-Time Constraint of Spectrum Management in TV Black-Space Access

    The TV set feedback feature standardized in the next-generation TV system, ATSC 3.0, would enable opportunistic access to active TV channels in future Cognitive Radio Networks. This new dynamic spectrum access approach is named black-space access, as it is complementary to the current TV white space, which refers to inactive TV channels. TV black-space access can significantly increase the available spectrum of Cognitive Radio Networks in populated urban markets, where spectrum shortage is most severe while TV white space is very limited. However, to enable TV black-space access, a secondary user has to evacuate a TV channel in a timely manner when a TV user comes in. Such a strict real-time constraint is a unique challenge for the spectrum management infrastructure of Cognitive Radio Networks. In this paper, the real-time performance of spectrum management with regard to the degree of centralization of the infrastructure is modeled and tested. Based on collected empirical network latency and database response times, we analyze the average evacuation time under four structures of spectrum management infrastructure: full distribution, city-wide centralization, nation-wide centralization, and semi-national centralization. The results show that nation-wide centralization may not meet the real-time requirement, while semi-national centralization, which uses multiple co-located independent spectrum managers, can achieve real-time performance while keeping most of the operational advantages of a fully centralized structure. Comment: 9 pages, 7 figures, Technical Report
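    The evacuation-time comparison can be illustrated with a simplified model (placeholder latency figures and an assumed deadline, not the paper's empirical measurements): evacuation time is taken as report latency plus database response time plus command latency, averaged per infrastructure layout.

```python
# Minimal sketch of an evacuation-time comparison across spectrum-manager
# layouts. All latency samples and the deadline are placeholders.
import statistics

def mean_evacuation_ms(report_latency_ms, db_response_ms, command_latency_ms):
    """Average evacuation time for one layout, given per-sample latencies."""
    return statistics.mean(r + d + c for r, d, c in
                           zip(report_latency_ms, db_response_ms, command_latency_ms))

# Placeholder latency samples (ms) for two layouts of the spectrum manager.
layouts = {
    "city-wide centralization": ([12, 15, 11], [8, 9, 7], [12, 14, 13]),
    "nation-wide centralization": ([55, 70, 60], [9, 8, 10], [58, 65, 62]),
}
deadline_ms = 100.0   # assumed real-time evacuation deadline, for illustration
for name, samples in layouts.items():
    avg = mean_evacuation_ms(*samples)
    print(f"{name}: {avg:.1f} ms "
          f"({'meets' if avg <= deadline_ms else 'misses'} deadline)")
```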

    Characterizing the Role of Power Grids in Internet Resilience

    Among critical infrastructures, power grids and communication infrastructure are identified as uniquely critical since they enable the operation of all other sectors. Due to their vital role, the research community has undertaken extensive efforts to understand the complex dynamics and resilience characteristics of these infrastructures, albeit independently. However, power and communication infrastructures are also interconnected, and the nature of the Internet's dependence on power grids is poorly understood. In this paper, we take the first step toward characterizing the role of power grids in Internet resilience by analyzing the overlap of global power and Internet infrastructures. We investigate the impact of power grid failures on Internet availability and find that nearly 65% of the public Internet infrastructure components are concentrated in a few (< 10) power grid failure zones. More importantly, power grid dependencies severely limit the number of disjoint availability zones of cloud providers. When dependency on grids serving data center locations is taken into account, the number of isolated AWS Availability Zones reduces from 87 to 19. Building upon our findings, we develop NetWattZap, an Internet resilience analysis tool that generates power grid dependency-aware deployment suggestions for Internet infrastructure and application components, which can also take into account a wide variety of user requirements.
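    The availability-zone analysis can be sketched as follows (hypothetical zone-to-grid mapping, not the paper's dataset or the NetWattZap tool): availability zones that depend on the same power grid failure zone fail together, so the number of truly disjoint zones is the number of distinct grids in use.

```python
# Minimal sketch: collapse availability zones that share a power grid failure
# zone and count the remaining grid-independent zones. Mapping is hypothetical.
from collections import Counter

# Hypothetical mapping: availability zone -> power grid failure zone.
az_to_grid = {
    "use1-az1": "grid-east", "use1-az2": "grid-east", "use1-az3": "grid-east",
    "usw2-az1": "grid-west", "usw2-az2": "grid-west",
    "euc1-az1": "grid-eu-central",
}

def independent_zone_count(az_to_grid):
    """Zones sharing a grid fail together, so the number of truly disjoint
    availability zones equals the number of distinct grids in use."""
    return len(set(az_to_grid.values()))

print("AZs:", len(az_to_grid),
      "-> grid-independent zones:", independent_zone_count(az_to_grid))
print("Zones per grid:", dict(Counter(az_to_grid.values())))
```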