HLOC: Hints-Based Geolocation Leveraging Multiple Measurement Frameworks
Geographically locating an IP address is of interest for many purposes. There
are two major ways to obtain the location of an IP address: querying commercial
databases or conducting latency measurements. For structural Internet nodes,
such as routers, commercial databases are limited by low accuracy, while
current measurement-based approaches overwhelm users with setup overhead and
scalability issues. In this work we present our system HLOC, aiming to combine
the ease of database use with the accuracy of latency measurements. We evaluate
HLOC on a comprehensive router data set of 1.4M IPv4 and 183k IPv6 routers.
HLOC first extracts location hints from rDNS names, and then conducts
multi-tier latency measurements. Configuration complexity is minimized by using
publicly available large-scale measurement frameworks such as RIPE Atlas. Using
these measurements, we can confirm or disprove the location hints found in domain
names. We publicly release HLOC's ready-to-use source code, enabling
researchers to easily increase geolocation accuracy with minimal overhead.
Comment: As published in the TMA '17 conference: http://tma.ifip.org/main-conference
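The two-step idea described above (extract a location hint from the rDNS name, then use a latency measurement to confirm or disprove it) can be sketched as follows. The hint grammar, airport database, and speed-of-light budget here are illustrative assumptions, not HLOC's actual implementation; the real system covers many more hint types and measurement tiers.

```python
import re

# Hypothetical mini-map of IATA airport codes to coordinates; HLOC's real
# hint database is far larger and also covers city names and other codes.
AIRPORTS = {
    "fra": (50.03, 8.57),   # Frankfurt
    "lhr": (51.47, -0.45),  # London Heathrow
}

def extract_hints(rdns_name):
    """Return airport-code hints found as tokens in an rDNS name,
    allowing trailing digits as in 'fra15'."""
    hints = []
    for tok in re.split(r"[.\-]", rdns_name.lower()):
        m = re.fullmatch(r"([a-z]{3})\d*", tok)
        if m and m.group(1) in AIRPORTS:
            hints.append(m.group(1))
    return hints

def hint_plausible(rtt_ms, probe_km_to_hint):
    """Disprove a hint when even light in fiber (~200 km per ms, one way
    using half the RTT) could not cover the probe-to-hint distance."""
    return probe_km_to_hint <= (rtt_ms / 2.0) * 200.0

hints = extract_hints("ae-1-2.fra15.core.example.net")  # -> ["fra"]
```

A 1 ms RTT measured from a probe 5000 km away from the hinted city rules that hint out; a 10 ms RTT from 500 km away is consistent with it.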
Shortcuts through Colocation Facilities
Network overlays, running on top of the existing Internet substrate, are of
perennial value to Internet end-users in the context of, e.g., real-time
applications. Such overlays can employ traffic relays to yield path latencies
lower than the direct paths, a phenomenon known as Triangle Inequality
Violation (TIV). Past studies identify the opportunities of reducing latency
using TIVs. However, they do not investigate the gains of strategically
selecting relays in Colocation Facilities (Colos). In this work, we answer the
following questions: (i) how do Colo-hosted relays compare with other relays, as
well as with the direct Internet, in terms of latency (RTT) reductions; and (ii)
which locations are best for placing the relays to yield these reductions.
To this end, we conduct a large-scale one-month measurement of inter-domain
paths between RIPE Atlas (RA) nodes as endpoints, located at eyeball networks.
We employ as relays PlanetLab nodes, other RA nodes, and machines in Colos. We
examine the RTTs of the overlay paths obtained via the selected relays, as well
as the direct paths. We find that Colo-based relays perform the best and can
achieve latency reductions against direct paths, ranging from a few to 100s of
milliseconds, in 76% of the total cases; 75% (58% of total cases) of these
reductions require only 10 relays in 6 large Colos.
Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 2017
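The core test in this study, a Triangle Inequality Violation, is simply a relay path whose combined RTT beats the direct path. A minimal sketch with illustrative RTT values (the names and numbers below are made up, not the paper's data):

```python
# Symmetric pairwise RTTs in milliseconds between two endpoints ("a", "b")
# and two candidate relays: a Colo machine and a PlanetLab node.
RTT = {
    ("a", "b"): 120.0,
    ("a", "colo1"): 30.0,
    ("colo1", "b"): 50.0,
    ("a", "pl1"): 70.0,
    ("pl1", "b"): 65.0,
}

def rtt(x, y):
    """Look up the RTT in either direction."""
    return RTT.get((x, y), RTT.get((y, x)))

def best_relay(src, dst, relays):
    """Return (relay, overlay_rtt, saving_ms) for the best Triangle
    Inequality Violation, or None if no relay beats the direct path."""
    direct = rtt(src, dst)
    best = None
    for r in relays:
        overlay = rtt(src, r) + rtt(r, dst)
        if overlay < direct and (best is None or overlay < best[1]):
            best = (r, overlay, direct - overlay)
    return best
```

Here the Colo relay yields 30 + 50 = 80 ms against 120 ms direct, a 40 ms TIV saving, while the PlanetLab relay (135 ms) offers none.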
ZDNS: A Fast DNS Toolkit for Internet Measurement
Active DNS measurement is fundamental to understanding and improving the DNS
ecosystem. However, the absence of an extensible, high-performance, and
easy-to-use DNS toolkit has limited both the reproducibility and coverage of
DNS research. In this paper, we introduce ZDNS, a modular and open-source
active DNS measurement framework optimized for large-scale research studies of
DNS on the public Internet. We describe ZDNS' architecture, evaluate its
performance, and present two case studies that highlight how the tool can be
used to shed light on the operational complexities of DNS. We hope that ZDNS
will enable researchers to better -- and in a more reproducible manner --
understand Internet behavior.
Comment: In Proceedings of the 22nd ACM Internet Measurement Conference (IMC '22), 2022
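ZDNS itself is a Go tool, and its command-line interface and JSON output are not reproduced here. As a rough, standard-library illustration of the kind of concurrent active lookups such tooling performs at much larger scale, one might sketch:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve_a(name):
    """Resolve IPv4 addresses for one name; failures yield an empty list."""
    try:
        infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
        return name, sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return name, []

def bulk_resolve(names, workers=64):
    """Resolve many names concurrently; ZDNS's stdin-to-stdout pipeline is
    reduced to an in-memory dict purely for illustration."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(resolve_a, names))
```

This thread-pooled approach caps at the Python interpreter's limits; a production measurement tool like ZDNS instead uses lightweight concurrency and raw DNS packets to reach far higher query rates.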
I Know Where You are and What You are Sharing: Exploiting P2P Communications to Invade Users' Privacy
In this paper, we show how to exploit real-time communication applications to
determine the IP address of a targeted user. We focus our study on Skype,
although other real-time communication applications may have similar privacy
issues. We first design a scheme that calls an identified targeted user
inconspicuously to find his IP address, which can be done even if he is behind
a NAT. By calling the user periodically, we can then observe the mobility of
the user. We show how to scale the scheme to observe the mobility patterns of
tens of thousands of users. We also consider the linkability threat, in which
the identified user is linked to his Internet usage. We illustrate this threat
by combining Skype and BitTorrent to show that it is possible to determine the
file-sharing usage of identified users. We devise a scheme based on the
identification field of the IP datagrams to verify with high accuracy whether
the identified user is participating in specific torrents. We conclude that any
Internet user can leverage Skype, and potentially other real-time communication
systems, to observe the mobility and file-sharing usage of tens of millions of
identified users.
Comment: This is the authors' version of the ACM/USENIX Internet Measurement Conference (IMC) 2011 paper
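The IP identification field test mentioned above relies on the fact that many hosts of that era used a single global, incrementing IP-ID counter, so samples elicited via the two channels should interleave into one increasing sequence. A hedged sketch of that consistency check (the threshold and sampling scheme are illustrative, not the paper's):

```python
def ids_consistent(skype_ids, torrent_ids, max_gap=2000):
    """Check whether IP-ID samples taken alternately over two channels
    (e.g. Skype probes and BitTorrent traffic) form one increasing
    sequence modulo 2**16, i.e. plausibly come from one global counter."""
    merged = [v for pair in zip(skype_ids, torrent_ids) for v in pair]
    for prev, cur in zip(merged, merged[1:]):
        gap = (cur - prev) % 2 ** 16
        if gap == 0 or gap > max_gap:
            return False
    return True
```

Interleaved samples like 100, 200, 300, 400 pass; samples whose counters jump wildly between channels fail, suggesting two different hosts behind the same address.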
A Benchmark for Image Retrieval using Distributed Systems over the Internet: BIRDS-I
The performance of CBIR algorithms is usually measured on an isolated
workstation. In a real-world environment the algorithms would only constitute a
minor component among the many interacting components. The Internet
dramatically changes many of the usual assumptions about measuring CBIR
performance. Any CBIR benchmark should be designed from a networked systems
standpoint. These benchmarks typically introduce communication overhead because
the real systems they model are distributed applications. We present our
implementation of a client/server benchmark called BIRDS-I to measure image
retrieval performance over the Internet. It has been designed with the trend
toward the use of small personalized wireless systems in mind. Web-based CBIR
implies the use of heterogeneous image sets, imposing certain constraints on
how the images are organized and the type of performance metrics applicable.
BIRDS-I only requires controlled human intervention for the compilation of the
image collection and none for the generation of ground truth in the measurement
of retrieval accuracy. Benchmark image collections need to evolve
incrementally toward the storage of millions of images, and such scale-up can
only be achieved through the use of computer-aided compilation. Finally, our
scoring metric introduces a tightly optimized image-ranking window.
Comment: 24 pages. To appear in the Proc. SPIE Internet Imaging Conference, 2001
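The "image-ranking window" idea, scoring only the relevant images that appear near the top of the ranked result list, can be sketched as below. The window size and normalization are assumptions for illustration; BIRDS-I's exact weighting is not reproduced here.

```python
def window_score(ranked_ids, relevant_ids, slack=0):
    """Fraction of the relevant images that appear within a rank window
    of size len(relevant_ids) + slack at the top of the result list."""
    window = len(relevant_ids) + slack
    hits = sum(1 for img in ranked_ids[:window] if img in relevant_ids)
    return hits / len(relevant_ids)
```

For a query with three relevant images and ranking ["a", "x", "b", "c"], a tight window of 3 scores 2/3 (it misses "c"), while allowing one slack position scores 1.0, showing how the window tightness trades strictness against tolerance of near-misses.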
Neural networks and spectra feature selection for retrieval of hot gases temperature profiles
Proceedings of: International Conference on Computational Intelligence for Modelling, Control and Automation, 2005, and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vienna, Austria, 28-30 Nov. 2005.
Neural networks appear to be a promising tool for solving so-called inverse problems, which aim to retrieve certain physical properties related to the radiative transfer of energy. In this paper, the capability of neural networks to retrieve the temperature profile in a combustion environment is demonstrated. The temperature profile retrieval is obtained from measurements of the spectral distribution of energy radiated by the hot gases (combustion products) at wavelengths in the infrared region. High spectral resolution is usually needed to reach a given accuracy in the retrieval process. However, this great amount of information makes a reduction of the dimensionality of the problem mandatory; in this sense, a careful selection of wavelengths in the spectrum must be performed. For this purpose, principal component analysis is used to automatically determine those wavelengths in the spectrum that carry relevant information on the temperature distribution. A multilayer perceptron is then trained with the energies associated with the selected wavelengths. The results presented show that a multilayer perceptron combined with principal component analysis is a suitable alternative in this field.
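The wavelength-selection step described above, using principal component analysis to find the spectral channels that carry the temperature signal, can be sketched on synthetic data. Everything here is illustrative (the spectra are random, the informative wavelength indices are planted), and the subsequent multilayer perceptron training stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "spectra": 200 samples x 50 wavelengths, where only a few
# planted wavelengths actually vary with the (hidden) temperature.
temps = rng.uniform(600.0, 1200.0, size=200)
spectra = rng.normal(0.0, 0.05, size=(200, 50))
informative = [5, 17, 33]                # illustrative planted indices
for w in informative:
    spectra[:, w] += temps / 1000.0      # temperature-driven signal

# PCA via SVD on centered data; the loadings of the first principal
# axis reveal which wavelengths carry the dominant variance.
X = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
leading = np.abs(vt[0])                  # first principal axis loadings
selected = np.argsort(leading)[-3:]      # top-3 informative wavelengths
```

In a full pipeline, the projections onto the leading components (or the energies at the selected wavelengths) would then feed the multilayer perceptron that regresses the temperature profile.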