91 research outputs found

    Assessing the geographic resolution of exhaustive tabulation for geolocating Internet hosts

    Full text link
    Peer reviewed

    Geolocation of Internet hosts relies mainly on exhaustive tabulation techniques. These techniques consist in building a database that keeps the mapping between IP blocks and a geographic location. Relying on a single location for a whole IP block requires using a coarse enough geographic resolution. As this geographic resolution is not made explicit in databases, we try in this paper to better understand it by comparing the location estimates of databases with a well-established geolocation technique based on active measurements. We show that the geographic resolution of geolocation databases is far coarser than the resolution provided by active measurements for individual IP addresses. Given the lack of information in databases about the expected location error within each IP block, one cannot have much confidence in the accuracy of their location estimates. Geolocation databases should either provide information about the expected accuracy of the location estimates within each block, or reveal how their location estimates have been built; otherwise, they have to be trusted blindly. FP6-FET ANA (FP6-IST-27489)
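
    The comparison methodology above reduces to measuring, for each target IP, the great-circle distance between the database's estimate and the measurement-based estimate. Below is a minimal sketch of that comparison; the data structures and coordinates are hypothetical, not taken from the paper.

```python
# Compare a geolocation database's per-block estimate against an active
# measurement estimate by computing the great-circle (haversine) distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical per-IP estimates: one location per IP block in the database,
# versus a per-address estimate from active measurements.
db_estimate = {"192.0.2.17": (50.85, 4.35)}
active_estimate = {"192.0.2.17": (51.22, 4.40)}

for ip in db_estimate:
    err = haversine_km(*db_estimate[ip], *active_estimate[ip])
    print(f"{ip}: database deviates {err:.1f} km from the measured location")
```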

    Localization of IP stations based on a probabilistic delay measurement model

    Get PDF
    This master's thesis deals with Internet host localization methods, more precisely with determining the geographic position of an unknown host connected to the network by measuring RTT delay. The first part describes the delay components that can occur in the network and the tools for measuring them. The next part is devoted to classifying and describing the two kinds of localization methods: passive methods, which use existing data about the target host, and active methods, which rely on RTT delay measurement. The main part focuses on the GeoWeight method, which estimates a host's geographic position from RTT delay measurements; it builds on the principles of the CBG method and refines them by introducing weights based on the probability of the target host's presence. The last part describes an application designed to determine the geographic location of a target host using the GeoWeight method. The application was then tested in the PlanetLab experimental network, which was used to carry out the delay measurements. Finally, the results measured with the application are compared with other localization methods (CBG, Octant, SOI, GeoIP).
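
    A minimal sketch of the CBG-style constraint intersection that GeoWeight refines: each landmark's RTT bounds the target's distance, and the estimate is chosen inside the intersection of the constraint circles. A toy weight (depth inside all circles) stands in for GeoWeight's probability-derived weights; the landmark positions, RTTs, and speed constant are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

SPEED_KM_PER_MS = 100.0  # conservative signal speed; real CBG calibrates per landmark

def haversine_km(p, q):
    (lat1, lon1), (lat2, lon2) = [(radians(a), radians(b)) for a, b in (p, q)]
    x = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(x))

# Hypothetical landmarks: ((lat, lon), measured RTT in ms).
landmarks = [((48.15, 17.11), 14.0), ((50.08, 14.44), 9.0), ((49.20, 16.61), 6.0)]
radii = [(pos, (rtt / 2) * SPEED_KM_PER_MS) for pos, rtt in landmarks]

best = None
for lat10 in range(460, 530):          # coarse 0.1-degree grid over central Europe
    for lon10 in range(120, 200):
        pt = (lat10 / 10, lon10 / 10)
        slack = [r - haversine_km(pt, pos) for pos, r in radii]
        if all(s >= 0 for s in slack):             # inside every constraint circle
            weight = min(slack)                    # toy weight: depth inside constraints
            if best is None or weight > best[0]:
                best = (weight, pt)

print("estimated position:", best[1] if best else "constraints do not intersect")
```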

    Inferring Network Usage from Passive Measurements in ISP Networks: Bringing Visibility of the Network to Internet Operators

    Get PDF
    The Internet has been evolving with us over time; nowadays people depend on it for many of the simple activities of their lives, and it is common to use it for voice and video communications, social networking, banking, and shopping. Current trends in Internet applications such as Web 2.0, cloud computing, and the Internet of Things are bound to bring higher traffic volumes and more heterogeneous traffic. In addition, privacy concerns and network security threats have widely promoted the use of encryption in network communications. All these factors make network management an evolving environment that becomes more difficult every day. This thesis focuses on keeping track of some of these changes, observing the Internet from an ISP viewpoint and exploring several aspects of network visibility, giving insights into what contents or services customers retrieve and how these contents are provided to them. This information is generally inferred by characterizing and analyzing data collected with passive traffic monitoring tools on operational networks. Analysis and characterization of passively collected traffic is challenging: ISPs have no control over the traffic end-users generate, and that traffic might be encrypted or encoded in a way that is unfeasible to decode, creating the need for reverse engineering to give the Internet operator a good picture. Despite these challenges, the thesis presents a characterization of P2P-TV usage for a commercial, proprietary, closed application that encrypts or encodes its traffic, making it quite difficult to discern what is going on by simply observing the data carried by the protocol. It then presents DN-Hunter, an application that makes a large part of the network traffic visible even when encryption or encoding is in use. Finally, it presents a case study that applies DN-Hunter to understand Amazon Web Services, the most prominent cloud provider offering computing, storage, and content delivery platforms, unveiling its infrastructure, the pervasiveness of content, and its traffic allocation policies. Findings reveal that most of the content residing on cloud computing and Internet storage infrastructures is served by a single Amazon datacenter located in Virginia, despite it appearing to be the worst-performing one for Italian users. This causes traffic to take long and expensive paths through the network. Since AWS offers no automatic migration and load-balancing policies among different locations, content is exposed to outages, as observed in the presented datasets.
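
    The core DN-Hunter idea described above is to remember which DNS answer each client received for a server IP and later use that name to label the otherwise opaque flows the client opens toward that IP. A minimal sketch follows; the event stream is hypothetical.

```python
# Map (client, server IP) pairs to the most recently resolved DNS name, then
# label subsequent flows toward that IP even if their payload is encrypted.
dns_cache = {}  # (client_ip, server_ip) -> most recently resolved name

def on_dns_response(client_ip, name, answer_ips):
    for server_ip in answer_ips:
        dns_cache[(client_ip, server_ip)] = name

def label_flow(client_ip, server_ip):
    return dns_cache.get((client_ip, server_ip), "unknown")

on_dns_response("10.0.0.5", "s3.amazonaws.com", ["203.0.113.9"])
print(label_flow("10.0.0.5", "203.0.113.9"))   # -> s3.amazonaws.com
print(label_flow("10.0.0.5", "198.51.100.2"))  # -> unknown (no DNS response seen)
```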

    Internet Protocol Geolocation: Development of a Delay-Based Hybrid Methodology for Locating the Geographic Location of a Network Node

    Get PDF
    Internet Protocol Geolocation (IP Geolocation), the process of determining the approximate geographic location of an IP addressable node, has proven useful in a wide variety of commercial applications, including market research, redirection for performance enhancement, restricting content, and combating fraud. Potential military applications include securing remote access via geographic authentication, intelligence collection, and cyber attack attribution. IP Geolocation methods can be divided into three basic categories based upon what information is used to determine the geographic location of the given IP address: 1) information contained in databases, 2) information that is leaked during connections with the IP of interest, and 3) network-based routing and timing information. This thesis focuses upon an analysis in the third category: delay-based methods. Specifically, a comparative analysis of three existing delay-based IP Geolocation methods is conducted: Upperbound Multilateration (UBM), Constraint Based Geolocation (CBG), and Time to Location Heuristic (TTLH). Based upon analysis of the results, a new hybrid methodology is proposed that combines the three existing methods to improve geolocation accuracy. Simulation results showed that the new hybrid methodology improved the success rate from 80.15% to 91.66% compared to the shotgun TTLH method.
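
    A minimal sketch of one way to fuse several delay-based estimates into a hybrid answer, loosely inspired by the combination of UBM, CBG, and TTLH described above; the thesis's exact combination rule is not reproduced. Each method is assumed to return a position plus an error radius, and the combiner takes an error-weighted centroid. All numbers are hypothetical.

```python
# Error-weighted centroid of per-method estimates: methods with smaller
# error radii contribute more to the hybrid position.
estimates = {
    "UBM":  ((39.0, -84.5), 120.0),   # ((lat, lon), error radius in km)
    "CBG":  ((39.4, -84.1), 80.0),
    "TTLH": ((39.2, -84.6), 150.0),
}

total_weight = sum(1.0 / err for _, err in estimates.values())
lat = sum(pos[0] / err for pos, err in estimates.values()) / total_weight
lon = sum(pos[1] / err for pos, err in estimates.values()) / total_weight
print(f"hybrid estimate: ({lat:.3f}, {lon:.3f})")
```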

    Active IP geolocation for verifying host position on the Internet

    Get PDF
    This dissertation deals with methods for finding the geographic location of a device on the Internet from knowledge of its IP address. This process is called IP geolocation and is currently handled either by geolocation databases or by measuring network properties toward the target IP address. The disadvantage of today's geolocation databases is that some of the locations they provide are incorrect and can lie far from the true position. The aim of this thesis is to develop a method that verifies a position from a geolocation database using delay measurements. To this end, the thesis analyzes in detail how the individual delay components affect the accuracy of the maximum distance calculated from the delay measured between a landmark and the target IP address. For the same reason, a long-term delay measurement was performed, in which IP geolocation accuracy was evaluated using calibration data from previous measurements. On this background, the Cable Length Based Geolocalisation (CLBG) method is proposed. Its principle builds on the properties of the delay components that depend on the length of the transmission media. The method first measures the round-trip time (RTT) and removes the delay introduced by intermediate devices and end stations; the geographic distance is then estimated using the signal propagation speed in the transmission media. Further, an experimentally determined cable-winding parameter is used to set a constraint boundary around each landmark. The intersection of the constraints of the individual landmarks defines the region where the target IP address lies. CLBG gives better geolocation results than simpler methods (ShortestPing, GeoPing and SOI) and accuracy comparable to more advanced ones (CBG and Octant). The disadvantage of CLBG is the size of the region in which the target lies, but this follows from its purpose: the correctness of a position from a geolocation database is verified by checking whether it falls within that region.
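
    A minimal sketch of the CLBG distance bound described above: strip assumed processing delays from the measured RTT, convert the remaining propagation delay into cable length, and shrink it by a cable-winding factor. The constants below are hypothetical placeholders for the experimentally calibrated values used in the thesis.

```python
PROPAGATION_KM_PER_MS = 200.0   # ~2/3 of c, typical for fibre/copper media
PER_HOP_DELAY_MS = 0.1          # assumed processing delay per intermediate device
WINDING_FACTOR = 1.6            # cable length / straight-line distance (assumed)

def max_distance_km(rtt_ms, hop_count, end_host_delay_ms=0.2):
    """Upper bound on the landmark-target distance implied by one RTT sample."""
    # Remove intermediate-device and end-station delay, keeping propagation only.
    propagation_ms = rtt_ms - 2 * hop_count * PER_HOP_DELAY_MS - end_host_delay_ms
    propagation_ms = max(propagation_ms, 0.0)
    cable_km = (propagation_ms / 2) * PROPAGATION_KM_PER_MS   # one-way cable length
    return cable_km / WINDING_FACTOR                          # geographic distance bound

# One constraint circle per landmark; the target lies in their intersection.
print(f"{max_distance_km(rtt_ms=8.4, hop_count=7):.0f} km")   # -> 425 km
```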

    Enhancing User Experience by Extracting Application Intelligence from Network Traffic

    Full text link
    Internet Service Providers (ISPs) continue to receive complaints from users about poor experience with diverse Internet applications, ranging from video streaming and gaming to social media and teleconferencing. Identifying and rectifying the root cause of these experience events requires the ISP to know more than coarse-grained measures like link utilizations and packet losses. Application classification and experience measurement using traditional deep packet inspection (DPI) techniques are starting to fail with the increasing adoption of traffic encryption, and are not cost-effective given the explosive growth in traffic rates. This thesis leverages the emerging paradigms of machine learning and programmable networks to design and develop systems that can deliver application-level intelligence to ISPs at a scale, cost, and accuracy not achieved before. It makes four new contributions. Our first contribution develops a novel transformer-based neural network model that classifies applications based on their traffic shape, agnostic to encryption. We show that this approach achieves an F1-score above 97% for diverse application classes such as video streaming and gaming. Our second contribution builds and validates algorithmic and machine learning models to estimate user experience metrics for on-demand and live video streaming applications, such as bitrate, resolution, buffer states, and stalls. For our third contribution, we analyse ten popular latency-sensitive online multiplayer games and develop data structures and algorithms to rapidly and accurately detect each game using automatically generated signatures. By combining this with active latency measurement and geolocation analysis of the game servers, we help ISPs determine better routing paths to reduce game latency. Our fourth and final contribution develops a prototype of a self-driving network that autonomously intervenes just in time to alleviate the suffering of applications impacted by transient congestion. We design and build a complete system that extracts application-aware network telemetry from programmable switches and dynamically adapts QoS policies to manage bottleneck resources in an application-fair manner, and we show that it outperforms known queue management techniques in various traffic scenarios. Taken together, our contributions allow ISPs to measure and tune their networks in an application-aware manner to offer their users the best possible experience.
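
    The "traffic shape" features the classifier above relies on are per-flow sequences of packet sizes, directions, and inter-arrival gaps, which survive encryption. A minimal sketch of extracting such a sequence follows; the transformer model itself is not reproduced, and the packet tuples are hypothetical.

```python
# Turn a flow's first packets into an encryption-agnostic "shape" sequence
# of (signed size, inter-arrival gap) pairs, suitable as classifier input.
def traffic_shape(packets, max_len=16):
    """packets: list of (timestamp_s, size_bytes, direction) with direction +/-1."""
    feats, prev_t = [], None
    for t, size, direction in packets[:max_len]:
        gap_ms = 0.0 if prev_t is None else (t - prev_t) * 1000
        feats.append((size * direction, round(gap_ms, 2)))  # sign encodes direction
        prev_t = t
    return feats

flow = [(0.000, 120, +1), (0.031, 1460, -1), (0.032, 1460, -1), (0.090, 80, +1)]
print(traffic_shape(flow))
```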

    Leveraging TV white space to monitor game conservation environments

    Get PDF
    A Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Mobile Telecommunications and Innovation (MSc. MTI)

    Installation of camera-traps by conservancies has been gaining interest in recent years in Kenya, driven by the increased scientific need to carry out wildlife research and to monitor the movement patterns of wild game, which helps address issues such as human-wildlife conflict and poaching. Camera-traps are also gaining traction with safari camps as a way to enhance customer experience. Their implementation, however, is limited by the difficulty of remotely accessing camera feeds, mainly because many of these game environments are located in rural parts of Kenya with poor connectivity. The focus of this study was to establish the best approach for implementing a camera-trap that allows remote access to feeds in game environments, leveraging the connectivity provided by deploying a Television (TV) White Space network. Using questionnaires, an online survey was conducted in a selected conservancy and a safari camp to investigate the challenges and the state of technology that limit the adoption of networked game cameras. Secondary sources were also studied to understand existing connectivity technologies in the realm of the Internet of Things (IoT). The study used a combination of hardware and software technologies to realise the model in a TV White Space environment. A networked game-camera prototype that delivers video feeds to a remote mobile interface was developed; it used a programmed Raspberry Pi camera and System-on-Chip to relay the gathered feeds in real time to an Android-based mobile-web interface. The prototype was tested by ordinary users in a Wi-Fi environment, TV White Space connectivity experts, and conservation officers.
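
    A minimal sketch of a camera-trap node like the prototype above: capture a still image periodically with the Raspberry Pi camera and expose the latest frame over HTTP, so a remote mobile-web client on the TV White Space link can fetch it. This uses the well-known picamera API; the file name, port, and interval are hypothetical, and a real deployment would stream video rather than stills.

```python
import time, threading
from http.server import HTTPServer, SimpleHTTPRequestHandler
from picamera import PiCamera  # available on Raspberry Pi OS

def capture_loop(interval_s=10):
    camera = PiCamera()
    camera.resolution = (640, 480)
    while True:
        camera.capture("latest.jpg")   # overwritten each cycle; clients GET /latest.jpg
        time.sleep(interval_s)

# Capture in the background; serve the working directory (incl. latest.jpg) over HTTP.
threading.Thread(target=capture_loop, daemon=True).start()
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```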

    Efficient algorithms for passive network measurement

    Get PDF
    Network monitoring has become a necessity to aid in the management and operation of large networks. Passive network monitoring consists of extracting metrics (or any information of interest) by analyzing the traffic that traverses one or more network links. Extracting information from a high-speed network link is challenging, given the great data volumes and short packet inter-arrival times. These difficulties can be alleviated by using extremely efficient algorithms or by sampling the incoming traffic. This work improves the state of the art in both approaches. For one-way packet delay measurement, we propose a series of improvements over a recently introduced technique called the Lossy Difference Aggregator. A main limitation of this technique is that it does not provide per-flow measurements. We propose a data structure called the Lossy Difference Sketch that is capable of providing such per-flow delay measurements and, unlike recent related works, does not rely on any model of packet delays. For the problem of collecting measurements under the sliding-window model, we focus on estimating the number of active flows and on traffic filtering. Using a common approach, we propose one algorithm for each problem that obtains great accuracy with significant resource savings. In the traffic sampling area, the selection of the sampling rate is a crucial aspect. The most sensible approach involves dynamically adjusting sampling rates according to network traffic conditions, which is known as adaptive sampling. We propose an algorithm called Cuckoo Sampling that can operate with a fixed memory budget and perform adaptive flow-wise packet sampling. It is based on a very simple data structure and is computationally extremely lightweight. The techniques presented in this work are thoroughly evaluated through a combination of theoretical and experimental analysis.

    Postprint (published version)
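
    A minimal sketch of the Lossy Difference Aggregator idea that the per-flow extension above builds on: sender and receiver hash each packet into a small array of (timestamp-sum, packet-count) buckets; buckets whose counts match on both sides were unaffected by loss, and the difference of their timestamp sums yields an average one-way delay. The bucket count and packet trace are hypothetical.

```python
NUM_BUCKETS = 4

def make_lda():
    return [[0.0, 0] for _ in range(NUM_BUCKETS)]  # [timestamp sum, packet count]

def record(lda, pkt_id, timestamp):
    b = hash(pkt_id) % NUM_BUCKETS
    lda[b][0] += timestamp
    lda[b][1] += 1

# Hypothetical trace: (packet id, send time s, one-way delay s, lost?).
sender, receiver = make_lda(), make_lda()
for pkt_id, t_send, delay, lost in [(1, 0.00, 0.010, False), (2, 0.01, 0.012, False),
                                    (3, 0.02, 0.011, True), (4, 0.03, 0.013, False)]:
    record(sender, pkt_id, t_send)
    if not lost:
        record(receiver, pkt_id, t_send + delay)

# Only buckets with matching counts are usable; loss invalidates a bucket.
usable = [(r[0] - s[0], s[1]) for s, r in zip(sender, receiver) if s[1] == r[1] and s[1] > 0]
n = sum(c for _, c in usable)
print(f"mean one-way delay = {sum(d for d, _ in usable) / n * 1000:.1f} ms over {n} packets")
```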