12 research outputs found

    Improving the Accuracy of IP Geolocation Based on Data Provided by Open IP Geoservices

    Get PDF
    IP geolocation is the process of determining the real geographic location of an electronic device connected to the Internet from its global network address [1]. It is now widely used in Internet commerce, marketing and advertising, information security [2], and other areas of human activity. There are different approaches to determining the location of a remote network device, which differ both in the type of information analyzed (packet transmission delay, DNS server resource records, web page content) and in the result produced (country or city name, postal address, probable area of location, or exact coordinates) [3, 4]. IP geolocation error depends on the country where the device is located, the population density, and the type of network device, and ranges from several tens of meters to hundreds of kilometers. Moreover, for the same input data, the results of different IP geoservices can vary significantly. The object of this study is public IP geoservices that geolocate nodes of the global network based on their IP addresses, and specifically their accuracy and completeness. The sample of IP geoservices for testing was drawn from the most popular ones [5]. During the study, IP geolocation results were compared with reliable information about the location of a set of IP addresses; country, city, and geographic coordinates were used as accuracy indicators. Based on a comparative analysis of the test results, conclusions were drawn about the accuracy of the IP geoservices on the selected indicators, their essential properties, and the dependence of geolocation error on settlement size. To improve the accuracy of IP georeferencing, the authors propose an ensemble method that averages the coordinates obtained from several IP geoservices.
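    The ensemble step described in this abstract can be sketched in a few lines. The function name and the sample coordinates below are illustrative, not taken from the paper; the paper's actual method may weight or filter services differently:

```python
from statistics import mean

def ensemble_geolocate(estimates):
    """Average the (lat, lon) estimates returned by several IP geoservices.

    `estimates` is a list of (latitude, longitude) tuples; entries where a
    service returned no coordinates should be filtered out beforehand.
    """
    if not estimates:
        raise ValueError("no coordinate estimates to combine")
    lat = mean(p[0] for p in estimates)
    lon = mean(p[1] for p in estimates)
    return lat, lon

# Hypothetical responses from three services for the same IP address.
points = [(55.75, 37.62), (55.80, 37.58), (55.70, 37.66)]
print(ensemble_geolocate(points))  # roughly (55.75, 37.62)
```

    A plain mean is the simplest ensemble; a robust variant could drop outlier services before averaging.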

    Longitudinal Study of an IP Geolocation Database

    Full text link
    IP geolocation - the process of mapping network identifiers to physical locations - has myriad applications. We examine a large collection of snapshots from a popular geolocation database and take a first look at its longitudinal properties. We define metrics of IP geo-persistence, prevalence, coverage, and movement, and analyse 10 years of geolocation data at different location granularities. Across different classes of IP addresses, we find that significant location differences can exist even between successive instances of the database - a previously underappreciated source of potential error when using geolocation data: 47% of end-user IP addresses moved by more than 40 km in 2019. To assess the sensitivity of research results to the instance of the geo database, we reproduce prior research that depended on geolocation lookups. In this case study, which analyses geolocation database performance on routers, we demonstrate the impact of these temporal effects: the median distance from ground truth shifted from 167 km to 40 km when using a snapshot taken two months apart. Based on our findings, we make recommendations for best practices when using geolocation databases in order to encourage reproducibility and sound measurement. Comment: Technical report related to a paper that appeared in the Network Traffic Measurement and Analysis Conference (TMA 2021).
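    The "movement" metric above reduces to a great-circle distance between an address's coordinates in two database snapshots. A minimal sketch, where the sample coordinates are made up for illustration:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Hypothetical locations of the same IP address in two snapshots:
# near Berlin, then a point roughly 47 km to the east.
moved = haversine_km(52.52, 13.405, 52.52, 14.1)
print(moved > 40)  # True: this address would count as having "moved" > 40 km
```

    Applying this per address across snapshot pairs yields the movement statistics the paper reports.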

    VerLoc: Verifiable Localization in Decentralized Systems

    Full text link
    This paper tackles the challenge of reliably determining the geo-location of nodes in decentralized networks, considering adversarial settings and without depending on any trusted landmarks. In particular, we consider active adversaries that control a subset of nodes, announce false locations, and strategically manipulate measurements. To address this problem we propose, implement, and evaluate VerLoc, a system that allows verifying the claimed geo-locations of network nodes in a fully decentralized manner. VerLoc securely schedules round-trip time (RTT) measurements between randomly chosen pairs of nodes. Trilateration is then applied to the set of measurements to verify claimed geo-locations. We evaluate VerLoc both with simulations and in the wild using a prototype implementation integrated into the Nym network (currently run by thousands of nodes). We find that VerLoc can localize nodes in the wild with a median error of 60 km, and that in attack simulations it is capable of detecting and filtering out adversarial timing manipulations for network setups with up to 20% malicious nodes.
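    A simplified illustration of the RTT-based verification idea: a claimed location is implausible if any reference node's measured RTT is shorter than the geodesic distance allows. The propagation-speed constant and the sample values below are assumptions for illustration, not VerLoc's calibrated parameters:

```python
from math import radians, sin, cos, asin, sqrt

# Roughly 2/3 of the speed of light, typical of signals in fibre (assumed).
SPEED_KM_PER_MS = 200.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    a = sin(radians(lat2 - lat1) / 2) ** 2 + \
        cos(phi1) * cos(phi2) * sin(radians(lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def location_plausible(claimed, references):
    """Reject a claimed (lat, lon) if any reference node lies farther away
    than its measured RTT could physically allow."""
    for (lat, lon), rtt_ms in references:
        max_km = (rtt_ms / 2) * SPEED_KM_PER_MS  # one-way propagation bound
        if haversine_km(claimed[0], claimed[1], lat, lon) > max_km:
            return False
    return True

# Hypothetical check: a node claims to be in Berlin, but a Paris reference
# measured a 2 ms RTT - far too fast for the ~880 km Berlin-Paris distance.
print(location_plausible((52.52, 13.40), [((48.86, 2.35), 2.0)]))  # False
```

    VerLoc itself goes further, combining many such measurements via trilateration rather than a single feasibility bound.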

    Online housing search: A gravity model approach

    Get PDF

    Systems for characterizing Internet routing

    Get PDF
    2018 Spring. Includes bibliographical references. Today the Internet plays a critical role in our lives; we rely on it for communication, business, and, more recently, smart home operations. Users expect high performance and availability of the Internet. To meet such high demands, all Internet components, including routing, must operate at peak efficiency. However, events that hamper the routing system over the Internet are very common, causing millions of dollars of financial loss, traffic exposed to attacks, or even loss of national connectivity. Moreover, there is sparse real-time detection and reporting of such events for the public. A key challenge in addressing such issues is the lack of a methodology to study, evaluate, and characterize Internet connectivity. While many networks operating autonomously have made the Internet robust, the complexity of understanding how users interconnect, interact, and retrieve content has also increased. Characterizing how data is routed, measuring dependency on external networks, and fast outage detection using public measurement infrastructures and data sources have become very necessary. From a regulatory standpoint, there is an immediate need for systems to detect and report routing events where a content provider's routing policies may run afoul of state policies. In this dissertation, we design, build, and evaluate systems that leverage existing infrastructure and report routing events in near-real time. In particular, we focus on geographic routing anomalies, i.e., detours; routing failures, i.e., outages; and measuring structural changes in routing policies.

    BGP-Multipath Routing in the Internet

    Get PDF
    BGP-Multipath, or BGP-M, is a routing technique for balancing traffic load in the Internet. It enables a Border Gateway Protocol (BGP) border router to install multiple ‘equally-good’ paths to a destination prefix. While other multipath routing techniques are deployed at internal routers, BGP-M is deployed at border routers, where traffic is shared on multiple border links between Autonomous Systems (ASes). Although there is a considerable number of research efforts on multipath routing, there is so far no dedicated measurement or study of BGP-M in the literature. This thesis presents the first systematic study of BGP-M. I proposed a novel approach to inferring the deployment of BGP-M by querying Looking Glass (LG) servers. I conducted a detailed investigation of the deployment of BGP-M in the Internet. I also analysed BGP-M’s routing properties based on traceroute measurements using RIPE Atlas probes. My research has revealed that BGP-M is already used in the Internet. In particular, Hurricane Electric (AS6939), a Tier-1 network operator, has deployed BGP-M at border routers across its global network to hundreds of its neighbour ASes on both the IPv4 and IPv6 Internet. My research provides state-of-the-art knowledge and insights into the deployment, configuration, and operation of BGP-M. The data, methods, and analysis introduced in this thesis can be immensely valuable to researchers, network operators, and regulators who are interested in improving the performance and security of Internet routing. This work has raised awareness of BGP-M and may promote wider deployment of BGP-M in the future, because BGP-M not only provides all the benefits of multipath routing but also has distinct advantages in terms of flexibility, compatibility, and transparency.
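    One way the Looking Glass inference could look in miniature: in Cisco-style `show ip bgp` output, `>` flags the best path and `m` flags additional multipath entries, so more than one installed path for a prefix hints at BGP-M. The snippet format below is a simplified assumption for illustration, not the thesis's actual parser:

```python
def count_installed_paths(lg_lines):
    """Count paths a router reports as installed for a prefix.

    Each element of `lg_lines` is one route entry from a simplified,
    Cisco-style 'show ip bgp' listing; the status flags sit in the
    first three columns ('>' = best path, 'm' = multipath entry).
    """
    installed = 0
    for line in lg_lines:
        flags = line[:3]  # status column of the route entry
        if ">" in flags or "m" in flags:
            installed += 1
    return installed

# Hypothetical LG snippet: one best path plus one multipath entry.
snippet = [
    "*>  203.0.113.0/24  192.0.2.1   0 6939 i",
    "*m  203.0.113.0/24  192.0.2.2   0 6939 i",
    "*   203.0.113.0/24  192.0.2.3   0 1299 i",
]
print(count_installed_paths(snippet))  # 2 -> suggests BGP-M is deployed
```

    A count greater than one means the router installed multiple equally-good paths, which is exactly the observable signature of BGP-M at a border router.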

    Leveraging Conventional Internet Routing Protocol Behavior to Defeat DDoS and Adverse Networking Conditions

    Get PDF
    The Internet is a cornerstone of modern society. Yet increasingly devastating attacks against the Internet threaten to undermine the Internet's success at connecting the unconnected. Of all the adversarial campaigns waged against the Internet and the organizations that rely on it, distributed denial of service, or DDoS, tops the list of the most volatile attacks. In recent years, DDoS attacks have been responsible for large swaths of the Internet blacking out, while other attacks have completely overwhelmed key Internet services and websites. Core to the Internet's functionality is the way in which traffic on the Internet gets from one destination to another. The set of rules, or protocol, that defines the way traffic travels the Internet is known as the Border Gateway Protocol, or BGP, the de facto routing protocol on the Internet. Advanced adversaries often target the most used portions of the Internet by flooding the routes benign traffic takes with malicious traffic designed to cause widespread traffic loss to targeted end users and regions. This dissertation focuses on examining the following thesis statement: rather than seek to redefine the way the Internet works to combat advanced DDoS attacks, we can leverage conventional Internet routing behavior to mitigate modern distributed denial of service attacks. The research in this work breaks down into a single arc with three independent, but connected, thrusts, which demonstrate that the aforementioned thesis is possible, practical, and useful. The first thrust demonstrates that this thesis is possible by building and evaluating Nyx, a system that can protect Internet networks from DDoS using BGP, without an Internet redesign and without cooperation from other networks. This work reveals that Nyx is effective in simulation for protecting Internet networks and end users from the impact of devastating DDoS.
    The second thrust examines the real-world practicality of Nyx, as well as other systems which rely on real-world BGP behavior. Through a comprehensive set of real-world Internet routing experiments, this second thrust confirms that Nyx works effectively in practice beyond simulation, as well as revealing novel insights about the effectiveness of other Internet security defensive and offensive systems. We then follow these experiments by re-evaluating Nyx under the real-world routing constraints we discovered. The third thrust explores the usefulness of Nyx for mitigating DDoS against a crucial industry sector, power generation, by exposing the latent vulnerability of the U.S. power grid to DDoS and showing how a system such as Nyx can protect electric power utilities. This final thrust finds that the current set of exposed U.S. power facilities are widely vulnerable to DDoS attacks that could induce blackouts, and that Nyx can be leveraged to reduce the impact of these targeted DDoS attacks.
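    As a toy illustration of the general idea of routing around adversity (not Nyx's actual BGP mechanism, which the dissertation describes), one can model the AS-level topology as a graph and search for a path that avoids DDoS-congested links. All AS numbers and links below are made up:

```python
from collections import deque

def route_avoiding(graph, src, dst, bad_links):
    """Find a shortest AS-level path from src to dst that avoids bad links.

    `graph` maps each AS to its neighbours; `bad_links` lists inter-AS
    links treated as unusable (e.g. flooded by DDoS traffic). Uses BFS,
    so the returned path has the fewest hops among detours.
    """
    bad = {frozenset(link) for link in bad_links}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in bad:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable without the bad links

# Hypothetical AS graph; the direct link AS1-AS4 is under DDoS.
graph = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
print(route_avoiding(graph, 1, 4, [(1, 4)]))  # [1, 2, 3, 4]
```

    Real inter-domain routing is policy-driven rather than shortest-path, which is why Nyx works through BGP itself; the sketch only conveys the detour intuition.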