16 research outputs found

    Finding and Analyzing Evil Cities on the Internet

    IP Geolocation is used to determine the geographical location of Internet users based on their IP addresses. When it comes to security, most traditional geolocation analysis is performed at the country level. Since countries usually have many cities and towns of different sizes, these cities can be expected to behave differently with respect to malicious activities. Therefore, in this paper we refine geolocation analysis to the city level. The idea is to find the most dangerous cities on the Internet and observe how they behave. This information can then be used by security analysts to improve their methods and tools. To perform this analysis, we have obtained and evaluated data from a real-world honeypot network of 125 hosts and from production e-mail servers.
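
    The city-level refinement described above amounts to geolocating each malicious source address and counting hits per city. A minimal sketch, assuming a MaxMind-style GeoLite2 city database and a list of attacker IPs extracted from honeypot or mail-server logs (both inputs are illustrative, not the paper's exact dataset):

```python
# Rank cities by the number of malicious hosts observed.
# Assumes: a GeoLite2-City database file and a list of attacker IPs
# (both illustrative inputs; not the paper's dataset or method).
from collections import Counter
import geoip2.database   # pip install geoip2
import geoip2.errors

def rank_evil_cities(attacker_ips, mmdb_path="GeoLite2-City.mmdb", top=10):
    counts = Counter()
    with geoip2.database.Reader(mmdb_path) as reader:
        for ip in attacker_ips:
            try:
                rec = reader.city(ip)
            except geoip2.errors.AddressNotFoundError:
                continue  # skip addresses without a city-level mapping
            city = rec.city.name or "unknown"
            country = rec.country.iso_code or "??"
            counts[(country, city)] += 1
    return counts.most_common(top)

# Example with placeholder addresses:
# print(rank_evil_cities(["198.51.100.7", "203.0.113.9"]))
```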

    Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection

    The distribution of malicious hosts over the IP address space is far from uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming the so-called Bad Neighborhoods. This phenomenon has been previously exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be used interchangeably to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
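
    Both questions, how source-specific a blacklist is and whether blacklists are interchangeable, can be phrased as set comparisons. A minimal sketch, assuming each blacklist has already been normalized to a set of /24 prefixes (illustrative, not the paper's exact methodology):

```python
# Compare Bad Neighborhood blacklists represented as sets of /24 prefixes
# (e.g. "203.0.113.0/24"); inputs and names are illustrative.
from itertools import combinations

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def pairwise_overlap(blacklists):
    """blacklists: dict mapping source name -> set of /24 prefixes."""
    return {(x, y): jaccard(blacklists[x], blacklists[y])
            for x, y in combinations(sorted(blacklists), 2)}

def spam_hit_rate(blacklist, spam_prefixes):
    """Fraction of observed spam-source /24s covered by one blacklist."""
    return len(blacklist & spam_prefixes) / len(spam_prefixes)
```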

    The concept of embedded values and the example of internet security

    Many current technological devices used in our everyday lives confront us with a host of new ethical issues to be addressed. Facebook, Twitter, and smartphones are all examples of pervasively used technologies that call into question culturally significant values such as privacy. The embedded values concept presents the compelling idea that engineers, scientists, and designers can create technologies that intentionally enhance cultural and societal values while at the same time minimizing threats to other values. Although the embedded values concept (and the resulting design theories that follow) is of great utility, it remains unclear how to apply it in practice. Added to this is the difficulty of applying the concept when engaged in fundamental research or experiments rather than in the creation of a commercial product. This paper presents a novel approach for collaboration between an ethicist and a computer engineering PhD researcher working on the Internet Bad Neighborhoods concept for spam filtering. The results proved beneficial both for the utility of the embedded values concept and for strengthening the engineering PhD researcher’s work.

    Deep Dive into NTP Pool's Popularity and Mapping

    Time synchronization is of paramount importance on the Internet, with the Network Time Protocol (NTP) serving as the primary synchronization protocol. The NTP Pool, a volunteer-driven initiative launched two decades ago, facilitates connections between clients and NTP servers. Our analysis of root DNS queries reveals that the NTP Pool has consistently been the most popular time service. We further investigate the DNS component (GeoDNS) of the NTP Pool, which is responsible for mapping clients to servers. Our findings indicate that the current algorithm is heavily skewed, leading to the emergence of time monopolies for entire countries. For instance, clients in the US are served by 551 NTP servers, while clients in Cameroon and Nigeria are served by only one and two servers, respectively, out of the 4k+ servers available in the NTP Pool. We examine the underlying assumption behind GeoDNS for these mappings and discover that time servers located far away can still provide accurate clock time information to clients. We have shared our findings with the NTP Pool operators, who acknowledge them and plan to revise their algorithm to enhance security.
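
    The GeoDNS assumption examined here, that clients need nearby servers for accurate time, can be spot-checked by querying servers from different regional pool zones and comparing the reported clock offset and round-trip delay. A minimal sketch using the third-party ntplib package; the server names are public pool zone names used only for illustration:

```python
# Query a few NTP servers (near and far) and compare offset and delay.
import ntplib   # pip install ntplib

def probe(servers):
    client = ntplib.NTPClient()
    results = {}
    for host in servers:
        try:
            r = client.request(host, version=4, timeout=5)
        except ntplib.NTPException:
            continue  # unreachable or not answering
        results[host] = {"offset_s": r.offset, "delay_s": r.delay}
    return results

# Example with public pool zone names (illustrative):
# print(probe(["0.pool.ntp.org", "0.africa.pool.ntp.org",
#              "0.north-america.pool.ntp.org"]))
```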

    When Parents and Children Disagree: Diving into DNS Delegation Inconsistency

    The Domain Name System (DNS) is a hierarchical, decentralized, and distributed database. A key mechanism that enables the DNS to be hierarchical and distributed is delegation [7] of responsibility from parent to child zones, which are typically managed by different entities. RFC1034 [12] states that authoritative nameserver (NS) records at both parent and child should be “consistent and remain so”, but we find inconsistencies for over 13M second-level domains. We classify the types of inconsistencies we observe and the behavior of resolvers in the face of such inconsistencies, using RIPE Atlas to probe our experimental domain configured for different scenarios. Our results underline the risk such inconsistencies pose to the availability of misconfigured domains.
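
    Checking a single domain for such a mismatch amounts to comparing the NS RRset in the parent's delegation (referral) with the NS RRset the child zone publishes itself. A rough sketch with the dnspython package, assuming a second-level domain whose parent returns a referral; the domain name is a placeholder:

```python
# Compare parent-side (delegation) and child-side (authoritative) NS sets.
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver   # pip install dnspython

def child_ns(domain):
    """NS names as published by the child zone's own servers."""
    return {str(r.target).lower() for r in dns.resolver.resolve(domain, "NS")}

def parent_ns(domain):
    """NS names in the referral returned by one parent nameserver."""
    parent = domain.split(".", 1)[1]                       # e.g. example.nl -> nl
    parent_server = str(dns.resolver.resolve(parent, "NS")[0].target)
    parent_ip = str(dns.resolver.resolve(parent_server, "A")[0])
    query = dns.message.make_query(domain, dns.rdatatype.NS)
    response = dns.query.udp(query, parent_ip, timeout=5)
    names = set()
    for rrset in response.authority:                       # the delegation RRset
        if rrset.rdtype == dns.rdatatype.NS:
            names |= {str(r.target).lower() for r in rrset}
    return names

def delegation_consistent(domain):
    p, c = parent_ns(domain), child_ns(domain)
    return p == c, p - c, c - p   # consistent?, parent-only, child-only

# Example with a placeholder name:
# print(delegation_consistent("example.nl"))
```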

    Taking on internet bad neighborhoods

    It is a known fact that malicious IP addresses are not evenly distributed over the IP addressing space. In this paper, we frame networks concentrating malicious addresses as bad neighborhoods. We propose a formal definition, show that this concentration can be used to predict future attacks (new spamming sources, in our case), and propose an algorithm to aggregate individual IP addresses into bigger neighborhoods. Moreover, we show how bad neighborhoods are specific to the exploited application (e.g., spam, ssh) and how the performance of different blacklist sources impacts lightweight spam filtering algorithms.
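
    As a concrete illustration of the neighborhood idea, individual spamming IPs can be collapsed into /24 prefixes and each prefix scored by the number of bad hosts it contains; new sources from heavily scored prefixes are then treated as suspicious. A minimal sketch (illustrative only, not the aggregation algorithm proposed in the paper):

```python
# Collapse malicious IPs into /24 "neighborhoods" and score them.
from collections import Counter
import ipaddress

def to_neighborhood(ip, prefix_len=24):
    return str(ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False))

def score_neighborhoods(malicious_ips, prefix_len=24):
    return Counter(to_neighborhood(ip, prefix_len) for ip in set(malicious_ips))

def looks_bad(ip, scores, threshold=10, prefix_len=24):
    """Lightweight filter: flag a new source if its /24 already holds
    at least `threshold` known bad hosts (threshold is illustrative)."""
    return scores.get(to_neighborhood(ip, prefix_len), 0) >= threshold
```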

    Fragmentation, Truncation, and Timeouts: Are Large DNS Messages Falling to Bits?

    The DNS provides one of the core services of the Internet, mapping applications and services to hosts. DNS employs both UDP and TCP as transport protocols, and currently most DNS queries are sent over UDP. The problem with UDP is that large responses run the risk of not arriving at their destinations, which can ultimately lead to unreachability. However, it remains unclear how much of a problem these large DNS responses over UDP are in the wild. This is the focus of this paper: we analyze 164 billion query/response pairs from more than 46k autonomous systems, covering three months (July 2019 and 2020, and Oct. 2020), collected at the authoritative servers of .nl, the country-code top-level domain of the Netherlands. We show that fragmentation, and the problems that can follow fragmentation, rarely occur at such authoritative servers. Further, we demonstrate that the DNS built-in defenses – use of truncation, EDNS0 buffer sizes, reduced responses, and TCP fallback – are effective in reducing fragmentation. Last, we measure the uptake of the DNS flag day in 2020.
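
    The built-in defenses listed above, truncation with TCP fallback and conservative EDNS0 buffer sizes (such as the 1232 bytes recommended for the 2020 DNS flag day), translate into a simple client-side query pattern. A minimal sketch with the dnspython package; the query name and server address are placeholders:

```python
# Query over UDP with a conservative EDNS0 buffer size and fall back to
# TCP when the response is truncated (TC bit set).
import dns.flags
import dns.message
import dns.query   # pip install dnspython

def query_with_fallback(qname, rdtype, server, payload=1232, timeout=5):
    q = dns.message.make_query(qname, rdtype, use_edns=0, payload=payload)
    resp = dns.query.udp(q, server, timeout=timeout)
    if resp.flags & dns.flags.TC:          # truncated: retry over TCP
        resp = dns.query.tcp(q, server, timeout=timeout)
    return resp

# Example with placeholder arguments:
# print(query_with_fallback("example.nl", "DNSKEY", "198.51.100.53"))
```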

    Optical Switching Impact on TCP Throughput Limited by TCP Buffers

    In this paper, we study TCP throughput when self-management is employed to automatically move flows from the IP level to established connections at the optical level. This move can result in many packets arriving out of order at the receiver, or even being discarded, since some packets would be transferred more quickly over an optical connection than the others transferred over an IP path. To the best of our knowledge, no prior work in the literature evaluates TCP throughput when flows undergo such conditions. Within this context, this paper presents an analysis of the impact of optical switching on TCP CUBIC throughput when the throughput itself is limited by the TCP buffer sizes.
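
    The buffer limitation studied here follows from a standard relation: with the window capped by the TCP buffers, throughput cannot exceed the window size divided by the round-trip time. A small worked sketch with illustrative numbers:

```python
# Buffer-limited TCP throughput and the bandwidth-delay product (BDP).
def max_throughput_bps(buffer_bytes, rtt_s):
    return 8 * buffer_bytes / rtt_s        # bits per second

def bdp_bytes(bandwidth_bps, rtt_s):
    return bandwidth_bps * rtt_s / 8       # buffer needed to fill the path

# Example: a 256 KiB buffer over a 50 ms path caps TCP at ~42 Mbit/s,
# no matter whether the underlying path is optical or plain IP.
print(max_throughput_bps(256 * 1024, 0.050) / 1e6)   # ~41.9 Mbit/s
print(bdp_bytes(10e9, 0.050) / 1e6)                  # 62.5 MB for a 10 Gbit/s path
```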

    Clouding up the Internet: How centralized is DNS traffic becoming?

    Concern has been mounting about Internet centralization over the last few years - the consolidation of traffic, users, and infrastructure into the hands of a few market players. We measure DNS and computing centralization by analyzing DNS traffic collected at a DNS root server and two country-code top-level domains (ccTLDs) - one in Europe and the other in Oceania - and show evidence of concentration: more than 30% of all queries to both ccTLDs are sent from five large cloud providers. We compare the cloud providers' resolver infrastructure and highlight a discrepancy in behavior: some cloud providers heavily employ IPv6, DNSSEC, and DNS over TCP, while others simply use unsecured DNS over UDP over IPv4. We show one positive side to centralization: once a cloud provider deploys a security feature - such as QNAME minimization - it quickly benefits a large number of users.
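
    The concentration figure comes down to attributing each query's source address to an origin AS and summing the share held by a handful of cloud ASNs. A minimal sketch, assuming the IP-to-ASN mapping has already been done; the ASN set is an illustrative choice of large cloud operators, not necessarily the paper's five providers:

```python
# Share of DNS queries originating from a chosen set of cloud-provider ASNs.
from collections import Counter

def cloud_share(query_asns, cloud_asns):
    counts = Counter(query_asns)
    total = sum(counts.values())
    cloud = sum(n for asn, n in counts.items() if asn in cloud_asns)
    return cloud / total if total else 0.0

# Illustrative ASNs: Google, Amazon, Microsoft, Cloudflare, Facebook.
CLOUDS = {15169, 16509, 8075, 13335, 32934}
# Example with a toy stream of per-query source ASNs:
# print(cloud_share([15169, 15169, 3320, 1136, 16509], CLOUDS))  # 0.6
```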