
    The Abandoned Side of the Internet: Hijacking Internet Resources When Domain Names Expire

    The vulnerability of the Internet has been demonstrated by prominent IP prefix hijacking events. Major outages such as the China Telecom incident in 2010 stimulate speculation about malicious intentions behind such anomalies. Surprisingly, almost all discussions in the current literature assume that hijacking incidents are enabled by the lack of security mechanisms in the inter-domain routing protocol BGP. In this paper, we discuss an attacker model that accounts for the hijacking of network ownership information stored in Regional Internet Registry (RIR) databases. We show that such threats emerge from abandoned Internet resources (e.g., IP address blocks, AS numbers). When DNS names expire, attackers gain the opportunity to take resource ownership by re-registering domain names that are referenced by corresponding RIR database objects. We argue that this kind of attack is more attractive than conventional hijacking, since the attacker can act in full anonymity on behalf of a victim. Although corresponding incidents have been observed in the past, current detection techniques are not equipped to deal with these attacks. We show that such attacks are feasible with very little effort, and analyze the risk potential of abandoned Internet resources for the European service region: our findings reveal that currently 73 /24 IP prefixes and 7 ASes are vulnerable to stealthy abuse. We discuss countermeasures and outline research directions towards preventive solutions.
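
    As a rough illustration of the risk check this abstract motivates, the sketch below flags contact or maintainer domains whose DNS delegation no longer exists and which could therefore be re-registered by an attacker. It assumes the domains were already extracted from RIR database objects (the whois-parsing step is omitted), requires the dnspython package, and uses placeholder domain names.

```python
# Minimal sketch: flag domains referenced by RIR objects whose DNS
# delegation is gone (NXDOMAIN), i.e. candidates for re-registration.
# Requires dnspython 2.x. Domains below are placeholders.
import dns.resolver

def is_abandoned(domain: str) -> bool:
    try:
        dns.resolver.resolve(domain, "NS")
        return False                                   # delegation still exists
    except dns.resolver.NXDOMAIN:
        return True                                    # candidate for re-registration
    except (dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False                                   # exists, but oddly configured

# `contact_domains` would come from parsing RIR database objects.
contact_domains = ["example.org", "example.net"]
print([d for d in contact_domains if is_abandoned(d)])
```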

    Off-Path TCP Exploits of the Mixed IPID Assignment

    In this paper, we uncover a new off-path TCP hijacking attack that can be used to terminate victim TCP connections or inject forged data into them by manipulating the new mixed IPID assignment method, which is widely used in Linux kernel version 4.18 and beyond to help defend against TCP hijacking attacks. The attack has three steps. First, an off-path attacker downgrades the IPID assignment for TCP packets from the more secure per-socket-based policy to the less secure hash-based policy, building a shared IPID counter that forms a side channel on the victim. Second, the attacker detects the presence of TCP connections by observing this shared IPID counter. Third, the attacker infers the sequence number and the acknowledgment number of the detected connection by observing the same side channel. Consequently, the attacker can completely hijack the connection, i.e., reset the connection or poison the data stream. We evaluate the impact of this off-path TCP attack in the real world. Our case studies of SSH DoS, manipulating web traffic, and poisoning BGP routing tables show its threat to a wide range of applications. Our experimental results show that the attack can be constructed within 215 seconds and that its success rate is over 88%. Finally, we analyze the root cause of the exploit and develop a new IPID assignment method to defeat this attack. We prototype our defense in Linux 4.18 and confirm its effectiveness through extensive evaluation over real applications on the Internet.
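
    The sketch below is not the paper's exploit; it only illustrates the side channel the attack builds on: probing a host with TCP SYNs and checking whether the IP ID values in its responses advance by small deltas, which suggests a shared, incrementing counter rather than per-connection or random IDs. It assumes scapy and raw-socket (root) privileges; the target address, port, and delta heuristic are placeholders.

```python
# Sketch of observing a host's IP ID behaviour with TCP probes (scapy).
from scapy.all import IP, TCP, RandShort, sr1

def sample_ipids(target, dport=80, count=8):
    ids = []
    for _ in range(count):
        probe = IP(dst=target) / TCP(sport=RandShort(), dport=dport, flags="S")
        resp = sr1(probe, timeout=2, verbose=0)
        if resp is not None and IP in resp:
            ids.append(resp[IP].id)
    return ids

def looks_like_shared_counter(ids):
    # Small positive deltas between consecutive IDs hint at a shared counter.
    deltas = [(b - a) % 65536 for a, b in zip(ids, ids[1:])]
    return bool(deltas) and all(0 < d <= 16 for d in deltas)

if __name__ == "__main__":
    samples = sample_ipids("192.0.2.1")   # placeholder target (TEST-NET-1)
    print(samples, looks_like_shared_counter(samples))
```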

    SoK: A Data-driven View on Methods to Detect Reflective Amplification DDoS Attacks Using Honeypots

    In this paper, we revisit the use of honeypots for detecting reflective amplification attacks. These measurement tools require careful design of both data collection and data analysis, including cautious threshold inference. We survey common amplification honeypot platforms as well as the underlying methods to infer attack detection thresholds and to extract knowledge from the data. By systematically exploring the threshold space, we find that most honeypot platforms produce comparable results despite their different configurations. Moreover, by applying data from a large-scale honeypot deployment, network telescopes, and a real-world baseline obtained from a leading DDoS mitigation provider, we question the fundamental assumption of honeypot research that convergence of observations can imply their completeness. We conclude by deriving guidance on precise, reproducible honeypot research and by presenting open challenges.
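
    For context on the detection methods this survey systematizes, the sketch below shows the common sliding-window thresholding idea: a victim is flagged once a honeypot sees more than a threshold number of reflection requests for it within a time window. The threshold and window values are illustrative, not taken from the paper.

```python
# Sketch of sliding-window thresholding over honeypot request logs.
from collections import defaultdict, deque

def detect_attacks(events, min_requests=100, window=60.0):
    """events: iterable of (timestamp_seconds, victim_ip) request records."""
    recent = defaultdict(deque)   # victim_ip -> timestamps inside the window
    flagged = set()
    for ts, victim in sorted(events):
        q = recent[victim]
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()           # drop requests that fell out of the window
        if len(q) >= min_requests:
            flagged.add(victim)
    return flagged

# Example: 150 spoofed requests for one victim within a minute get flagged.
demo = [(i * 0.2, "203.0.113.7") for i in range(150)]
print(detect_attacks(demo))
```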

    Addressless: A New Internet Server Model to Prevent Network Scanning

    Eliminating unnecessary exposure is a principle of server security. The huge IPv6 address space enhances security by making brute-force scanning infeasible; however, with recent advances in IPv6 scanning technology, network scanning is again threatening server security. In this paper, we propose a new model named the addressless server, which separates the server into an entrance module and a main service module, and assigns an IPv6 prefix instead of a single IPv6 address to the main service module. The entrance module generates a legitimate IPv6 address under this prefix by encrypting the client address, so that the client accesses the main server at a destination address that is different in each connection. In this way, the model isolates the main server, prevents network scanning, and minimizes exposure. Moreover, it provides a novel framework that supports flexible load balancing, high availability, and other desirable features. The model is simple and does not require any modification to the client or the network. We implement a prototype, and our experiments show that the model can prevent the main server from being scanned at only a slight performance cost.
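
    A minimal sketch of the address-generation idea, assuming a /64 service prefix: the entrance module derives a per-connection interface identifier under the prefix from the client address. The paper encrypts the client address; here a keyed HMAC over the client address and a nonce stands in for that cipher, so the prefix, key, and function names are illustrative rather than the authors' construction.

```python
# Sketch: map (client address, per-connection nonce) to a destination
# address inside the service /64. Placeholder prefix and key.
import hmac, hashlib, ipaddress, os

SERVICE_PREFIX = ipaddress.IPv6Network("2001:db8:1:2::/64")   # placeholder prefix
SECRET_KEY = b"entrance-module-secret"                        # placeholder key

def destination_for(client_addr, nonce=None):
    nonce = nonce if nonce is not None else os.urandom(8)
    client = ipaddress.IPv6Address(client_addr)
    digest = hmac.new(SECRET_KEY, client.packed + nonce, hashlib.sha256).digest()
    iid = int.from_bytes(digest[:8], "big")                   # 64-bit interface ID
    return ipaddress.IPv6Address(int(SERVICE_PREFIX.network_address) | iid)

# Every connection lands on a fresh address inside the /64, so the main
# service module never exposes a single scannable address.
print(destination_for("2001:db8:cafe::1"))
```

    Note that in the actual model the mapping is invertible (encryption), so the main service module can recover the client address from the destination; the HMAC here only illustrates the per-connection address diversity.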

    BGP based Solution for International ISP Blocking

    Characterizing the IoT ecosystem at scale

    Internet of Things (IoT) devices are extremely popular with home, business, and industrial users. To provide their services, they typically rely on a backend server infrastructure on the Internet, which collectively forms the IoT ecosystem. This ecosystem is rapidly growing and offers users an increasing number of services. It has also been a source and target of significant security and privacy risks. Notable examples are the recent large-scale coordinated global attacks, such as Mirai, which disrupted large service providers. Thus, characterizing this ecosystem yields insights that help end-users, network operators, policymakers, and researchers better understand it, obtain a detailed view, and keep track of its evolution. In addition, they can use these insights to inform their decision-making process for mitigating this ecosystem's security and privacy risks. In this dissertation, we characterize the IoT ecosystem at scale by (i) detecting IoT devices in the wild, (ii) conducting a case study to measure how deployed IoT devices can affect users' privacy, and (iii) detecting and measuring the IoT backend infrastructure.

    To conduct our studies, we collaborated with a large European Internet Service Provider (ISP) and a major European Internet eXchange Point (IXP). They routinely collect large volumes of passive, sampled data, e.g., NetFlow and IPFIX, for their operational purposes. These data sources help providers obtain insights about their networks, and we used them to characterize the IoT ecosystem at scale.

    We start with IoT devices and study how to track and trace their activity in the wild. We developed and evaluated a scalable methodology to accurately detect and monitor IoT devices with limited, sparsely sampled data in the ISP and IXP.

    Next, we conduct a case study to measure how a myriad of deployed devices can affect the privacy of ISP subscribers. Unfortunately, we found that the privacy of a substantial fraction of IPv6 end-users is at risk. We noticed that a single device at home that encodes its MAC address into its IPv6 address can be used as a tracking identifier for the entire end-user prefix, even if other devices use IPv6 privacy extensions. Our results showed that IoT devices contribute the most to this privacy leakage.

    Finally, we focus on the backend server infrastructure and propose a methodology to identify and locate IoT backend servers operated by cloud services and IoT vendors. We analyzed their IoT traffic patterns as observed in the ISP. Our analysis sheds light on their diverse operational and deployment strategies.

    The need to issue a priori unknown network-wide queries against the large volumes of network flow capture data used in our studies motivated us to develop Flowyager, a system built on top of existing traffic capture utilities that relies on flow summarization techniques to reduce (i) the storage and transfer cost of flow captures and (ii) query response time. We deployed a prototype of Flowyager at both the IXP and the ISP.
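
    The privacy finding above rests on EUI-64 interface identifiers, which embed a device's MAC address into its IPv6 address. The sketch below shows the check involved; it is a generic illustration of the EUI-64 format, not the dissertation's detection pipeline.

```python
# Sketch: detect an EUI-64 (MAC-embedded) IPv6 address and recover the MAC.
# Such an address is a stable tracking identifier for its end-user prefix.
import ipaddress

def embedded_mac(addr: str):
    iid = ipaddress.IPv6Address(addr).packed[8:]   # low 64 bits (interface ID)
    if iid[3:5] != b"\xff\xfe":
        return None                                 # not EUI-64 derived
    first = iid[0] ^ 0x02                           # flip the U/L bit back
    mac = bytes([first]) + iid[1:3] + iid[5:8]
    return ":".join(f"{b:02x}" for b in mac)

print(embedded_mac("2001:db8::0211:22ff:fe33:4455"))  # -> 00:11:22:33:44:55
```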

    Exploiting Host Availability in Distributed Systems.

    As distributed systems become more decentralized, fluctuating host availability is an increasingly disruptive phenomenon. Older systems such as AFS used a small number of well-maintained, highly available machines to coordinate access to shared client state; server uptime (and thus service availability) was expected to be high. Newer services scale to larger numbers of clients by increasing the number of servers. In these systems, the responsibility for maintaining the service abstraction is spread among thousands of machines. In the extreme, each client is also a server that must respond to requests from its peers, and each host can opt in or out of the system at any time. In these operating environments, a non-trivial fraction of servers will be unavailable at any given time. This diffusion of responsibility from a few dedicated hosts to many unreliable ones has a dramatic impact on distributed system design, since it is difficult to build robust applications atop a partially available, potentially untrusted substrate. This dissertation explores one aspect of this challenge: how can a distributed system measure the fluctuating availability of its constituent hosts, and how can it use an understanding of this churn to improve performance and security? This dissertation extends the previous literature in three ways. First, it introduces new analytical techniques for characterizing availability data, applying these techniques to several real networks and explaining the distinct uptime patterns found within. Second, it introduces new methods for predicting future availability, both at the granularity of individual hosts and of clusters of hosts. Third, it describes how to use these new techniques to improve the performance and security of distributed systems.
    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58445/1/jmickens_1.pd
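
    To make the prediction task concrete, the sketch below implements a simple history-based availability predictor in the spirit of per-host prediction, not the dissertation's exact algorithms: it predicts a host is up at a given hour of day if it was up at that hour on at least half of the observed days. The class name and threshold are illustrative.

```python
# Sketch of an hour-of-day availability predictor over uptime observations.
from collections import defaultdict

class HourOfDayPredictor:
    def __init__(self):
        # (host, hour) -> [up_count, total_count]
        self.history = defaultdict(lambda: [0, 0])

    def observe(self, host, hour, was_up):
        stats = self.history[(host, hour % 24)]
        stats[0] += 1 if was_up else 0
        stats[1] += 1

    def predict_up(self, host, hour):
        up, total = self.history[(host, hour % 24)]
        return total > 0 and up / total >= 0.5

# Example: a host seen up at 14:00 on three of four days is predicted up.
p = HourOfDayPredictor()
for was_up in (True, True, True, False):
    p.observe("host-42", 14, was_up)
print(p.predict_up("host-42", 14))   # True
```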