21 research outputs found

    Generating of DNS Service Attacks

    Get PDF
    Availability and security are among the most important requirements for Internet services. It is therefore necessary to detect the network anomalies that have the greatest impact on these requirements, and great emphasis must be placed on developing detection mechanisms that keep pace with increasingly sophisticated anomalies. The aim of this work is to analyze and replicate the most common attacks on the DNS service. The data collected from these attack generators can be used to better understand attack behavior, which leads to improved and more effective detection mechanisms.
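
    The thesis does not include its generator code here; as a rough illustration of the kind of traffic such a generator produces, the sketch below sends random-subdomain DNS queries to a lab resolver. It assumes the dnspython package; the resolver address and zone (192.0.2.53, example.test) are documentation-only placeholders, and the script is meant for isolated test networks. It is not the thesis's actual tool.

```python
# Minimal sketch of a random-subdomain DNS query generator for a lab resolver.
# Assumes dnspython; the resolver address and zone are placeholders
# (192.0.2.53 is a documentation-only address). Not the thesis's actual tool.
import random
import string

import dns.exception
import dns.message
import dns.query

LAB_RESOLVER = "192.0.2.53"   # placeholder lab resolver, never a production server
TARGET_ZONE = "example.test"  # placeholder zone under test


def random_label(length: int = 10) -> str:
    """Return a random DNS label so that every query misses the resolver cache."""
    return "".join(random.choices(string.ascii_lowercase, k=length))


def send_queries(count: int = 100) -> None:
    for _ in range(count):
        qname = f"{random_label()}.{TARGET_ZONE}"
        query = dns.message.make_query(qname, "A")
        try:
            dns.query.udp(query, LAB_RESOLVER, timeout=1.0)
        except dns.exception.Timeout:
            pass  # under load, timeouts are part of the behavior being studied


if __name__ == "__main__":
    send_queries()
```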

    K-resolver: Towards Decentralizing Encrypted DNS Resolution

    Full text link
    Centralized DNS over HTTPS/TLS (DoH/DoT) resolution, which has started being deployed by major hosting providers and web browsers, has sparked controversy among Internet activists and privacy advocates due to several privacy concerns. This design decision exposes the trace of all DNS resolutions to a third-party resolver, different from the one specified by the user's access network. In this work we propose K-resolver, a DNS resolution mechanism that disperses DNS queries across multiple DoH resolvers, reducing the amount of information about a user's browsing activity exposed to each individual resolver. As a result, none of the resolvers can learn a user's entire web browsing history. We have implemented a prototype of our approach for Mozilla Firefox and used it to evaluate web page load time compared to the default centralized DoH approach. While our K-resolver mechanism has some effect on DNS resolution time and page load time, we show that this is mainly due to the geographical location of the selected DoH servers. When more well-provisioned anycast servers are available, our approach incurs negligible overhead while improving user privacy. (NDSS Workshop on Measurements, Attacks, and Defenses for the Web, MADWeb 2020)
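
    The core idea, spreading hostnames across K DoH resolvers so that no single resolver observes the full browsing history, can be illustrated with a short sketch. This is not the authors' Firefox prototype: it assumes the requests and dnspython packages, RFC 8484-style GET requests, placeholder resolver URLs, and a hash-based hostname-to-resolver assignment chosen purely for illustration.

```python
# Sketch of K-resolver-style query dispersal over multiple DoH resolvers.
# Not the paper's Firefox prototype; resolver URLs are placeholders and the
# request format follows RFC 8484 (GET with base64url-encoded wire message).
import base64
import hashlib

import dns.message
import requests

# Hypothetical pool of DoH endpoints; any RFC 8484 resolver would work.
DOH_RESOLVERS = [
    "https://doh1.example/dns-query",
    "https://doh2.example/dns-query",
    "https://doh3.example/dns-query",
]


def pick_resolver(hostname: str) -> str:
    """Deterministically map a hostname to one resolver, so each resolver
    only ever sees a fixed 1/K slice of the user's browsing activity."""
    digest = hashlib.sha256(hostname.encode()).digest()
    return DOH_RESOLVERS[digest[0] % len(DOH_RESOLVERS)]


def resolve(hostname: str) -> dns.message.Message:
    query = dns.message.make_query(hostname, "A")
    wire = base64.urlsafe_b64encode(query.to_wire()).rstrip(b"=").decode()
    response = requests.get(
        pick_resolver(hostname),
        params={"dns": wire},
        headers={"Accept": "application/dns-message"},
        timeout=2.0,
    )
    response.raise_for_status()
    return dns.message.from_wire(response.content)


if __name__ == "__main__":
    print(resolve("example.com").answer)
```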

    The Impact of DNSSEC on the Internet Landscape

    Get PDF
    In this dissertation we investigate the security deficiencies of the Domain Name System (DNS) and assess the impact of the DNSSEC security extensions. DNS spoofing attacks divert an application to the wrong server, but are also used routinely for blocking access to websites. We provide evidence for systematic DNS spoofing in China and Iran with measurement-based analyses, which allow us to examine the DNS spoofing filters from vantage points outside of the affected networks. Third parties in other countries can be affected inadvertently by spoofing-based domain filtering, which could be averted with DNSSEC. The security goals of DNSSEC are data integrity and authenticity. A point solution called NSEC3 adds a privacy assertion to DNSSEC, which is supposed to prevent disclosure of the domain namespace as a whole. We present GPU-based attacks on the NSEC3 privacy assertion, which allow efficient recovery of the namespace contents. We demonstrate with active measurements that DNSSEC has found wide adoption after initial hesitation. On the server side, more than five million domains are signed with DNSSEC; a portion of them is insecure due to insufficient cryptographic key lengths or broken due to maintenance failures. On the client side, we have observed a worldwide increase of DNSSEC validation over the last three years, though not necessarily on the last mile. Deployment of DNSSEC validation on end hosts is impaired by intermediate caching components, which degrade the availability of DNSSEC. However, intermediate caches contribute to the performance and scalability of the Domain Name System, as we show with trace-driven simulations. We therefore suggest that validating end hosts use intermediate caches by default but fall back to autonomous name resolution in case of DNSSEC failures.
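
    The closing suggestion, validate behind the existing cache but fall back to autonomous resolution on DNSSEC failure, can be sketched as follows. This is an illustration rather than the dissertation's tooling: it assumes dnspython, placeholder resolver addresses, and it approximates successful validation by the AD (authenticated data) flag set by a validating upstream.

```python
# Sketch of "use the intermediate cache, fall back on DNSSEC failure".
# Assumes dnspython; resolver addresses are placeholders, and DNSSEC success
# is approximated by the AD (authenticated data) flag from a validating upstream.
import dns.exception
import dns.flags
import dns.message
import dns.query

CACHING_RESOLVER = "192.0.2.1"   # placeholder: the network's intermediate cache
FALLBACK_RESOLVER = "192.0.2.2"  # placeholder: validating resolver queried directly


def dnssec_lookup(name: str, rdtype: str = "A") -> dns.message.Message:
    query = dns.message.make_query(name, rdtype, want_dnssec=True)
    try:
        response = dns.query.udp(query, CACHING_RESOLVER, timeout=2.0)
        if response.flags & dns.flags.AD:
            return response  # the cache answered and the data validated
    except dns.exception.DNSException:
        pass  # cache unreachable or returned a malformed answer
    # DNSSEC failure or no validation: bypass the cache and resolve autonomously.
    return dns.query.udp(query, FALLBACK_RESOLVER, timeout=2.0)
```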

    Simulated penetration testing and mitigation analysis

    Get PDF
    As corporate networks and Internet services become increasingly complex, it is hard to keep an overview of all deployed software, their potential vulnerabilities, and all existing security protocols. Simulated penetration testing was proposed to extend regular penetration testing by transferring gathered information about a network into a formal model and simulating an attacker in this model. Having a formal model of a network enables us to add a defender who tries to mitigate the capabilities of the attacker with their own actions. We name this two-player planning task Stackelberg planning. The goal is to help administrators, penetration testing consultants, and the management level find weak spots in large computer infrastructures and suggest cost-effective mitigations to lower the security risk. In this thesis, we first lay the formal and algorithmic foundations for Stackelberg planning tasks. By building on a classical planning framework, we can benefit from well-studied heuristics, pruning techniques, and other approaches to speed up the search, such as symbolic search. Second, we design a theory for privilege escalation and demonstrate the applicability of our framework to local computer networks. Third, we apply our framework to Internet-wide scenarios by investigating the robustness of both the email infrastructure and the web. Fourth, we make our findings and our toolchain easily accessible via web-based user interfaces.
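
    The Stackelberg structure, a defender (leader) who commits to mitigations and an attacker (follower) who then plans optimally against what remains, can be shown with a deliberately tiny toy model. The actions, mitigations, and costs below are invented for illustration and the search is brute force; the thesis's planner builds on classical planning heuristics instead.

```python
# Toy Stackelberg planning sketch: the defender commits to mitigations first,
# then the attacker searches for the cheapest plan among the remaining actions.
# Actions, costs, and the network model are invented for illustration only.
from itertools import combinations

# Attacker actions: name -> (preconditions, effect, cost).
ATTACKER_ACTIONS = {
    "phish_user":     (set(),           "user_creds",  1),
    "exploit_server": ({"user_creds"},  "server_root", 2),
    "dump_database":  ({"server_root"}, "data_stolen", 1),
}
GOAL = "data_stolen"

# Defender mitigations: name -> (attacker actions disabled, cost).
MITIGATIONS = {
    "mfa":        ({"phish_user"},     3),
    "patch":      ({"exploit_server"}, 2),
    "encrypt_db": ({"dump_database"},  4),
}


def attacker_plan_cost(disabled: set[str]) -> float:
    """Greedy forward search for the cheapest attack; inf if the goal is blocked."""
    facts, cost = set(), 0
    while GOAL not in facts:
        candidates = [
            (c, name) for name, (pre, eff, c) in ATTACKER_ACTIONS.items()
            if name not in disabled and pre <= facts and eff not in facts
        ]
        if not candidates:
            return float("inf")
        c, name = min(candidates)
        facts.add(ATTACKER_ACTIONS[name][1])
        cost += c
    return cost


def best_defense(budget: int) -> tuple[set[str], float]:
    """Enumerate mitigation subsets within budget; maximize the attacker's cost."""
    best = (set(), attacker_plan_cost(set()))
    for r in range(1, len(MITIGATIONS) + 1):
        for subset in combinations(MITIGATIONS, r):
            if sum(MITIGATIONS[m][1] for m in subset) > budget:
                continue
            disabled = set().union(*(MITIGATIONS[m][0] for m in subset))
            attack_cost = attacker_plan_cost(disabled)
            if attack_cost > best[1]:
                best = (set(subset), attack_cost)
    return best


if __name__ == "__main__":
    # In this toy chain, blocking any single step makes the attack impossible.
    print(best_defense(budget=4))  # -> ({'mfa'}, inf)
```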

    Unfair Competition Issues of Big Data in China

    Get PDF
    The sound development of the market in the data-driven economy depends on free and fair competition over big data across industries. Since 2015, more and more unfair competition cases concerning big data have occurred in China, such as masking advertisements, click fraud, malicious incompatibility, and gathering users' personal data from competitors by unfair means. These cases can be categorized as unfair competition concerning the illegal collection or use of competitors' big data and unfair competition concerning network traffic. This article examines whether China's current anti-unfair competition legal system can resolve such disputes. As the Paris Convention only sets out the basic principles of "fairness" and "honest practice" for anti-unfair competition, member states have room to develop their own legal systems according to their particular economic, social, and cultural conditions. In order to usher in the era of the digital economy and big data and to regulate the growing number of unfair competition events, China amended the Anti-Unfair Competition Law in 2017, adding a new provision regulating the operation of e-commerce. This article finds that the 2017 Amendment, which is far more specific and clearer than the Paris Convention, has significantly improved China's ability to deal with unfair competition behaviors involving big data. However, since the patterns of unfair competition in big data change and "innovate" quickly and constantly, law amendments will hardly ever catch up with these changes, so judging unfair competition is inherently difficult. A court cannot determine that a company has engaged in unfair competition simply because its business operations have substantially reduced the performance or operating effectiveness of its competitors. When judging whether an enterprise's competitive behavior constitutes unfair competition, whether the court applies one of the specific provisions or the general provision, it is essential to consider whether the enterprise has engaged in malicious and dishonest practices.

    Using Context to Improve Network-based Exploit Kit Detection

    Get PDF
    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam -- the attacker wants to lure an unsuspecting client into a trap to steal private information or resources -- generating tens of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without a user's knowledge. Sadly, it only takes a single successful infection to ruin a user's financial life, or to lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis or honeyclient analysis of these exploits plays a key role in identifying new attacks for signature generation, but provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late or not at all, resulting in high false positive and false negative (error) rates. To deal with these drawbacks, three new detection approaches are presented. To address the high number of errors, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers. This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. As a result, a new framework is proposed that applies dynamic honeyclient analysis directly on network traffic at scale. The framework captures and stores a configurable window of reassembled HTTP objects network wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolved over a one-year period. The empirical evaluation suggests that the approach offers significant operational value, and that a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected.
    The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches utilize the structure of a domain name and require hundreds of DNS messages to detect such malware. As a result, these detection mechanisms cannot detect malware in a timely manner and are susceptible to high error rates. The final technique, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as little as four DNS messages, and with an orders-of-magnitude reduction in error rates. The results of this work can make a significant operational impact on network security analysis and open several exciting future directions for network security research.
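
    The final technique rests on sequential hypothesis testing over DNS message patterns. A standard way to realize that decision structure is Wald's sequential probability ratio test (SPRT); the sketch below uses invented per-message likelihoods and feature names, not the dissertation's trained model, to show how a verdict can be reached after only a few DNS messages.

```python
# Generic sequential probability ratio test (SPRT) over DNS message features.
# Likelihood values and feature extraction are invented placeholders; the point
# is the decision structure: accumulate evidence per DNS message and decide
# "infected" or "benign" as soon as a threshold is crossed.
import math

# P(feature | infected) and P(feature | benign): placeholder estimates that a
# real deployment would learn from labeled DNS traffic.
LIKELIHOODS = {
    "nxdomain":     (0.60, 0.05),
    "new_domain":   (0.50, 0.10),
    "known_domain": (0.10, 0.80),
    "txt_query":    (0.30, 0.05),
}

ALPHA, BETA = 0.001, 0.01  # target false positive / false negative rates
UPPER = math.log((1 - BETA) / ALPHA)   # accept "infected" above this
LOWER = math.log(BETA / (1 - ALPHA))   # accept "benign" below this


def classify(dns_features: list[str]) -> str:
    """Return a verdict as soon as the accumulated log-likelihood ratio crosses
    a threshold; often only a handful of DNS messages are needed."""
    llr = 0.0
    for feature in dns_features:
        p_inf, p_ben = LIKELIHOODS.get(feature, (0.5, 0.5))  # uninformative default
        llr += math.log(p_inf / p_ben)
        if llr >= UPPER:
            return "infected"
        if llr <= LOWER:
            return "benign"
    return "undecided"


if __name__ == "__main__":
    # With these placeholder likelihoods, the fourth message tips the decision.
    print(classify(["nxdomain", "new_domain", "txt_query", "nxdomain"]))  # infected
```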

    SymbexNet: Checking Network Protocol Implementations using Symbolic Execution

    No full text
    The implementations of network protocols, such as DNS, DHCP and Zeroconf, are prone to flaws, security vulnerabilities and interoperability issues caused by ambiguous requirements in protocol specifications. Detecting such problems is not easy because (i) many bugs manifest themselves only after prolonged operation; (ii) the state space of complex protocol implementations is large; and (iii) problems often require additional information about correct behaviour from specifications. This thesis presents a novel approach to detect various types of flaws in network protocol implementations by combining symbolic execution and rule-based packet matching. The core idea behind our approach is to automatically generate high-coverage test input packets for a network protocol implementation. For this, the protocol implementation is run using a symbolic execution engine to obtain test input packets. These packets are then used to detect potential violations of rules that constrain permitted input and output packets and were derived from the protocol specification. We propose a technique that repeatedly performs symbolic execution on selected test input packets to achieve broad and deep exploration of the implementation state space. In addition, we use the generated test packets to check interoperability between different implementations of the same network protocol. We present a system based on these techniques, SYMBEXNET, and show that it can automatically generate test input packets that achieve high source code coverage and discover various bugs. We evaluate SYMBEXNET on multiple implementations of two network protocols: Zeroconf, a service discovery protocol, and DHCP, a network configuration protocol. SYMBEXNET is able to discover non-trivial bugs as well as interoperability problems, most of which have been confirmed by the developers.
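
    The rule-based half of the approach, checking that the packets an implementation emits satisfy constraints derived from the specification, can be sketched as predicates over decoded packet fields. The field names and rules below are simplified stand-ins loosely modeled on DHCP, not SYMBEXNET's actual rule language; in the real system such rules are applied to the packets generated by symbolic execution.

```python
# Sketch of rule-based packet matching: specification-derived constraints are
# expressed as predicates over decoded packet fields and applied to every
# response the implementation under test produces. Field names and rules are
# simplified stand-ins loosely modeled on DHCP, not SYMBEXNET's rule language.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Packet:
    op: str                      # e.g. "DISCOVER", "OFFER", "REQUEST", "ACK"
    xid: int                     # transaction identifier
    options: dict = field(default_factory=dict)


@dataclass
class Rule:
    description: str
    check: Callable[[Packet, Packet], bool]  # (request, response) -> holds?


RULES = [
    Rule("response must echo the request's transaction id",
         lambda req, rsp: rsp.xid == req.xid),
    Rule("a DISCOVER must be answered with an OFFER",
         lambda req, rsp: req.op != "DISCOVER" or rsp.op == "OFFER"),
    Rule("an OFFER must carry a lease time option",
         lambda req, rsp: rsp.op != "OFFER" or "lease_time" in rsp.options),
]


def check_exchange(request: Packet, response: Packet) -> list[str]:
    """Return the descriptions of all rules violated by one request/response pair."""
    return [rule.description for rule in RULES if not rule.check(request, response)]


if __name__ == "__main__":
    req = Packet(op="DISCOVER", xid=0x1234)
    rsp = Packet(op="OFFER", xid=0x9999)  # wrong xid, missing lease time
    for violation in check_exchange(req, rsp):
        print("violation:", violation)
```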

    Large-Scale Networks: Algorithms, Complexity and Real Applications

    Get PDF
    Networks have broad applicability to real-world systems due to their ability to model and represent complex relationships. The discovery and forecasting of insightful patterns from networks are at the core of analytical intelligence in government, industry, and science. Discoveries and forecasts, especially from the large-scale networks commonly available in the big-data era, rely strongly on fast and efficient network algorithms. Algorithms for dealing with large-scale networks are the first research topic of this thesis. We design, theoretically analyze, and implement efficient algorithms and parallel algorithms, rigorously proving their worst-case time and space complexities. Our main contributions in this area are novel parallel algorithms to detect k-clique communities, special network groups which are widely used to understand complex phenomena. The proposed algorithms have a space complexity that is the square root of that of the current state of the art, and their time complexity is optimal, since it is inversely proportional to the number of processing units available. Extensive experiments confirm the efficiency of the proposed algorithms, even in comparison to the state of the art; we experimentally measured a linear speedup, substantiating the optimal performance attained. The second focus of this thesis is the application of networks to discover insights from real-world systems. We introduce novel methodologies to capture cross-correlations in evolving networks and instantiate them to study the Internet, one of the most, if not the most, pervasive modern technological systems. We investigate the dynamics of connectivity among Internet companies, those which interconnect to ensure global Internet access. We then combine connectivity dynamics with historical worldwide stock market data, and produce graphical representations to visually identify high correlations. We find that geographically close Internet companies offering similar services are driven by common economic factors. We also provide evidence on the existence and nature of hidden factors governing the dynamics of Internet connectivity. Finally, we propose network models to effectively study Internet Domain Name System (DNS) traffic, and leverage these models to obtain rankings of Internet domains as well as to identify malicious activities.
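
    The notion of k-clique communities used in the first part is the clique-percolation definition, for which NetworkX ships a sequential implementation; the toy example below illustrates the concept only, not the thesis's parallel, square-root-space algorithms, and the edge list is invented.

```python
# k-clique communities (clique percolation) on a toy graph with NetworkX.
# This shows the sequential baseline notion, not the thesis's parallel,
# space-efficient algorithms; the edge list is invented for illustration.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.Graph()
G.add_edges_from([
    (1, 2), (1, 3), (2, 3), (2, 4), (3, 4),   # a dense cluster of triangles
    (4, 5),                                    # a bridge edge
    (5, 6), (5, 7), (6, 7), (6, 8), (7, 8),    # a second dense cluster
])

# Communities are unions of k-cliques that share k-1 nodes (here k = 3).
for community in k_clique_communities(G, 3):
    print(sorted(community))
# Expected output: [1, 2, 3, 4] and [5, 6, 7, 8]; the bridge edge joins neither.
```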