
    Using honeypots to trace back amplification DDoS attacks

    In today’s interconnected world, Denial-of-Service attacks can cause great harm by simply rendering a target system or service inaccessible. Amongst the most powerful and widespread DoS attacks are amplification attacks, in which thousands of vulnerable servers are tricked into reflecting and amplifying attack traffic. However, as these attacks inherently rely on IP spoofing, the true attack source is hidden. Consequently, going after the offenders behind these attacks has so far been deemed impractical. This thesis presents a line of work that enables practical attack traceback supported by honeypot reflectors. To this end, we investigate the tradeoffs between applicability, required a priori knowledge, and traceback granularity in three settings. First, we show how spoofed attack packets and non-spoofed scan packets can be linked using honeypot-induced fingerprints, which allows attributing attacks launched from the same infrastructures as scans. Second, we present a classifier-based approach to trace back attacks launched from booter services after collecting ground-truth data through self-attacks. Third, we propose to use BGP poisoning to locate the attacking network without prior knowledge and even when attack and scan infrastructures are disjoint. Finally, as all of our approaches rely on honeypot reflectors, we introduce an automated end-to-end pipeline to systematically find amplification vulnerabilities and synthesize corresponding honeypots.
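
    The first setting above links spoofed attack packets to non-spoofed scan packets via honeypot-induced fingerprints. As a loose illustration of that linking idea only (not the thesis's actual fingerprinting mechanism), the sketch below has a honeypot embed a unique token in each scan response and attribute any later attack request that replays it; the class name and payload format are invented for this example.

```python
# Minimal sketch, assuming a honeypot that can tag scan responses: a unique
# token is handed to every scanner, and a spoofed attack that replays the
# token is attributed to the infrastructure that scanned earlier.
import secrets

class FingerprintingHoneypot:
    def __init__(self):
        self.token_to_scanner = {}  # token -> source IP observed during the scan

    def handle_scan(self, scanner_ip: str) -> str:
        """Answer a (non-spoofed) scan with a response carrying a unique token."""
        token = secrets.token_hex(8)
        self.token_to_scanner[token] = scanner_ip
        return f"RESPONSE:{token}"

    def attribute_attack(self, attack_payload: str):
        """If a spoofed attack replays a known token, return the linked scanner IP."""
        token = attack_payload.removeprefix("REQUEST:")
        return self.token_to_scanner.get(token)

hp = FingerprintingHoneypot()
reply = hp.handle_scan("198.51.100.7")           # scanner probes the honeypot
token = reply.split(":", 1)[1]                   # attacker reuses what the scan learned
print(hp.attribute_attack(f"REQUEST:{token}"))   # -> 198.51.100.7
```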

    A privacy preserving framework for cyber-physical systems and its integration in real world applications

    A cyber-physical system (CPS) comprises a network of processing- and communication-capable sensors and actuators that are pervasively embedded in the physical world. These intelligent computing elements achieve tight combination and coordination between logic processing and physical resources. It is envisioned that CPS will have great economic and societal impact and alter the quality of life much as the Internet has done. This dissertation focuses on privacy issues in current and future CPS applications. As thousands of intelligent devices are deeply embedded in human societies, system operations may disclose sensitive information if no privacy-preserving mechanism is in place. This dissertation identifies data privacy and location privacy as representative problems through which to investigate privacy in CPS. Data content privacy is infringed if the adversary can determine, or partially determine, the meaning of transmitted or stored data. Location privacy, on the other hand, is the secrecy of the association between a sensed object and a specific location, whose disclosure may endanger the sensed object. Location privacy may be compromised by the adversary through hop-by-hop traceback along the reverse direction of the message routing path. This dissertation proposes a public-key-based access control scheme to protect data content privacy. Recent advances in efficient public-key schemes, such as ECC, have shown the feasibility of using public-key cryptography on low-power devices, including sensor motes. In this dissertation, an efficient public-key security primitive, WM-ECC, has been implemented for TelosB and MICAz, the two major hardware platforms in current sensor networks. WM-ECC achieves the best performance among academic implementations. Based on WM-ECC, this dissertation designs various security schemes, including pairwise key establishment, user access control, and false data filtering, to protect data content privacy. The experiments presented in this dissertation show that the proposed schemes are practical for real-world applications. To protect location privacy, this dissertation considers two adversary models. For the first model, in which an adversary has limited radio detection capability, privacy-aware routing schemes are designed to slow down the adversary's traceback progress. Through theoretical analysis, this dissertation shows how to maximize the adversary's traceback time given a power consumption budget for message routing. Based on the theoretical results, this dissertation also proposes a simple and practical weighted random stride (WRS) routing scheme. The second model assumes a more powerful adversary that is able to monitor all radio communications in the network. For this model, this dissertation proposes a random schedule scheme in which each node transmits in a certain time slot of each period, so that the adversary cannot profile differences in communication patterns among the nodes. Finally, this dissertation integrates the proposed privacy-preserving framework into Snoogle, a sensor-node-based search engine for the physical world. Snoogle allows people to search for physical objects in their vicinity, and the previously proposed privacy-preserving schemes are applied to achieve flexible and resilient privacy protection.
In addition to security and privacy, Snoogle incorporates a number of energy-saving and communication-compression techniques carefully designed for systems composed of low-cost, low-power embedded devices. The evaluation comprises real-world experiments on a prototype Snoogle system and scalability simulations.
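
    The weighted random stride (WRS) routing scheme trades message-routing energy for adversary traceback time. The toy below is one possible reading of that idea, not the dissertation's algorithm: each message takes strides of random length, with each stride's direction biased toward the sink; the grid topology, `stride_max`, and `toward_bias` are invented knobs.

```python
# Toy sketch of weighted-random-stride routing on a grid: a biased random
# walk in strides produces a longer, less predictable path than shortest-path
# forwarding, which slows hop-by-hop traceback at an energy cost.
import random

def wrs_route(src, sink, stride_max=4, toward_bias=0.7):
    """Return a path of grid cells from src to sink (both (x, y) tuples)."""
    path, pos = [src], src
    while pos != sink:
        dx = (sink[0] > pos[0]) - (sink[0] < pos[0])  # sign toward sink
        dy = (sink[1] > pos[1]) - (sink[1] < pos[1])
        # With probability toward_bias, stride toward the sink; otherwise wander.
        if random.random() < toward_bias and (dx or dy):
            step = (dx, 0) if dx else (0, dy)
        else:
            step = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        for _ in range(random.randint(1, stride_max)):
            pos = (pos[0] + step[0], pos[1] + step[1])
            path.append(pos)
            if pos == sink:
                break
    return path

print(len(wrs_route((0, 0), (10, 10))))  # usually well above the 20-hop shortest path
```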

    ENTERPRISE SECURITY ANALYSIS INCLUDING DENIAL OF SERVICE COUNTERMEASURES

    Computer networks are the nerve systems of modern enterprises. Unfortunately, these networks are subject to numerous attacks, and safeguarding them is challenging. In this thesis we describe current threats to enterprise security before concentrating on the Distributed Denial of Service (DDoS) problem. DDoS attacks on popular websites like Amazon, Yahoo, CNN, eBay, and Buy.com, and the recent acts of war using DDoS attacks against NATO ally Estonia [1], graphically illustrate the seriousness of these attacks. Denial of Service (DoS) attacks are explicit attempts to block legitimate users' system access by reducing system availability [2]. A DDoS attack deploys multiple attacking entities to attain this goal [3]. Unfortunately, DDoS attacks are difficult to prevent, and the solutions proposed to date are insufficient. This thesis uses combinatorial game theory to analyze the dynamics of DDoS attacks on an enterprise and to find traffic adaptations that counter the attack. This work builds on the DDoS analysis in [4]. The approach we present designs networks with a structure that either resists DDoS attacks or adapts around them. The attacker (Red) launches a DDoS on the distributed application (Blue). Both Red and Blue play an abstract board game defined on a capacitated graph, where nodes have limited CPU capacities and edges have bandwidth constraints. Our technique provides two important results that aid in designing DDoS-resistant systems: 1. It quantifies the resources an attacker needs to disable a distributed application; the design alternative that maximizes this value will be the least vulnerable to DDoS attacks. 2. When the attacker does not have enough resources to satisfy the limit in 1, we provide near-optimal strategies for reconfiguring the distributed application in response to attempted DDoS attacks. Our analysis starts by finding the feasible network configurations for Blue that satisfy its computation and communication requirements. The min-cut sets [5] of these configurations are the locations most vulnerable to packet-flooding DDoS attacks. Red places 'zombie' processes on the graph that consume network bandwidth and attempts to break Blue's communication links; Blue reconfigures its network to re-establish communications. We analyze this board game using the theory of surreal numbers [6]. If Blue can make the game 'loopy' (i.e., move to one of its previous configurations), it wins [7]. If Red creates a situation where Blue cannot successfully reconfigure the network, it wins. In practice, each enterprise relies on multiple distributed processes; similarly, an attacker cannot expect to destroy all of the processes used by the enterprise at any point in time, and will instead try to maximize the number of processes it can disable. This situation describes a 'sum of games' problem [6], in which Blue and Red alternate moves. We adapt Berlekamp's strategies for Go endgames to tractably find near-optimal reconfiguration regimes for this PSPACE-complete problem [6], [7].
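
    The min-cut step in this analysis lends itself to a short sketch: on a capacitated graph, the minimum cut between the nodes Blue must keep connected bounds the flooding bandwidth Red needs (result 1 above). The topology, capacities, and node names below are invented; the computation uses networkx's standard max-flow/min-cut routine, not the thesis's game solver.

```python
# Sketch: the min-cut between Blue's endpoints marks the links most
# vulnerable to packet flooding and quantifies the resources Red needs.
import networkx as nx

G = nx.DiGraph()
G.add_edge("client", "fw", capacity=10)
G.add_edge("fw", "lb", capacity=5)
G.add_edge("fw", "backup", capacity=3)
G.add_edge("lb", "app", capacity=5)
G.add_edge("backup", "app", capacity=3)

cut_value, (reachable, unreachable) = nx.minimum_cut(G, "client", "app")
print(cut_value)   # 8: flooding bandwidth Red needs to disconnect client from app
cut_edges = {(u, v) for u in reachable for v in G[u] if v in unreachable}
print(cut_edges)   # {('fw', 'lb'), ('fw', 'backup')}: the most vulnerable links
```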

    TRACEABILITY IN THE U.S. FOOD SUPPLY: ECONOMIC THEORY AND INDUSTRY STUDIES

    This investigation into the traceability baseline in the United States finds that private-sector food firms have developed a substantial capacity to trace. Traceability systems are a tool to help firms manage the flow of inputs and products to improve efficiency, product differentiation, food safety, and product quality. Firms balance the private costs and benefits of traceability to determine the efficient level of traceability. In cases of market failure, where the private-sector supply of traceability is not socially optimal, the private sector has developed a number of mechanisms to correct the problem, including contracting, third-party safety/quality audits, and industry-maintained standards. The best-targeted government policies for strengthening firms' incentives to invest in traceability are aimed at ensuring that unsafe or falsely advertised foods are quickly removed from the system, while allowing firms the flexibility to determine the manner of removal. Possible policy tools include timed recall standards, increased penalties for distribution of unsafe foods, and increased foodborne-illness surveillance. Keywords: traceability, tracking, traceback, tracing, recall, supply-side management, food safety, product differentiation, Food Consumption/Nutrition/Food Safety, Industrial Organization.

    Statistical learning in network architecture

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]). The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsically dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods and develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams; our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing the incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential of learning methods to address several distinct problems on the Internet, and second to illuminate design principles for building such intelligent systems into network architecture. By Robert Edward Beverly IV, Ph.D.
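
    One concrete way to read the "ensembles of weak classifiers" component is boosted decision stumps over simple per-packet features. The sketch below uses scikit-learn's AdaBoost (whose default weak learner is a depth-1 tree) on synthetic data; the features and their distributions are invented for illustration and are not the thesis's.

```python
# Sketch: an ensemble of weak classifiers flagging packets whose features
# are inconsistent with a legitimate source address (synthetic toy data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n = 2000
# Per-packet features: observed TTL, |TTL - TTL expected for the claimed
# source|, and packets per second. Distributions below are made up.
legit = np.column_stack([rng.normal(52, 4, n), abs(rng.normal(1, 0.5, n)), rng.normal(10, 3, n)])
spoof = np.column_stack([rng.normal(48, 12, n), abs(rng.normal(8, 3, n)), rng.normal(40, 10, n)])
X = np.vstack([legit, spoof])
y = np.array([0] * n + [1] * n)  # 1 = forged source address

# AdaBoost's default weak learner is a depth-1 decision tree (a "stump").
ensemble = AdaBoostClassifier(n_estimators=50)
ensemble.fit(X, y)
print(ensemble.score(X, y))  # near 1.0 on this easily separable toy data
```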

    Dynamic Protocol Reverse Engineering: A Grammatical Inference Approach

    Round-trip engineering of software from source code and reverse engineering of software from binary files have both been extensively studied, and the state of practice has documented tools and techniques. Forward engineering of protocols has also been extensively studied, and there are firmly established techniques for generating correct protocols. While observation of protocol behavior for performance testing has been studied and techniques established, reverse engineering of protocol control flow from observations of protocol behavior has not received the same level of attention. The state of practice in reverse engineering the control flow of computer network protocols consists mostly of ad hoc approaches. We examine state-of-practice tools and techniques used in three open source projects: Pidgin, Samba, and rdesktop. We examine techniques proposed by computational learning researchers for grammatical inference. We propose to extend the state of the art by inferring protocol control flow using grammatical-inference-inspired techniques to reverse engineer automata representations from captured data flows. We present evidence that grammatical inference is applicable to the problem domain under consideration.
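
    A common starting point for the grammatical-inference techniques referred to here is a prefix tree acceptor (PTA) built from observed message sequences, which state-merging algorithms such as RPNI then generalize into a smaller automaton. The sketch below builds only the PTA; the message types and captured sessions are hypothetical.

```python
# Sketch: build a prefix tree acceptor (PTA) over protocol message types,
# the automaton that state-merging inference algorithms start from.
from collections import defaultdict

def build_pta(sessions):
    """Return (transitions, accepting_states) for a PTA over message sequences."""
    transitions = defaultdict(dict)  # state -> {symbol: next_state}
    accepting, next_state = set(), 1
    for seq in sessions:
        state = 0                    # state 0 is the shared initial state
        for sym in seq:
            if sym not in transitions[state]:
                transitions[state][sym] = next_state
                next_state += 1
            state = transitions[state][sym]
        accepting.add(state)         # each observed session ends in an accept state
    return dict(transitions), accepting

# Message-type sequences captured from hypothetical client sessions.
sessions = [("HELLO", "AUTH", "DATA", "BYE"),
            ("HELLO", "AUTH", "BYE"),
            ("HELLO", "AUTH", "DATA", "DATA", "BYE")]
trans, accept = build_pta(sessions)
print(trans[0])  # {'HELLO': 1}: every observed session starts with HELLO
```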

    Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork

    This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with only necessary but sufficient constraints on freedom. That is, both freedom for applications to evolve new innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing. The big idea on which the research is built is a novel feedback arrangement termed ‘re-feedback’. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure Internet resource allocation can be controlled either by local policies or by market selection (or indeed local lack of any control). The current Internet architecture is designed to reveal path congestion only to end-points, not networks. The collective actions of self-interested consumers and providers should drive Internet resource allocations towards maximisation of total social welfare, but without visibility of a cost metric, network operators are violating the architecture to improve their customers’ experience. The resulting fight against the architecture is destroying the Internet’s simplicity and ability to evolve. Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
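
    The core re-feedback mechanism can be shown numerically: the receiver feeds whole-path congestion back, the sender re-inserts it into outgoing datagrams, and the field then reads expected downstream congestion at every hop, declining to roughly zero at the receiver if the sender declared honestly. The per-hop marking probabilities below are invented, and this is a conceptual toy rather than the re-ECN wire encoding.

```python
# Conceptual sketch of re-feedback: a per-packet field initialised to the
# fed-back path congestion, decremented by each hop's own marking, so any
# node can read the congestion expected *downstream* of itself.
hops = [0.00, 0.01, 0.00, 0.02, 0.01]   # per-hop congestion marking probability

path_congestion = sum(hops)              # receiver feeds this back to the sender
field = path_congestion                  # sender re-inserts it into each datagram

for i, h in enumerate(hops):
    print(f"before hop {i}: expected downstream congestion ~ {field:.2f}")
    field -= h                           # each hop declares its own marking
print(f"at receiver: {field:.2f}")       # ~0.00 when the sender was honest
```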

    The use of computational intelligence for security in named data networking

    Information-Centric Networking (ICN) has recently been considered a promising paradigm for the next-generation Internet, shifting from the sender-driven end-to-end communication paradigm to a receiver-driven content retrieval paradigm. In ICN, content, rather than hosts as in IP-based designs, plays the central role in communications. This change from host-centric to content-centric has several significant advantages, such as network load reduction, low dissemination latency, and scalability. One of the main design requirements for ICN architectures, from the beginning of their design, has been strong security. Named Data Networking (NDN), also referred to as Content-Centric Networking (CCN) or Data-Centric Networking (DCN), is one of these architectures and the focus of an ongoing research effort that aims to become the way the Internet will operate in the future. Existing research into the security of NDN is at an early stage and many designs are still incomplete; to make NDN a fully working system at Internet scale, there are still many missing pieces to be filled in. In this dissertation, we study the four most important security issues in NDN: anomaly detection, DoS/DDoS attacks, congestion control, and cache pollution attacks. The goal is to defend against new forms of potentially unknown attacks, ensure privacy, achieve high availability, and block malicious network traffic belonging to attackers, or at least limit its effectiveness. To protect the NDN infrastructure, we need flexible, adaptable and robust defense systems that can make intelligent, real-time decisions enabling network entities to behave in an adaptive and intelligent manner. In this context, the characteristics of Computational Intelligence (CI) methods, such as adaptation, fault tolerance, high computational speed and resilience to noisy information, make them suitable for application to the problem of NDN security and highlight promising new research directions. Hence, we suggest new hybrid CI-based methods to make NDN a more reliable and viable architecture for the future Internet.
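
    Of the four problems studied, anomaly detection is the easiest to sketch: flag a name prefix whose Interest rate jumps well above its recent baseline, as an Interest-flooding attack would cause. The window, threshold, and traffic counts below are invented, and this simple statistical detector merely stands in for the hybrid CI methods the dissertation proposes.

```python
# Sketch: flag intervals where a prefix's Interest count exceeds its moving
# baseline by more than k standard deviations (toy Interest-flooding detector).
from statistics import mean, stdev

def flag_anomalies(rates, window=5, k=3.0):
    """rates: per-interval Interest counts for one prefix. Yields anomalous indexes."""
    for i in range(window, len(rates)):
        base = rates[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if rates[i] > mu + k * max(sigma, 1.0):  # floor sigma to avoid flapping on flat traffic
            yield i

interest_counts = [95, 102, 99, 104, 98, 101, 97, 890, 950, 100]
# -> [7]: the onset of the flood; later points poison the naive baseline,
# which is one reason the dissertation argues for more adaptive methods.
print(list(flag_anomalies(interest_counts)))
```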

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC.
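
    As one concrete instance of a metaheuristic guiding a low-level neighborhood move past local optima, the sketch below applies simulated annealing to a toy three-machine makespan (assignment) problem; the job list, cooling schedule, and parameter values are illustrative.

```python
# Sketch: simulated annealing, a metaheuristic that accepts occasional
# worsening moves to escape the local optima that trap plain hill-climbing.
import math, random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    x, best, t = x0, x0, t0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        # Accept a worse move with probability exp(-delta / t): the escape hatch.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling                     # gradually reduce willingness to go uphill
    return best

jobs = [7, 3, 9, 2, 5, 8, 4]             # job durations to assign to 3 machines
cost = lambda assign: max(sum(jobs[j] for j in range(len(jobs)) if assign[j] == m)
                          for m in range(3))   # makespan of an assignment
def neighbor(a):
    b = list(a)
    b[random.randrange(len(b))] = random.randrange(3)  # move one job
    return tuple(b)

start = tuple(random.randrange(3) for _ in jobs)
print(cost(simulated_annealing(cost, neighbor, start)))  # typically 13, the optimum here
```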