
    Know Your Enemy: Stealth Configuration-Information Gathering in SDN

    Software Defined Networking (SDN) is a network architecture that aims at providing high flexibility through the separation of the network logic from the forwarding functions. The industry has already widely adopted SDN, and researchers have thoroughly analyzed its vulnerabilities, proposing solutions to improve its security. However, we believe important security aspects of SDN are still left uninvestigated. In this paper, we raise the concern that an attacker can obtain detailed knowledge about an SDN network. In particular, we introduce a novel attack, named Know Your Enemy (KYE), by means of which an attacker can gather vital information about the configuration of the network. This information ranges from the configuration of security tools, such as attack detection thresholds for network scanning, to general network policies like QoS and network virtualization. Additionally, we show that an attacker can perform a KYE attack in a stealthy fashion, i.e., without the risk of being detected. We underline that the vulnerability exploited by the KYE attack is specific to SDN and is not present in legacy networks. To address the KYE attack, we also propose an active defense countermeasure based on network flow obfuscation, which considerably increases the complexity of a successful attack. Our solution offers provable security guarantees that can be tailored to the needs of the specific network under consideration.
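    As a hedged illustration of one way such configuration information could leak (the probe functions and the threshold below are assumptions, not the paper's tooling): when a flow rule is installed reactively, the first packet of a new flow pays an extra switch-to-controller round trip that later packets do not, so comparing the two latencies hints at how that traffic class is handled.

```python
import socket
import time

# Illustrative threshold: how much slower the first probe of a new flow must be
# before we guess a reactive flow rule was installed for it (assumption).
PROBE_GAP_THRESHOLD = 0.005  # seconds

def probe_rtt(dst_ip: str, dst_port: int, payload: bytes = b"x") -> float:
    """Send one UDP probe and measure the round trip to an echoing endpoint."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(1.0)
        start = time.monotonic()
        s.sendto(payload, (dst_ip, dst_port))
        try:
            s.recvfrom(1024)
        except socket.timeout:
            return float("inf")
        return time.monotonic() - start

def looks_reactively_handled(dst_ip: str, dst_port: int) -> bool:
    """If the rule for this flow is installed on demand, the first probe pays the
    controller round trip and the second probe hits the freshly cached rule."""
    first = probe_rtt(dst_ip, dst_port)
    second = probe_rtt(dst_ip, dst_port)
    return first - second > PROBE_GAP_THRESHOLD
```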

    Load Balancing a Cluster of Web Servers using Distributed Packet Rewriting

    In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether the connection should be redirected to a different host, namely the host with the fewest established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to determine whether it has to be redirected. Load information is maintained using periodic broadcasts amongst the cluster hosts. National Science Foundation (CCR-9706685); Microsoft
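    A minimal sketch of the redirect decision described above, assuming each host caches the connection counts learned from the periodic load broadcasts; the class and method names are illustrative, not the prototype's code.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterState:
    self_addr: str
    # Established-connection counts per host, learned from periodic broadcasts.
    load: dict = field(default_factory=dict)

    def record_broadcast(self, host: str, established_connections: int) -> None:
        self.load[host] = established_connections

    def least_loaded_host(self) -> str:
        return min(self.load, key=self.load.get)

    def redirect_decision(self) -> tuple[bool, str]:
        """Return (should_redirect, target): redirect only if another host is
        currently handling fewer established connections than this one."""
        target = self.least_loaded_host()
        return target != self.self_addr, target

# Example: this host (10.0.0.1) receives a SYN while 10.0.0.2 is least loaded.
state = ClusterState("10.0.0.1", {"10.0.0.1": 12, "10.0.0.2": 7, "10.0.0.3": 9})
should_redirect, target = state.redirect_decision()  # -> (True, "10.0.0.2")
```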

    EPI framework: Approach for traffic redirection through containerised network functions


    HTTP 1.2: Distributed HTTP for Load Balancing Server Systems

    Content hosted on the Internet must appear robust and reliable to clients relying on such content. As more clients come to rely on content from a source, that source can be subjected to high levels of load. There are a number of solutions, collectively called load balancers, which try to solve the load problem through various means. All of these solutions are workarounds for problems inherent in the medium by which content is served, which limits their effectiveness. HTTP, or Hypertext Transfer Protocol, is the dominant mechanism behind hosting content on the Internet through websites. The Internet has changed drastically over its history, with the invention of new protocols, distribution methods, and technological improvements. However, HTTP has undergone only three versions since its inception in 1991, and all three versions serve content as a text stream that cannot be interrupted to allow for load balancing decisions. We propose a solution that takes existing portions of HTTP, augments them, and adds some new features to improve the usability and manageability of serving content over the Internet by allowing redirection of content in-stream. This in-stream redirection introduces a new step into the client-server connection in which servers can make decisions while continuing to serve content to the client. Load balancing methods can then use the new version of HTTP to make better decisions when applied to multi-server systems, making load balancing more robust and giving more control over the client-server interaction.
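    The abstract does not give the proposal's wire format, so the following is only a hypothetical sketch of an in-stream redirection point: while streaming chunks, the server checks its own load and, if overloaded, emits a made-up redirect directive telling the client where to resume; the directive syntax, threshold, and function names are all assumptions.

```python
# Tunable, illustrative load threshold above which the server hands off mid-stream.
LOAD_THRESHOLD = 0.8

def serve_in_stream(chunks, current_load, peer_server):
    """Yield response chunks, but stop and point the client at a peer if this
    server becomes overloaded partway through the response."""
    for index, chunk in enumerate(chunks):
        if current_load() > LOAD_THRESHOLD:
            # Hypothetical in-stream directive: which server to contact and
            # from which chunk to resume the transfer.
            yield f"REDIRECT {peer_server} resume-at={index}\r\n".encode()
            return
        yield chunk

# Example: pretend the server becomes overloaded before the third chunk.
loads = iter([0.3, 0.5, 0.95])
body = list(serve_in_stream([b"a" * 10, b"b" * 10, b"c" * 10],
                            lambda: next(loads), "backup.example.org"))
```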

    EGOIST: Overlay Routing Using Selfish Neighbor Selection

    A foundational issue underlying many overlay network applications, ranging from routing to P2P file sharing, is that of connectivity management, i.e., folding new arrivals into an existing overlay and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretic models in the design of Egoist, a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications. National Science Foundation (CISE/CSR 0720604, ENG/EFRI 0735974, CISE/CNS 0524477, CNS/NeTS 0520166, CNS/ITR 0205294; CISE/EIA RI 0202067; CAREER 04446522); European Commission (RIDS-011923)
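    A hedged sketch of the best-response re-wiring step that selfish neighbor selection builds on: taking everyone else's links as fixed, a node picks the k outgoing neighbors that minimize the sum of its shortest-path delays to all destinations. The brute-force search below is illustrative only; Egoist's actual primitives and cost metrics are richer than this.

```python
import heapq
from itertools import combinations

def shortest_costs(edges, src):
    """Dijkstra over directed weighted overlay edges given as {(u, v): delay}."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def best_response(node, k, delay, others_edges, all_nodes):
    """Re-wire `node`'s k outgoing links to minimize its total shortest-path
    delay to every other node, holding the rest of the overlay fixed."""
    best_nbrs, best_cost = None, float("inf")
    for nbrs in combinations([v for v in all_nodes if v != node], k):
        edges = dict(others_edges)
        for v in nbrs:
            edges[(node, v)] = delay[(node, v)]
        dist = shortest_costs(edges, node)
        cost = sum(dist.get(v, float("inf")) for v in all_nodes if v != node)
        if cost < best_cost:
            best_nbrs, best_cost = nbrs, cost
    return best_nbrs, best_cost
```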

    HORNET: High-speed Onion Routing at the Network Layer

    We present HORNET, a system that enables high-speed end-to-end anonymous channels by leveraging next-generation network architectures. HORNET is designed as a low-latency onion routing system that operates at the network layer, thus enabling a wide range of applications. Our system uses only symmetric cryptography for data forwarding yet requires no per-flow state on intermediate nodes. This design enables HORNET nodes to process anonymous traffic at over 93 Gb/s. HORNET can also scale as required, adding minimal processing overhead per additional anonymous channel. We discuss design and implementation details, as well as a performance and security evaluation. Comment: 14 pages, 5 figures
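    To make the layered symmetric-cryptography idea concrete, here is a toy sketch, assuming one pre-established symmetric key per hop and using the third-party cryptography package; the source wraps the payload in one layer per hop and each hop peels exactly one layer with its own key. HORNET's real design additionally carries per-hop routing state inside an anonymous packet header so that routers keep no per-flow state, which this sketch does not show.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# One symmetric key per hop, assumed to have been established during path setup.
hop_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(payload: bytes, keys) -> bytes:
    """Source side: add one encryption layer per hop; the innermost layer
    belongs to the last hop, the outermost to the first hop."""
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def peel(onion: bytes, key: bytes) -> bytes:
    """Each hop strips exactly one layer using only its own symmetric key."""
    return Fernet(key).decrypt(onion)

packet = wrap(b"anonymous payload", hop_keys)
for key in hop_keys:           # the packet traverses hops 0, 1, 2 in order
    packet = peel(packet, key)
assert packet == b"anonymous payload"
```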

    Object as a Service (OaaS): Enabling Object Abstraction in Serverless Clouds

    The Function as a Service (FaaS) paradigm is becoming widespread and is envisioned as the next generation of cloud systems that mitigate the burden for programmers and cloud solution architects. However, the FaaS abstraction only makes the cloud resource management aspects transparent; it does not deal with the application data aspects. As such, developers have to undergo the burden of managing the application data, often via separate cloud services (e.g., AWS S3). Similarly, the FaaS abstraction does not natively support function workflows, hence developers often have to work with workflow orchestration services (e.g., AWS Step Functions) to build workflows. Moreover, they have to explicitly navigate the data throughout the workflow. To overcome these problems of FaaS, we design a higher-level cloud programming abstraction that hides these complexities and mitigates the burden of cloud-native application development. We borrow the notion of an object from object-oriented programming and propose a new abstraction level atop the function abstraction, known as Object as a Service (OaaS). OaaS encapsulates the application data and functions into the object abstraction and relieves developers from resource and data management burdens. It also unlocks opportunities for built-in optimization features, such as software reusability, data locality, and caching. OaaS natively supports dataflow programming such that developers define a workflow of functions transparently, without getting involved in data navigation, synchronization, and parallelism aspects. We implemented a prototype of the OaaS platform and evaluated it under real-world settings against state-of-the-art platforms with respect to imposed overhead, scalability, and ease of use. The results demonstrate that OaaS streamlines cloud programming and offers scalability with insignificant overhead on the underlying cloud system. Comment: This version of the paper has been significantly altered and new observations have been obtained. Therefore, we withdraw the paper until the new version becomes available.
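    A purely hypothetical sketch of what an object-as-a-service definition could look like, bundling state and functions so the platform, not the developer, handles persistence and invocation routing; the decorator and class names are assumptions, not the prototype's API.

```python
from dataclasses import dataclass, field

def oaas_function(fn):
    """Marker for methods the (hypothetical) platform exposes as invocable functions."""
    fn.__oaas_exposed__ = True
    return fn

@dataclass
class VideoObject:
    # State encapsulated in the object; under the proposed abstraction the platform
    # persists it, so the developer writes no explicit object-store (e.g. S3) calls.
    raw: bytes = b""
    transcoded: dict = field(default_factory=dict)

    @oaas_function
    def transcode(self, resolution: str) -> str:
        self.transcoded[resolution] = b"...encoded bytes..."
        return f"transcoded to {resolution}"
```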