
    Dataplane Specialization for High-performance OpenFlow Software Switching

    OpenFlow is an amazingly expressive dataplane programming language, but this expressiveness comes at a severe performance price as switches must do excessive packet classification in the fast path. The prevalent OpenFlow software switch architecture is therefore built on flow caching, but this imposes intricate limitations on the workloads that can be supported efficiently and may even open the door to malicious cache overflow attacks. In this paper we argue that instead of enforcing the same universal flow cache semantics on all OpenFlow applications and optimizing for the common case, a switch should rather automatically specialize its dataplane piecemeal with respect to the configured workload. We introduce ESwitch, a novel switch architecture that uses on-the-fly template-based code generation to compile any OpenFlow pipeline into efficient machine code, which can then be readily used as the fast path. We present a proof-of-concept prototype and demonstrate on illustrative use cases that ESwitch yields a simpler architecture, superior packet processing speed, improved latency and CPU scalability, and predictable performance. Our prototype can easily scale beyond 100 Gbps on a single Intel blade even with complex OpenFlow pipelines.
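
    The core idea in the abstract is to specialize the fast path to the configured pipeline instead of pushing every packet through a generic classifier. The sketch below is a hypothetical illustration in plain C, not ESwitch's actual template engine or generated machine code: it contrasts a generic field-masking classifier with a match function "specialized" for a pipeline that only ever matches a /24 IPv4 destination prefix, which is the kind of collapse a template-based code generator could emit.

    /* Hypothetical illustration of dataplane specialization: a generic
     * OpenFlow-style classifier vs. a match function specialized for a
     * pipeline that only matches the IPv4 destination address.
     * Structures and field names are invented for this sketch. */
    #include <stdint.h>
    #include <stdio.h>

    struct pkt  { uint32_t ipv4_src, ipv4_dst; uint16_t tcp_dst; };
    struct rule { uint32_t dst, dst_mask; int action; };

    /* Generic path: mask and compare per rule. */
    static int classify_generic(const struct pkt *p,
                                const struct rule *rules, int n)
    {
        for (int i = 0; i < n; i++)
            if ((p->ipv4_dst & rules[i].dst_mask) == rules[i].dst)
                return rules[i].action;
        return -1; /* miss */
    }

    /* Specialized path: the "generated" code knows the pipeline only
     * matches a /24 destination prefix, so the mask is a constant and
     * the lookup collapses to one comparison per rule. */
    static int classify_specialized_dst24(const struct pkt *p,
                                          const struct rule *rules, int n)
    {
        uint32_t key = p->ipv4_dst & 0xFFFFFF00u;
        for (int i = 0; i < n; i++)
            if (key == rules[i].dst)
                return rules[i].action;
        return -1;
    }

    int main(void)
    {
        struct rule r = { .dst = 0x0A000100u, .dst_mask = 0xFFFFFF00u, .action = 1 };
        struct pkt  p = { .ipv4_src = 0, .ipv4_dst = 0x0A000105u, .tcp_dst = 80 };
        printf("generic=%d specialized=%d\n",
               classify_generic(&p, &r, 1),
               classify_specialized_dst24(&p, &r, 1));
        return 0;
    }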

    Baguette: Towards end-to-end service orchestration in heterogeneous networks

    Network services are the key mechanism for operators to introduce intelligence and generate profit from their infrastructures. The growth of the number of network users and stricter application network requirements have highlighted a number of challenges in orchestrating services using existing production management and configuration protocols and mechanisms. Recent networking paradigms like Software Defined Networking (SDN) and Network Function Virtualization (NFV) provide a set of novel control and management interfaces that enable unprecedented automation, flexibility, and openness in operator infrastructure management. This paper presents Baguette, a novel and open service orchestration framework for operators. Baguette supports a wide range of network technologies, namely optical and wired Ethernet technologies, and allows service providers to automate the deployment and dynamic re-optimization of network services. We present the design of the orchestrator and elaborate on the integration of Baguette with existing low-level network and cloud management frameworks.

    Packet Filtering Module For PFQ Packet Capturing Engine.

    The evolution of commodity hardware is pushing parallelism forward as the key factor that can allow software to attain hardware-class performance while still retaining its advantages. On one side, commodity CPUs are providing more and more cores (the next-generation Intel Xeon E7500 CPUs will soon make 10-core processors a commodity product), with a complex cache hierarchy that makes careful data placement crucial to good performance. On the other side, server NICs are adapting to these new trends by increasing their own level of parallelism. While traditional 1 Gbps NICs exchanged data with the CPU through a single ring of shared memory buffers, modern 10 Gbps cards support multiple queues: multiple cores can therefore receive and transmit packets in parallel. In particular, incoming packets can be de-multiplexed across CPUs based on a hash function (the so-called RSS technology) or on the MAC address (the VMD-q technology, designed for servers hosting multiple virtual machines). The Linux kernel has recently begun to support these new technologies. Although a great deal of network monitoring software exists, most of it has not yet been designed with high parallelism in mind. Therefore a novel packet capturing engine, named PFQ, was designed that allows efficient capturing and in-kernel aggregation, as well as connection-aware load balancing. The engine is based on a novel lockless queue and allows parallel packet capturing, letting the user-space application arbitrarily define its degree of parallelism. Both legacy applications and natively parallel ones can therefore benefit from the capturing engine. In addition, PFQ outperforms its competitors both in terms of captured packets and CPU consumption. In this thesis, a new packet filtering block is designed, implemented, and added to the existing PFQ capture engine; it drops unnecessary packets before they are copied into kernel space and thus improves the overall performance of the engine considerably. Because network monitors often want only a small subset of network traffic, a dramatic performance gain is realized by filtering out unwanted packets in interrupt context.
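
    The performance argument is that rejecting unwanted packets before they are copied into the capture queue saves the per-packet copy and queueing cost. The following is a minimal user-space sketch of that idea under invented types and a toy queue, not PFQ's actual in-kernel interfaces: a predicate is evaluated on the raw frame, and only accepted frames are copied.

    /* Minimal "filter before copy" sketch: the predicate runs on the raw
     * frame, so rejected traffic never costs a copy into the queue.
     * Types and the queue are invented; PFQ's real filter runs in-kernel. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define QUEUE_SLOTS 256
    #define SNAP_LEN    128

    struct slot { uint8_t data[SNAP_LEN]; uint16_t len; };
    static struct slot queue[QUEUE_SLOTS];
    static unsigned    tail;

    /* Example predicate: keep only IPv4/UDP frames
     * (Ethertype 0x0800, IP protocol 17, no VLAN tag assumed). */
    static int wanted(const uint8_t *frame, uint16_t len)
    {
        if (len < 34) return 0;
        uint16_t ethertype = (uint16_t)frame[12] << 8 | frame[13];
        return ethertype == 0x0800 && frame[23] == 17;
    }

    /* Called once per received frame; copies only if the filter accepts. */
    static void on_rx(const uint8_t *frame, uint16_t len)
    {
        if (!wanted(frame, len))
            return;                       /* dropped before any copy */
        struct slot *s = &queue[tail++ % QUEUE_SLOTS];
        s->len = len < SNAP_LEN ? len : SNAP_LEN;
        memcpy(s->data, frame, s->len);
    }

    int main(void)
    {
        uint8_t udp_frame[64] = {0};
        udp_frame[12] = 0x08; udp_frame[13] = 0x00;  /* IPv4 */
        udp_frame[23] = 17;                          /* UDP  */
        on_rx(udp_frame, sizeof udp_frame);
        printf("queued frames: %u\n", tail);
        return 0;
    }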

    Design and Performance of Scalable High-Performance Programmable Routers - Doctoral Dissertation, August 2002

    The flexibility to adapt to new services and protocols without changes in the underlying hardware is, and will increasingly be, a key requirement for advanced networks. Introducing a processing component into the data path of routers and implementing packet processing in software provides this ability. In such a programmable router, a powerful processing infrastructure is necessary to achieve a level of performance that is comparable to custom silicon-based routers and to demonstrate the feasibility of this approach. This work aims at the general design of such programmable routers and, specifically, at the design and performance analysis of the processing subsystem. The necessity of programmable routers is motivated, and a router design is proposed. Based on the design, a general performance model is developed and quantitatively evaluated using a new network processor benchmark. Operational challenges, like scheduling of packets to processing engines, are addressed, and novel algorithms are presented. The results of this work give qualitative and quantitative insights into this new domain that combines issues from networking, computer architecture, and system design.
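
    One operational challenge the dissertation names is scheduling packets to processing engines. As a hedged illustration of that problem space (not the dissertation's algorithm), the sketch below dispatches packets by flow hash so that a flow stays on one engine, falling back to the least-loaded engine when the home engine's backlog exceeds a threshold; the engine count and threshold are arbitrary.

    /* Hypothetical packet-to-engine scheduler: flow-hash affinity with a
     * least-loaded fallback. Illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    #define ENGINES   4
    #define THRESHOLD 8   /* max backlog before breaking flow affinity */

    static unsigned backlog[ENGINES];

    static unsigned flow_hash(uint32_t src, uint32_t dst,
                              uint16_t sport, uint16_t dport)
    {
        uint32_t h = src ^ dst ^ ((uint32_t)sport << 16 | dport);
        h ^= h >> 16;
        return h;
    }

    static int pick_engine(uint32_t src, uint32_t dst,
                           uint16_t sport, uint16_t dport)
    {
        int home = flow_hash(src, dst, sport, dport) % ENGINES;
        if (backlog[home] < THRESHOLD)
            return home;
        int best = 0;                       /* fallback: least-loaded engine */
        for (int e = 1; e < ENGINES; e++)
            if (backlog[e] < backlog[best])
                best = e;
        return best;
    }

    int main(void)
    {
        for (int i = 0; i < 20; i++) {
            int e = pick_engine(0x0A000001u + i, 0x0A000002u, 1234, 80);
            backlog[e]++;
        }
        for (int e = 0; e < ENGINES; e++)
            printf("engine %d backlog %u\n", e, backlog[e]);
        return 0;
    }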

    GPU Accelerated protocol analysis for large and long-term traffic traces

    This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability, whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution, and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp-voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to accelerate access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600 MBps) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
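
    The classifier's central operation is multi-match filter predicate evaluation: every filter is tested against each packet and all matches are recorded, rather than stopping at the first hit. The sketch below is a scalar C analogue of that idea using invented field and filter layouts; in GPF+ the equivalent evaluation runs per-thread on the GPU, with warp voting used to skip work no thread in a warp needs.

    /* Scalar analogue of multi-match filter evaluation: each filter is
     * tested against a packet's extracted fields and all matches are
     * recorded in a bitmap. Layouts are invented for this sketch. */
    #include <stdint.h>
    #include <stdio.h>

    struct fields { uint8_t proto; uint16_t dport; };
    struct filter { uint8_t proto; uint16_t dport_lo, dport_hi; };

    /* Returns a bitmap with bit i set iff filter i matches. */
    static uint32_t match_all(const struct fields *f,
                              const struct filter *filters, int n)
    {
        uint32_t hits = 0;
        for (int i = 0; i < n && i < 32; i++)
            if (f->proto == filters[i].proto &&
                f->dport >= filters[i].dport_lo &&
                f->dport <= filters[i].dport_hi)
                hits |= 1u << i;
        return hits;
    }

    int main(void)
    {
        struct filter filters[] = {
            { 6,   80,    80 },     /* TCP/80         */
            { 6,  443,   443 },     /* TCP/443        */
            { 17,  53,    53 },     /* UDP/53         */
            { 6, 1024, 65535 },     /* TCP high ports */
        };
        struct fields pkt = { .proto = 6, .dport = 443 };
        printf("match bitmap: 0x%08x\n",
               match_all(&pkt, filters, 4));   /* expect bit 1 set */
        return 0;
    }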

    Real Time Packet Classification and Analysis based on Bloom Filter for Longest Prefix Matching

    Packet classification is an enabling function in network and security systems; hence, hardware-based solutions, such as TCAM (Ternary Content Addressable Memory), have been extensively adopted for high-performance systems. With the rapid improvement of hardware architectures and the burgeoning popularity of multi-core multi-threaded processors, decision-tree based packet classification algorithms such as HiCuts and HyperCuts are attracting considerable attention, owing to their flexibility in satisfying diverse industrial requirements for network and security systems. For high classification speed, these algorithms internally use decision trees whose size increases exponentially with the ruleset size; consequently, they cannot be used with large rulesets. Moreover, these decision-tree algorithms involve complicated heuristics for determining the number of cuts and fields, and fixed interval-based cutting, which does not reflect the actual space that each rule covers, is wasteful and results in a huge storage requirement. We propose a new packet classification scheme that simultaneously supports high scalability and fast classification performance by using a Bloom filter. The Bloom filter, an efficient data structure for membership queries, is used to avoid lookups in subsets that contain no matching rules and to sustain high throughput with a Longest Prefix Matching (LPM) algorithm; the underlying hash table improves performance by providing better bounds on hash collisions and memory accesses per search. The proposed classification algorithm also shows good scalability and high classification speed irrespective of the number of rules. Performance analysis results show that the proposed algorithm enables network and security systems to support heavy traffic in the most effective manner.
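
    The mechanism described is the standard Bloom-filter-assisted LPM pattern: a Bloom filter per prefix length answers "might this prefix exist?" cheaply, so the exact-match table is probed only for lengths that could match, starting from the longest. The sketch below illustrates that flow with a toy two-hash Bloom filter and a trivially small prefix table; it is a conceptual sketch under assumed sizes and hash functions, not the paper's implementation.

    /* Sketch of Bloom-filter-assisted longest prefix matching: one Bloom
     * filter per prefix length screens out lengths with no possible match,
     * and only screened-in lengths are verified against the prefix table. */
    #include <stdint.h>
    #include <stdio.h>

    #define BITS 1024

    struct bloom { uint8_t bits[BITS / 8]; };

    static uint32_t fnv1a(uint32_t key, uint32_t seed)
    {
        uint32_t h = 2166136261u ^ seed;
        for (int i = 0; i < 4; i++) {
            h ^= (key >> (8 * i)) & 0xFF;
            h *= 16777619u;
        }
        return h;
    }

    static void bloom_add(struct bloom *b, uint32_t key)
    {
        for (uint32_t s = 0; s < 2; s++) {          /* two hash functions */
            uint32_t bit = fnv1a(key, s) % BITS;
            b->bits[bit / 8] |= 1u << (bit % 8);
        }
    }

    static int bloom_maybe(const struct bloom *b, uint32_t key)
    {
        for (uint32_t s = 0; s < 2; s++) {
            uint32_t bit = fnv1a(key, s) % BITS;
            if (!(b->bits[bit / 8] & (1u << (bit % 8))))
                return 0;                           /* definitely absent */
        }
        return 1;                                   /* possibly present */
    }

    struct entry { uint32_t prefix; int len, next_hop; };

    static int lpm_lookup(uint32_t addr, const struct bloom bf[33],
                          const struct entry *tab, int n)
    {
        for (int len = 32; len > 0; len--) {
            uint32_t p = addr & ~((1u << (32 - len)) - 1);
            if (!bloom_maybe(&bf[len], p))
                continue;                           /* skip exact lookup */
            for (int i = 0; i < n; i++)             /* verify (hash table in practice) */
                if (tab[i].len == len && tab[i].prefix == p)
                    return tab[i].next_hop;
        }
        return -1;
    }

    int main(void)
    {
        static struct bloom bf[33];
        struct entry tab[] = {
            { 0x0A000000u,  8, 1 },                 /* 10.0.0.0/8  */
            { 0x0A010000u, 16, 2 },                 /* 10.1.0.0/16 */
        };
        for (int i = 0; i < 2; i++)
            bloom_add(&bf[tab[i].len], tab[i].prefix);
        printf("next hop: %d\n", lpm_lookup(0x0A010203u, bf, tab, 2)); /* expect 2 */
        return 0;
    }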

    Intelligent Management and Efficient Operation of Big Data

    This chapter details how Big Data can be used and implemented in networking and computing infrastructures. Specifically, it addresses three main aspects: the timely extraction of relevant knowledge from heterogeneous, and very often unstructured, large data sources; the enhancement of the performance of processing and networking (cloud) infrastructures, which are the most important foundational pillars of Big Data applications or services; and novel ways to efficiently manage network infrastructures with high-level composed policies for supporting the transmission of large amounts of data with distinct requisites (video vs. non-video). A case study involving an intelligent management solution to route data traffic with diverse requirements in a wide area Internet Exchange Point is presented, discussed in the context of Big Data, and evaluated. Comment: In book Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence, IGI Global, 201

    Stability of secure routing protocol in ad hoc wireless network.

    The contributions of this research are threefold. First, it offers a new routing approach for ad hoc wireless network protocols: the Enhanced Heading-direction Angle Routing Protocol (EHARP), which is an enhancement of HARP based on an on-demand routing scheme. We have added important features to overcome its disadvantages and improve its performance, providing the stability and availability required to guarantee the selection of the best path. Each node in the network is able to classify its neighbouring nodes according to their heading directions into four different zone-direction groups. The second contribution is a new Secure Enhanced Heading-direction Angle Routing Protocol (SEHARP) for ad hoc networks based on the integration of security mechanisms that could be applied to the EHARP routing protocol. Thirdly, we present a new approach to security of access in hostile environments based on the history of and relationships among the nodes and on digital operation certificates. We also propose an access activity diagram which explains the steps taken by a node. Security depends on access to the history of each unit, which is used to calculate the cooperative values of each node in the environment.
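
    EHARP's basic primitive is classifying each neighbour into one of four zone-direction groups according to its heading angle. As a hedged sketch (the angle convention and group numbering below are illustrative assumptions, not the thesis's exact definition), the snippet derives a heading angle from a neighbour's velocity vector and buckets it into a 90-degree quadrant.

    /* Hedged sketch of heading-direction zoning: derive a heading angle
     * from a velocity vector and bucket it into one of four 90-degree
     * zone-direction groups. Convention and numbering are assumptions. */
    #include <math.h>
    #include <stdio.h>

    static const double RAD_TO_DEG = 180.0 / 3.14159265358979323846;

    /* Returns 0..3 for headings [0,90), [90,180), [180,270), [270,360). */
    static int zone_group(double vx, double vy)
    {
        double deg = atan2(vy, vx) * RAD_TO_DEG;    /* -180..180 */
        if (deg < 0.0)
            deg += 360.0;                           /* 0..360 */
        return (int)(deg / 90.0) % 4;
    }

    int main(void)
    {
        printf("east-ish  -> group %d\n", zone_group( 1.0,  0.2));
        printf("north-ish -> group %d\n", zone_group(-0.1,  1.0));
        printf("west-ish  -> group %d\n", zone_group(-1.0, -0.2));
        printf("south-ish -> group %d\n", zone_group( 0.3, -1.0));
        return 0;
    }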