
    A Kernel-space POF virtual switch

    Protocol Oblivious Forwarding (POF) aims to provide a standard southbound interface for sustainable Software Defined Networking (SDN) evolution. It overcomes the limitations of the popular OpenFlow protocol (an existing, widely adopted southbound interface) by enhancing the SDN forwarding plane. This paper pioneers the design and implementation of a Kernel-space POF Virtual Switch (K_POFVS) on the Linux platform. K_POFVS improves packet processing speed through fast packet forwarding and the ability to add, delete and modify protocol fields in kernel space. In addition, it enhances flow table matching speed by separating the mask table (consisting of the flow entry masks used to determine the matching fields) from the flow table under a caching mechanism. Furthermore, K_POFVS achieves efficient communication between the kernel space and the user space by extending the Netlink communication between them. Experimental results show that K_POFVS provides much better performance than existing user-space POF virtual switches in terms of packet forwarding delay, packet processing delay and packet transmission rate. This work is partially supported by the National Program on Key Basic Research Project of China (973 Program) under Grant No. 2012CB315803, the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA06010306, the National Natural Science Foundation of China under Grant No. 61303241, and the University of Exeter’s Innovation Platform – Link Fund under Award No. LF207
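
    As a rough illustration of the mask-table/flow-table split described in the abstract, the C sketch below applies each mask from a mask table to a packet's match fields and looks the masked key up in a hashed flow table. All structure and function names (pof_mask, pof_flow_entry, lookup) and the table sizes are hypothetical and are not taken from the K_POFVS source.

        /* Minimal sketch (hypothetical structures) of a mask-table / flow-table
         * split: each mask is applied to the packet's match fields, then the
         * masked key is looked up in a hashed flow table. */
        #include <stdint.h>
        #include <string.h>

        #define KEY_LEN    16   /* bytes of protocol-oblivious match fields */
        #define N_MASKS     8
        #define TABLE_SIZE 1024

        struct pof_mask       { uint8_t bits[KEY_LEN]; };
        struct pof_flow_entry { uint8_t key[KEY_LEN]; int valid; int action; };

        static struct pof_mask       mask_table[N_MASKS];
        static struct pof_flow_entry flow_table[TABLE_SIZE];

        static uint32_t hash_key(const uint8_t *key)
        {
            uint32_t h = 2166136261u;                     /* FNV-1a hash */
            for (int i = 0; i < KEY_LEN; i++) { h ^= key[i]; h *= 16777619u; }
            return h % TABLE_SIZE;
        }

        /* Return the action of the first matching entry, or -1 on a miss. */
        int lookup(const uint8_t *pkt_fields)
        {
            uint8_t masked[KEY_LEN];
            for (int m = 0; m < N_MASKS; m++) {
                for (int i = 0; i < KEY_LEN; i++)
                    masked[i] = pkt_fields[i] & mask_table[m].bits[i];
                struct pof_flow_entry *e = &flow_table[hash_key(masked)];
                if (e->valid && memcmp(e->key, masked, KEY_LEN) == 0)
                    return e->action;                     /* hit */
            }
            return -1;                                    /* miss */
        }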

    BPFabric: Data Plane Programmability for Software Defined Networks

    In its current form, OpenFlow, the de facto implementation of SDN, separates the network’s control and data planes, allowing a central controller to alter the match-action pipeline using a limited set of fields and actions. To support new protocols, forwarding logic, telemetry, monitoring or even middlebox-like functions, the programmability currently available in SDN is insufficient. In this paper, we introduce BPFabric, a platform-, protocol- and language-independent architecture to centrally program and monitor the data plane. BPFabric leverages eBPF, a platform- and protocol-independent instruction set, to define the packet processing and forwarding functionality of the data plane. We introduce a control plane API that allows data plane functions to be deployed on-the-fly, reporting events of interest and exposing network internal state. We present a raw socket and a DPDK implementation of the design, the former for large-scale experimentation using environments such as Mininet and the latter for high-performance, low-latency deployments. We show through examples that functions unrealisable in OpenFlow can leverage this flexibility while achieving performance similar to or better than today’s static designs.
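
    The sketch below is a conventional XDP program in restricted C, shown only to illustrate the kind of platform- and protocol-independent packet function that an eBPF-based data plane executes; BPFabric defines its own pipeline, helper functions and control plane API, none of which are reproduced here.

        /* Standard XDP example for illustration: parse the Ethernet and IPv4
         * headers with the verifier-required bounds checks, then decide the
         * packet's fate. */
        #include <linux/bpf.h>
        #include <linux/if_ether.h>
        #include <linux/ip.h>
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_endian.h>

        SEC("xdp")
        int inspect_packet(struct xdp_md *ctx)
        {
            void *data     = (void *)(long)ctx->data;
            void *data_end = (void *)(long)ctx->data_end;

            struct ethhdr *eth = data;
            if ((void *)(eth + 1) > data_end)
                return XDP_DROP;                     /* malformed frame */

            if (eth->h_proto != bpf_htons(ETH_P_IP))
                return XDP_PASS;                     /* not IPv4: let it through */

            struct iphdr *ip = (void *)(eth + 1);
            if ((void *)(ip + 1) > data_end)
                return XDP_DROP;

            /* Forwarding, telemetry or monitoring logic would go here
             * (e.g. map updates reported back to a controller). */
            return XDP_PASS;
        }

        char _license[] SEC("license") = "GPL";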

    Software-based methods for Operating system dependability

    Guaranteeing correct system behaviour in modern computer systems has become essential, in particular for safety-critical computer-based systems. However, all modern systems are susceptible to transient faults that can disrupt their intended operation and function. To evaluate the sensitivity of such systems, different methods have been developed; among them, fault injection is a valid and widely adopted approach. This document presents a fault injection tool, called Kernel-based Fault-Injection Tool Open-source (KITO), to analyze the effects of faults in memory elements containing kernel data structures belonging to a Unix-based operating system, in particular elements involved in resource synchronization. The tool was evaluated at different stages of its development through experimental analyses in which faults were injected into the operating system while it was stressed by benchmark programs exercising different elements of the Linux kernel. The results showed that KITO was capable of generating faults in different elements of the operating system with limited intrusiveness, and that the kernel data structures involved in synchronization are susceptible to an appreciable set of possible errors, ranging from performance degradation to complete system failure that prevents benchmark applications from performing their tasks. Finally, aiming to overcome the vulnerabilities discovered with KITO, two solutions have been proposed, consisting of the implementation of hardening techniques such as Triple Modular Redundancy and Error Detection And Correction codes in the source code of the Linux kernel. An experimental fault injection analysis has been conducted to evaluate the effectiveness of the proposed solutions. Results have shown that it is possible to successfully detect and correct the noxious effects generated by single faults in kernel data structures of the Linux kernel with a limited performance overhead.
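
    The following self-contained C sketch illustrates the two ideas discussed above under simplified assumptions: a single-bit flip injected into a data word as the transient-fault model, and Triple Modular Redundancy (three copies plus a bitwise majority vote) masking that fault. It is not the KITO code or the actual kernel hardening patch.

        /* Toy fault-injection and TMR example: flip one bit in one of three
         * redundant copies, then recover the correct value by majority vote. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t inject_bit_flip(uint32_t word, unsigned bit)
        {
            return word ^ (1u << bit);           /* single transient fault model */
        }

        struct tmr_u32 { uint32_t copy[3]; };    /* hardened storage */

        static uint32_t tmr_read(const struct tmr_u32 *v)
        {
            /* Bitwise majority vote: a fault in any single copy is corrected. */
            return (v->copy[0] & v->copy[1]) |
                   (v->copy[1] & v->copy[2]) |
                   (v->copy[0] & v->copy[2]);
        }

        int main(void)
        {
            struct tmr_u32 counter = { { 42, 42, 42 } };
            counter.copy[1] = inject_bit_flip(counter.copy[1], 7);   /* fault */
            printf("voted value: %u\n", tmr_read(&counter));         /* still 42 */
            return 0;
        }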

    FLICK: developing and running application-specific network services

    Data centre networks are increasingly programmable, with application-specific network services proliferating, from custom load-balancers to middleboxes providing caching and aggregation. Developers must currently implement these services using traditional low-level APIs, which neither support natural operations on application data nor provide efficient performance isolation. We describe FLICK, a framework for the programming and execution of application-specific network services on multi-core CPUs. Developers write network services in the FLICK language, which offers high-level processing constructs and application-relevant data types. FLICK programs are translated automatically into efficient, parallel task graphs, implemented in C++ on top of a user-space TCP stack. Task graphs have bounded resource usage at runtime, which means that the graphs of multiple services can execute concurrently without interference using cooperative scheduling. We evaluate FLICK with several services (an HTTP load-balancer, a Memcached router and a Hadoop data aggregator), showing that it achieves good performance while reducing development effort.
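
    As a schematic of the cooperative scheduling idea mentioned above (and not the FLICK runtime itself), the C sketch below round-robins over task-graph nodes and lets each one process at most a bounded batch of items per turn before yielding, which is what keeps co-located services from interfering. The task structure and the BATCH_LIMIT bound are illustrative assumptions.

        /* Cooperative round-robin over task-graph nodes with a per-turn
         * work bound, so no task can monopolise a core. */
        #include <stddef.h>

        #define BATCH_LIMIT 32                    /* bound on work per turn */

        struct task {
            /* Processes up to 'budget' items and returns how many were done. */
            size_t (*run)(void *state, size_t budget);
            void   *state;
        };

        void cooperative_loop(struct task *tasks, size_t n_tasks, const int *stop)
        {
            while (!*stop) {
                for (size_t i = 0; i < n_tasks; i++) {
                    /* Each task voluntarily yields after at most BATCH_LIMIT items. */
                    (void)tasks[i].run(tasks[i].state, BATCH_LIMIT);
                }
            }
        }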

    SODALITE: SDN wireless backhauling for dense 4G/5G Small Cell networks

    Dense deployments of Small Cells are key to fulfilling the capacity requirements of future 5G networks. However, two roadblocks to the adoption of Small Cells are i) the limited availability and cost of sites with wired backhaul resources, and ii) the complexity of managing a dense deployment of wireless backhaul nodes. To address these challenges we propose SODALITE, a novel system that applies Software Defined Networking (SDN) to a wireless backhaul network. We present how SODALITE can be integrated into 3GPP’s 4G and 5G architectures, and show the feasibility of SODALITE through LTE network testbed experiments. We substantiate the scalability of SODALITE through stochastic studies using real-life traffic traces from an LTE network and discuss the effects of cell densification and the 5G system architecture on these studies. Further, a reliable backhauling solution for wireless links is introduced in SODALITE through SDN-enabled mechanisms that are capable of reconfiguring the data plane upon detection of a link failure. Its reliability is shown through experiments on an LTE network testbed, and studied thoroughly via rigorous simulations and network emulator evaluations. As a result, we claim that SODALITE is a promising carrier-grade system to manage a wireless Small Cell backhaul.
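
    The C sketch below is a schematic of the failover behaviour described above, under the assumption that each backhaul destination keeps a primary and a backup next hop: when a link failure is detected, the active entry is switched to the backup path. The data structures and function names are hypothetical and do not reflect SODALITE's actual controller logic or southbound messages.

        /* On a link-failure event, move every route using the failed hop onto
         * its pre-computed backup hop. */
        #include <stdio.h>

        struct backhaul_route {
            int dest_cell;       /* Small Cell identifier (illustrative) */
            int primary_hop;
            int backup_hop;
            int active_hop;
        };

        void on_link_failure(struct backhaul_route *routes, int n, int failed_hop)
        {
            for (int i = 0; i < n; i++) {
                if (routes[i].active_hop == failed_hop) {
                    routes[i].active_hop = routes[i].backup_hop;
                    printf("cell %d rerouted via hop %d\n",
                           routes[i].dest_cell, routes[i].active_hop);
                    /* In an SDN backhaul this step would translate into
                     * flow-table updates pushed to the affected nodes. */
                }
            }
        }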

    Evaluation of machine learning techniques for intrusion detection in software defined networking

    The widespread growth of the Internet created the need for a new network architecture, a need that was filled by Software Defined Networking (SDN). SDN separates the control and data planes to overcome the challenges that came with the rapid growth and complexity of the network architecture. However, centralizing the new architecture also introduced new security challenges and created the demand for stronger security measures. The focus here is on an Intrusion Detection System (IDS) for Distributed Denial of Service (DDoS) attacks, which are a serious threat to the network. There are several ways of detecting an attack and, with the rapid growth of machine learning (ML) and artificial intelligence, this study evaluates several ML algorithms for detecting DDoS attacks on the system. Several factors affect the performance of ML-based IDS in SDN; feature selection, the training dataset and the implementation of the classifying models are among the most important, as is the balance between resource usage and the performance of the implemented model. The model implemented in the thesis uses a dataset created from the traffic flow within the system, and the models used are Support Vector Machine (SVM), Naive Bayes, Decision Tree and Logistic Regression. The accuracy of the models has been over 95%, apart from Logistic Regression at 90%. The ML-based algorithms have been more accurate than the non-ML-based algorithm: they learn from different features of the traffic flow to differentiate between normal traffic and attack traffic. Most previously implemented ML-based IDS are based on public datasets; using a dataset created from the flow of the experimental environment allows the model to be trained on real-time data. The experiment only detects the attack traffic and does not take any action against it, but these promising results can be used for further development of the model.
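
    As a toy illustration of one of the evaluated model families (a decision tree over flow features), the C sketch below classifies a flow as DDoS or normal; the chosen features and threshold values are hypothetical placeholders, not the features or splits learned in the thesis.

        /* Hard-coded two-level decision tree over illustrative flow features. */
        #include <stdio.h>

        struct flow_features {
            double packets_per_sec;   /* flow packet rate               */
            double bytes_per_packet;  /* mean packet size               */
            double src_entropy;       /* entropy of source IP addresses */
        };

        int is_ddos(const struct flow_features *f)
        {
            if (f->packets_per_sec > 1000.0) {        /* hypothetical split point  */
                if (f->bytes_per_packet < 128.0)      /* many tiny packets         */
                    return 1;
                return f->src_entropy > 0.9;          /* highly distributed sources */
            }
            return 0;
        }

        int main(void)
        {
            struct flow_features f = { 2500.0, 64.0, 0.95 };
            printf("flow classified as %s\n", is_ddos(&f) ? "DDoS" : "normal");
            return 0;
        }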

    Design and analysis of a 3-dimensional cluster multicomputer architecture using optical interconnection for petaFLOP computing

    In this dissertation, the design and analysis of an extremely scalable distributed multicomputer architecture, using optical interconnects, that has the potential to deliver performance on the order of a petaFLOP is presented in detail. The design takes advantage of optical technologies, harnessing the features inherent in optics, to produce a 3D stack that efficiently implements a large, fully connected system of nodes forming a true 3D architecture. To adopt optics in large-scale multiprocessor cluster systems, efficient routing and scheduling techniques are needed. To this end, novel self-routing strategies for all-optical packet-switched networks and on-line scheduling methods that can result in collision-free communication and achieve real-time operation in high-speed multiprocessor systems are proposed. The system is designed to allow failed/faulty nodes to stay in place without appreciable performance degradation. The approach is to develop a dynamic communication environment that can effectively adapt and evolve with a high density of missing units or nodes. A joint CPU/bandwidth controller that maximizes resource allocation in this dynamic computing environment is introduced, with the objective of optimizing the distributed cluster architecture and preventing performance/system degradation in the presence of failed/faulty nodes. A thorough analysis, feasibility study and description of the characteristics of a 3-dimensional multicomputer system capable of achieving 100 teraFLOP performance is discussed in detail. Included in this dissertation are a throughput analysis of the routing schemes, using methods from discrete-time queuing systems, and computer simulation results for the different proposed algorithms. A prototype of the proposed 3D architecture is built and a test bed developed to obtain experimental results that further prove the feasibility of the design and validate the initial assumptions, algorithms, simulations and the optimized distributed resource allocation scheme. Finally, as a prelude to further research, an efficient data routing strategy for highly scalable distributed mobile multiprocessor networks is introduced.
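
    For illustration only, the C sketch below shows a generic dimension-order self-routing step in a 3D node grid, i.e. the kind of purely local forwarding decision a self-routing strategy makes at each node; it is not the specific routing or scheduling algorithm proposed in the dissertation, and the port and coordinate names are assumptions.

        /* Resolve the X offset first, then Y, then Z; deliver locally when
         * the packet has reached its destination coordinates. */
        enum port { PORT_LOCAL, PORT_XPLUS, PORT_XMINUS,
                    PORT_YPLUS, PORT_YMINUS, PORT_ZPLUS, PORT_ZMINUS };

        struct coord { int x, y, z; };

        enum port route_step(struct coord here, struct coord dest)
        {
            if (dest.x != here.x) return dest.x > here.x ? PORT_XPLUS : PORT_XMINUS;
            if (dest.y != here.y) return dest.y > here.y ? PORT_YPLUS : PORT_YMINUS;
            if (dest.z != here.z) return dest.z > here.z ? PORT_ZPLUS : PORT_ZMINUS;
            return PORT_LOCAL;                    /* packet has arrived */
        }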

    A Study of Packet Classification Algorithms Using the P4 Language

    A packet classifier is a mechanism in computer networks that sorts packets arriving at a network device into groups. Classifying packets at routers is essential for providing services that must distinguish and separate packets for specific processing, such as customized network services like firewalls and quality of service. Several studies have proposed packet classification algorithms, including approaches based on decision trees and heuristics that aim to improve classification performance. However, their performance evaluations have mainly been based on hardware implementations, so the algorithm design methods and data structures may not be directly applicable to software routers. In recent years, the P4 language, which is network-protocol- and target-independent, has been developed; P4 offers an expressive grammar for basic network functions so that a wide range of data planes can be programmed. With research on virtual network functions (VNF) gaining momentum, there is a need to study software implementations of packet classification using P4. In this study, we examine how existing packet classification algorithms can be implemented in P4 syntax. We discuss the programming flows available in the P4 abstract forwarding model and identify data structures suited to improving packet classification. We also evaluate the performance of software routers compiled from P4 source code using different algorithms and data structures.
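
    To make the classification problem concrete, the C sketch below performs a linear first-match search of a packet's 5-tuple against a rule table with prefix masks and port ranges; it only illustrates what a packet classifier computes and is unrelated to the decision-tree and heuristic structures, or the P4 programs, evaluated in the study. All field and structure names are illustrative.

        /* First-match 5-tuple classification over a small rule table. */
        #include <stdint.h>

        struct five_tuple {
            uint32_t src_ip, dst_ip;
            uint16_t src_port, dst_port;
            uint8_t  proto;
        };

        struct rule {
            uint32_t src_ip, src_mask, dst_ip, dst_mask;
            uint16_t sport_lo, sport_hi, dport_lo, dport_hi;
            uint8_t  proto;                      /* 0 = wildcard */
            int      action;
        };

        int classify(const struct rule *rules, int n, const struct five_tuple *p)
        {
            for (int i = 0; i < n; i++) {
                const struct rule *r = &rules[i];
                if ((p->src_ip & r->src_mask) != (r->src_ip & r->src_mask)) continue;
                if ((p->dst_ip & r->dst_mask) != (r->dst_ip & r->dst_mask)) continue;
                if (p->src_port < r->sport_lo || p->src_port > r->sport_hi) continue;
                if (p->dst_port < r->dport_lo || p->dst_port > r->dport_hi) continue;
                if (r->proto && r->proto != p->proto) continue;
                return r->action;                /* first (highest-priority) match */
            }
            return -1;                           /* default action */
        }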