11 research outputs found

    On the Exploration of FPGAs and High-Level Synthesis Capabilities on Multi-Gigabit-per-Second Networks

    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Date of defense: 24-01-2020. Traffic on computer networks has grown exponentially in recent years. Both links and communication equipment have had to adapt in order to provide the minimum quality of service required for current needs. However, in recent years several factors have prevented commercial off-the-shelf hardware from keeping pace with this growth rate; consequently, some software tools are struggling to fulfill their tasks, especially at speeds above 10 Gbit/s. For this reason, Field Programmable Gate Arrays (FPGAs) have arisen as an alternative for the most demanding tasks without the need to design an application-specific integrated circuit, thanks in part to their flexibility and in-field programmability. Nevertheless, developing for FPGAs is notoriously complex. Therefore, this thesis tackles the use of FPGAs and High-Level Synthesis (HLS) languages in the context of computer networks. We focus on the use of FPGAs both in computer network monitoring applications and in reliable data transmission at very high speed. We also intend to shed light on the use of high-level synthesis languages and to boost FPGA applicability in computer networks, so as to reduce development time and design complexity. The first part of the thesis is devoted to computer network monitoring. We take advantage of FPGA determinism to implement active monitoring probes, which consist of sending a train of packets that is later used to derive network parameters. Here, determinism is key to reducing the uncertainty of the measurements. Our experiments show that the FPGA implementations are considerably more accurate and more precise than their software counterparts.
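    The packet-train idea behind active probing can be sketched as follows (the thesis implements this in FPGA hardware, not Python, and the function name and numbers here are illustrative assumptions): a probe sends a back-to-back train of equally sized packets, the receiver timestamps each arrival, and the dispersion of the train at the receiver reveals the bottleneck capacity.

    ```python
    def estimate_capacity(arrival_times_s, packet_size_bytes):
        """Estimate bottleneck capacity (bit/s) from the arrival
        timestamps of a back-to-back packet train."""
        if len(arrival_times_s) < 2:
            raise ValueError("need at least two packets")
        # Averaging the gap over the whole train smooths timestamp jitter;
        # this is exactly where hardware determinism helps.
        total_gap = arrival_times_s[-1] - arrival_times_s[0]
        avg_gap = total_gap / (len(arrival_times_s) - 1)
        return packet_size_bytes * 8 / avg_gap

    # Example: 100 packets of 1500 bytes arriving every 1.2 microseconds
    arrivals = [i * 1.2e-6 for i in range(100)]
    print(estimate_capacity(arrivals, 1500) / 1e9)  # ~10 (Gbit/s)
    ```

    In software, timestamping jitter of even a few microseconds swamps gaps this small, which is why the deterministic FPGA timestamps reduce measurement uncertainty so markedly.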
At the same time, the FPGA implementation is scalable in terms of network speed: 1, 10 and 100 Gbit/s. In the context of passive monitoring, we leverage the FPGA architecture to implement algorithms that thin encrypted traffic and remove duplicate packets. These two algorithms are straightforward in principle, but very useful in helping traditional network analysis tools cope with higher network speeds: on the one hand, processing encrypted traffic brings little benefit; on the other hand, processing duplicate traffic negatively impacts the performance of software tools. The second part of the thesis is devoted to the TCP/IP stack. We explore the current limitations of reliable data transmission with standard software at very high speed. Nowadays the network is becoming an important bottleneck, particularly in data centers, and the deployment of 100 Gbit/s network links has already begun. Consequently, there has been increased scrutiny of how networking functionality is deployed, and a wide range of approaches are being explored to increase the efficiency of networks and tailor their functionality to the actual needs of the application at hand. FPGAs arise as a natural alternative to deal with this problem. For this reason, in this thesis we develop Limago, an FPGA-based open-source implementation of a TCP/IP stack operating at 100 Gbit/s on Xilinx FPGAs. Limago not only provides unprecedented throughput, but also achieves latency at least fifteen times lower than software implementations. Limago is a key contribution to some of the hottest topics at the moment, such as network-attached FPGAs and in-network data processing.
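    The duplicate-removal idea can be sketched in a few lines (a minimal sketch only: the thesis realizes this in FPGA logic, and the digest function and window size here are illustrative choices). Duplicates typically arise when the same packet is mirrored at several switch ports, so keeping a small window of recent packet digests is enough to drop the repeats.

    ```python
    import hashlib
    from collections import OrderedDict

    class Deduplicator:
        def __init__(self, window=1024):
            self.window = window
            self.recent = OrderedDict()  # digest -> None, insertion-ordered

        def is_duplicate(self, packet_bytes):
            digest = hashlib.sha1(packet_bytes).digest()
            if digest in self.recent:
                return True              # seen recently: drop this copy
            self.recent[digest] = None
            if len(self.recent) > self.window:
                self.recent.popitem(last=False)  # evict the oldest digest
            return False

    dedup = Deduplicator()
    print(dedup.is_duplicate(b"pkt-1"))  # False (first sighting)
    print(dedup.is_duplicate(b"pkt-1"))  # True  (mirror copy)
    ```

    The bounded window is what makes a hardware realization feasible: memory use is fixed regardless of line rate.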

    A Quality of Service framework for upstream traffic in LTE across an XG-PON backhaul

    Passive Optical Networks (PON) are a promising transport network technology due to the high network capacity, long reach and strong QoS support in the latest PON standards. Long Term Evolution (LTE) is a popular wireless technology thanks to its large data rates in the last mile. The natural integration of LTE with XG-PON, one of the latest PON standards, presents several challenges for XG-PON in satisfying the backhaul QoS requirements of aggregated upstream LTE applications. This thesis proves that a dedicated XG-PON-based backhaul can ensure the QoS treatment required by different upstream application types in LTE by means of standard-compliant Dynamic Bandwidth Allocation (DBA) mechanisms. First, the thesis presents the design and evaluation of a standard-compliant, robust and fast XG-PON simulation module developed for the state-of-the-art ns-3 network simulator. This module forms a trustworthy, large-scale simulation platform for the evaluations in the rest of the thesis and has been released for use by the scientific community. The design and implementation details of the XGIANT DBA, which provides standard-compliant QoS treatment in an isolated XG-PON network, are then presented, along with comparative evaluations against the recently published EBU DBA. These evaluations explore the queuing-delay and throughput assurances of both XGIANT and EBU for different classes of simplified (deterministic) traffic models over a range of upstream loads in XG-PON. XGIANT and EBU are then evaluated in the context of a dedicated XG-PON backhaul for LTE, with regard to the influence of standard-compliant, QoS-aware DBAs on the performance of large-scale, UDP-based applications.
These evaluations show that neither the XGIANT nor the EBU DBA can provide prioritised queuing-delay performance for three upstream application types (conversational voice, peer-to-peer video and best-effort Internet) in LTE; they also indicate the need for more dynamic and efficient QoS policies, along with an improved fairness policy, in any DBA used in a dedicated XG-PON backhaul to ensure the QoS requirements of upstream LTE applications. Finally, the thesis presents the design and implementation details of two standard-compliant DBAs, Deficit XGIANT (XGIANT-D) and Proportional XGIANT (XGIANT-P), which provide the required QoS treatment in the dedicated XG-PON backhaul for all three application types in the LTE upstream. Evaluations of XGIANT-D and XGIANT-P demonstrate that their fine-tuned QoS and fairness policies ensure prioritised, fair queuing delay and throughput efficiency for UDP- and TCP-based applications generated and aggregated under realistic conditions in the LTE upstream.
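    The grant-allocation step at the heart of a QoS-aware DBA can be illustrated as follows (a sketch under assumed names and numbers, not the XGIANT-D algorithm itself): each cycle, the OLT shares the upstream frame capacity among the ONU queues in strict priority order, voice first, then video, then best-effort data.

    ```python
    def allocate_grants(demands_bytes, frame_capacity_bytes):
        """demands_bytes: {onu: {"voice": n, "video": n, "data": n}}.
        Returns per-ONU byte grants, never exceeding frame capacity."""
        grants = {onu: {c: 0 for c in q} for onu, q in demands_bytes.items()}
        remaining = frame_capacity_bytes
        for cls in ("voice", "video", "data"):       # strict priority order
            for onu, queues in demands_bytes.items():
                give = min(queues.get(cls, 0), remaining)
                grants[onu][cls] = give
                remaining -= give
        return grants

    demand = {"onu1": {"voice": 200, "video": 800, "data": 5000},
              "onu2": {"voice": 300, "video": 400, "data": 9000}}
    print(allocate_grants(demand, 4000))
    ```

    Pure strict priority like this can starve best-effort traffic under load; the deficit and proportional policies of XGIANT-D and XGIANT-P exist precisely to temper that with fairness across ONUs and classes.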

    Hardware acceleration for power efficient deep packet inspection

    The rapid growth of the Internet has led to a massive spread of malicious attacks such as viruses and malware, making the safety of online activity a major concern. The use of Network Intrusion Detection Systems (NIDS) is an effective method to safeguard the Internet. One key procedure in NIDS is Deep Packet Inspection (DPI). DPI examines the contents of a packet and takes actions on it based on predefined rules. In this thesis, DPI is mainly discussed in the context of security applications, although it can also be used for bandwidth management and network surveillance. Because DPI inspects the whole packet payload, and because of the complexity of the inspection rules, DPI algorithms consume significant amounts of resources, including time, memory and energy. The aim of this thesis is to design hardware-accelerated methods for memory- and energy-efficient high-speed DPI. The patterns in packet payloads, especially complex ones, can be efficiently represented by regular expressions, which can be matched using Deterministic Finite Automata (DFA). DFA algorithms are fast but consume very large amounts of memory for certain kinds of regular expressions. This thesis proposes memory-efficient algorithms based on transition compression of the DFAs. Bloom filters are used to implement DPI on an FPGA for hardware acceleration, with the design of a parallel architecture. Furthermore, aiming at a balance of power and performance, an energy-efficient adaptive Bloom filter is designed with the capability of adjusting the number of active hash functions according to the current workload. In addition, a method is given for implementation on both two-stage and multi-stage platforms. Nevertheless, false positives still prevent Bloom filters from extensive utilization; a cache-based counting Bloom filter is therefore presented to eliminate false positives for fast and precise matching.
Finally, as future work, models will be built for routers and DPI in order to estimate the effect of power savings and to analyze the latency impact of adapting frequency dynamically to current traffic. In addition, a low-power DPI system will be designed with one or multiple DPI engines, and results and evaluation of the low-power DPI model and system will be produced.
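    The Bloom-filter matching primitive underlying the architecture above can be sketched briefly (the bit-array size, hash count and double-hashing trick are illustrative choices, not the thesis's hardware design): k hash functions set or test k bits in one array, giving fast membership checks with no false negatives but a tunable false-positive rate.

    ```python
    import hashlib

    class BloomFilter:
        def __init__(self, m_bits=8192, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits // 8)

        def _positions(self, item: bytes):
            # Derive k indices from two base hashes (double hashing).
            h = hashlib.sha256(item).digest()
            h1 = int.from_bytes(h[:8], "big")
            h2 = int.from_bytes(h[8:16], "big")
            return [(h1 + i * h2) % self.m for i in range(self.k)]

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item: bytes):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add(b"/etc/passwd")                   # a signature fragment
    print(bf.might_contain(b"/etc/passwd"))  # True (no false negatives)
    print(bf.might_contain(b"benign"))       # almost certainly False
    ```

    The adaptive variant in the thesis varies the number of active hash functions (k) with workload; the counting variant replaces each bit with a small counter so entries can be removed and, with the cache, false positives filtered out.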

    Secure Cloud Storage

    The rapid growth of Cloud-based services on the Internet has invited many critical security attacks. Consumers and corporations who use the Cloud to store their data face a difficult trade-off: accepting and bearing the security, reliability and privacy risks, as well as the costs, in order to reap the benefits of Cloud storage. The primary goal of this thesis is to resolve this trade-off while minimizing total costs. The thesis presents a system framework that solves this problem by using erasure codes to add redundancy and security to users' data, and by optimally choosing Cloud storage providers to minimize risks and total storage costs. A detailed comparative analysis of the security and algorithmic properties of 7 different erasure codes is presented, showing that codes with better data security come at a higher cost in computational time complexity. The codes that granted the highest configuration flexibility bested their peers, as this flexibility directly corresponded to the level of customizability for data security and storage costs. An in-depth analysis of the risks, benefits and costs of Cloud storage is presented and used to derive cost-based and security-based criteria for selecting appropriate Cloud storage providers. A brief historical introduction to Cloud Computing and security principles is also provided for those unfamiliar with the field. The results show that the framework resolves the trade-off by mitigating or eliminating the risks while preserving and enhancing the benefits of Cloud storage. It does, however, require more total storage space due to the redundancy added by the erasure codes; the provider selection criteria minimize total storage costs even with the added redundancy, and minimize risks.
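    A toy single-parity erasure code illustrates the redundancy idea (real deployments use stronger codes such as Reed-Solomon; the fixed 3+1 layout here is an illustrative assumption): data is split into three blocks plus one XOR parity block, any one lost block can be rebuilt, and the four pieces can be spread across four Cloud providers so no single provider holds, or can lose, the whole file.

    ```python
    def encode(blocks):
        """Append one XOR parity block to three equal-size data blocks."""
        parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))
        return blocks + [parity]

    def recover(pieces, missing_index):
        """Rebuild the piece at missing_index from the surviving three;
        XOR of the survivors cancels out to the missing block."""
        survivors = [p for i, p in enumerate(pieces) if i != missing_index]
        return bytes(a ^ b ^ c for a, b, c in zip(*survivors))

    data = [b"AAAA", b"BBBB", b"CCCC"]      # k = 3 equal-size blocks
    pieces = encode(data)                    # 4 pieces, 33% storage overhead
    print(recover(pieces, 1) == b"BBBB")     # provider 1 down: rebuilt
    ```

    The thesis's trade-off appears directly in this sketch: the parity piece is the extra storage paid for tolerating a provider failure, and stronger codes raise both the protection and the computational cost.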

    Science handbook

    2002 handbook for the Faculty of Science.