    Context-based security function orchestration for the network edge

    Over the last few years the number of interconnected devices has increased dramatically, generating zettabytes of traffic each year. In order to cater to the requirements of end-users, operators have deployed network services to enhance their infrastructure. Nowadays, telecommunications service providers are making use of virtualised, flexible, and cost-effective network-wide services, under what is known as Network Function Virtualisation (NFV). Future network and application requirements necessitate services to be delivered at the edge of the network, in close proximity to end-users, which has the potential to reduce end-to-end latency and minimise the utilisation of the core infrastructure while providing flexible allocation of resources. One class of functionality that NFV facilitates is the rapid deployment of network security services. However, the urgency of assuring connectivity to an ever-increasing number of devices, as well as their resource-constrained nature, has led to the neglect of security principles and best practices. These low-cost devices are often exploited for malicious purposes in targeting the network infrastructure, with recent volumetric Distributed Denial of Service (DDoS) attacks often surpassing 1 terabyte per second of network traffic. The work presented in this thesis aims to identify the unique requirements of security modules implemented as Virtual Network Functions (VNFs), and the associated challenges in providing management and orchestration of complex chains consisting of multiple VNFs. The work presented here focuses on deployment, placement, and lifecycle management of microservice-based security VNFs in resource-constrained environments using contextual information on device behaviour. Furthermore, the thesis presents a formulation of the latency-optimal placement of service chains at the network edge, provides an optimal solution using Integer Linear Programming, and an associated near-optimal heuristic that solves larger problem instances in reduced time and can be used in conjunction with context-based security paradigms. The results of this work demonstrate that lightweight security VNFs can be tailored for, and hosted on, a variety of devices, including the commodity resource-constrained systems found in edge networks. Furthermore, a context-based implementation of the management and orchestration of lightweight services enables the deployment of real-world complex security service chains tailored to the user's performance demands on the network. Finally, the results of this work show that on-path placement of service chains reduces end-to-end latency and minimises the number of service-level agreement violations, thereby enabling the secure use of latency-critical networks.
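
    As a rough illustration of the kind of Integer Linear Programming formulation the abstract refers to, the sketch below places an ordered chain of security VNFs onto nodes so as to minimise inter-VNF latency subject to node capacity. It is a minimal, assumed model built with the PuLP library: the node names, capacities, demands, and latency values are invented for illustration, and this is not the thesis's actual formulation.

    # Minimal sketch (assumed model, not the thesis's formulation) of
    # latency-optimal service chain placement as an ILP, using PuLP.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

    nodes = ["edge1", "edge2", "core"]
    cpu = {"edge1": 3, "edge2": 2, "core": 2}           # illustrative CPU units per node
    chain = ["firewall", "ids", "nat"]                  # ordered security service chain
    demand = {"firewall": 1, "ids": 2, "nat": 1}        # CPU units each VNF requires
    # Pairwise link latency (ms); 0 when co-located, illustrative values otherwise.
    lat = {(a, b): (0 if a == b else (1 if "edge" in a and "edge" in b else 5))
           for a in nodes for b in nodes}

    prob = LpProblem("chain_placement", LpMinimize)

    # x[f, n] = 1 iff VNF f is placed on node n
    x = {(f, n): LpVariable(f"x_{f}_{n}", cat=LpBinary) for f in chain for n in nodes}
    # y[i, a, b] = 1 iff chain[i] is on a and chain[i+1] is on b (linearises the product)
    y = {(i, a, b): LpVariable(f"y_{i}_{a}_{b}", cat=LpBinary)
         for i in range(len(chain) - 1) for a in nodes for b in nodes}

    # Objective: total inter-VNF latency along the chain.
    prob += lpSum(lat[a, b] * y[i, a, b] for (i, a, b) in y)

    for f in chain:                                     # each VNF placed exactly once
        prob += lpSum(x[f, n] for n in nodes) == 1
    for n in nodes:                                     # node capacity constraint
        prob += lpSum(demand[f] * x[f, n] for f in chain) <= cpu[n]
    for i in range(len(chain) - 1):                     # standard linearisation of y = x * x
        for a in nodes:
            for b in nodes:
                prob += y[i, a, b] <= x[chain[i], a]
                prob += y[i, a, b] <= x[chain[i + 1], b]
                prob += y[i, a, b] >= x[chain[i], a] + x[chain[i + 1], b] - 1

    prob.solve()
    for f in chain:
        host = next(n for n in nodes if value(x[f, n]) > 0.5)
        print(f"{f} -> {host}")

    With the toy capacities above, the chain cannot fit on a single node, so the solver splits it across the two edge nodes and keeps the inter-VNF latency to a single low-latency hop.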

    Branch Prediction For Network Processors

    Originally designed to favour flexibility over packet processing performance, the future of the programmable network processor is challenged by the need to meet increasing line rates while providing additional processing capabilities. To meet these requirements, trends within networking research have tended to focus on techniques such as offloading computation-intensive tasks to dedicated hardware logic or on increased parallelism. While parallelism retains flexibility, challenges such as load-balancing limit its scope. On the other hand, hardware offloading allows complex algorithms to be implemented at high speed but sacrifices flexibility. To this end, the work in this thesis is focused on a more fundamental aspect of a network processor, the data-plane processing engine. Performing both system modelling and analysis of packet processing functions, the goal of this thesis is to identify and extract salient information regarding the performance of multi-processor workloads. Following on from a traditional software-based analysis of programme workloads, we develop a method of modelling and analysing hardware accelerators when applied to network processors. Using this quantitative information, this thesis proposes an architecture which allows deeply pipelined micro-architectures to be implemented on the data plane while reducing the branch penalty associated with such architectures.
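
    To make the notion of reducing the branch penalty concrete, here is a minimal sketch of a classic two-bit saturating-counter branch predictor. It illustrates the general prediction mechanism only; the table organisation and the trace below are illustrative assumptions, not the architecture proposed in the thesis.

    # Generic two-bit saturating-counter branch predictor (illustrative sketch only).
    class TwoBitPredictor:
        def __init__(self, table_bits=10):
            self.mask = (1 << table_bits) - 1
            # One counter per entry: 0-1 predict not-taken, 2-3 predict taken.
            self.table = [1] * (1 << table_bits)

        def predict(self, pc):
            return self.table[pc & self.mask] >= 2

        def update(self, pc, taken):
            idx = pc & self.mask
            if taken:
                self.table[idx] = min(self.table[idx] + 1, 3)
            else:
                self.table[idx] = max(self.table[idx] - 1, 0)

    # Illustrative trace of (program counter, actual outcome) pairs from a
    # packet-processing loop whose back edge at pc=0x40 is almost always taken.
    trace = [(0x40, True)] * 9 + [(0x40, False)]
    bp, hits = TwoBitPredictor(), 0
    for pc, taken in trace:
        hits += bp.predict(pc) == taken
        bp.update(pc, taken)
    print(f"prediction accuracy: {hits / len(trace):.0%}")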

    Dynamic Traffic Driven Architectures and Algorithms for Securing Networks

    The continuous growth in the Internet's size, the amount of data traffic, and the complexity of processing this traffic gives rise to new challenges in building high-performance network devices. Such exponential growth, coupled with the increasing sophistication of attacks, is placing stringent demands on the performance of networked systems (firewalls). These challenges require new designs, architectures, and algorithms for the optimization of such systems. The current, or classical, security of the present-day Internet is "static" and "oblivious" to traffic dynamics in the network. Hence, there are tremendous efforts towards the design and development of several techniques and strategies to deal with the above shortcomings. Unfortunately, the current solutions have been successful in addressing only some aspects of security. However, as a whole, security remains a major issue. This is primarily due to the lack of adaptation and dynamics in the design of such intrusion detection and mitigation systems. This thesis focuses on the design of architectures and algorithms for the optimization of such networked systems, to aid not only adaptive and real-time "packet filtering" but also fast "content-based routing (differentiated services)" in today's data-driven networks. The approach proposed involves a unique combination of algorithmic and architectural techniques that aims to outperform all current solutions in terms of adaptiveness, speed of operation (under attack or heavily loaded conditions), and overall operational cost-effectiveness of such systems. The tools proposed in this thesis also aim to offer the flexibility to include new approaches, and provide the ability to migrate or deploy additional entities for attack detection and defense.
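
    One common form of traffic-driven, adaptive packet filtering is to reorder filter rules according to observed hit frequency, so that under the current traffic mix most packets match early and the average lookup cost drops. The sketch below illustrates that general idea only; the rules and traffic are invented for illustration, this is not the specific architecture of the thesis, and a real firewall must also preserve ordering between overlapping rules.

    # Illustrative sketch of traffic-driven rule reordering (not the thesis's design).
    from collections import namedtuple

    Rule = namedtuple("Rule", "name src_prefix dst_port action")

    rules = [
        Rule("allow-dns",  "10.0.0.",  53,  "accept"),
        Rule("allow-web",  "10.0.0.",  443, "accept"),
        Rule("block-scan", "192.168.", 23,  "drop"),
    ]
    hits = {r.name: 0 for r in rules}

    def matches(rule, pkt):
        return pkt["src"].startswith(rule.src_prefix) and pkt["dport"] == rule.dst_port

    def filter_packet(pkt):
        for rule in rules:
            if matches(rule, pkt):
                hits[rule.name] += 1
                return rule.action
        return "drop"                      # default-deny

    def adapt():
        # Traffic-driven step: move frequently matched rules to the front.
        rules.sort(key=lambda r: hits[r.name], reverse=True)

    # Illustrative traffic mix dominated by HTTPS flows.
    traffic = [{"src": "10.0.0.7", "dport": 443}] * 8 + [{"src": "10.0.0.7", "dport": 53}] * 2
    for pkt in traffic:
        filter_packet(pkt)
    adapt()
    print([r.name for r in rules])         # allow-web is now checked first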

    Parallel and Distributed Processing in the Context of Fog Computing: High Throughput Pattern Matching and Distributed Monitoring

    With the introduction of the Internet of Things (IoT), physical objects now have cyber counterparts that create and communicate data. Extracting valuable information from that data requires timely and accurate processing, which calls for more efficient, distributed approaches. In order to address this challenge, the fog computing approach has been suggested as an extension to cloud processing. Fog builds on the opportunity to distribute computation to a wider range of possible platforms: data processing can happen at high-end servers in the cloud, at intermediate nodes where the data is aggregated, as well as at the resource-constrained devices that produce the data in the first place. In this work, we focus on efficient utilization of the diverse hardware resources found in the fog and identify and address challenges in computation and communication. To this end, we target two applications that are representative examples of the processing involved across a wide spectrum of computing platforms. First, we address the need for high-throughput processing of the increasing network traffic produced by IoT networks. Specifically, we target the processing involved in security applications and develop a new, data-parallel algorithm for pattern matching at high rates. We target the vectorization capabilities found in modern, high-end architectures and show how cache locality and data parallelism can achieve up to three times higher processing throughput than the state of the art. Second, we focus on the processing involved close to the sources of data. We target the problem of continuously monitoring sensor streams, a basic building block for many IoT applications. We show how distributed and communication-efficient monitoring algorithms can fit in real IoT devices and give insights into their behavior in conjunction with the underlying network stack.
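
    A simple way to picture communication-efficient stream monitoring is a local-threshold scheme: each node reports to the coordinator only when its reading drifts beyond a slack from the last reported value, so the coordinator's view stays within a known error bound while most updates remain local. The sketch below illustrates that general idea only; the slack value and readings are invented, and the thesis's algorithms may differ.

    # Illustrative local-threshold monitoring sketch (not the thesis's exact algorithm).
    class MonitoredNode:
        def __init__(self, slack):
            self.slack = slack
            self.last_reported = None

        def observe(self, value):
            """Return the value to report, or None to stay silent."""
            if self.last_reported is None or abs(value - self.last_reported) > self.slack:
                self.last_reported = value
                return value
            return None

    # Illustrative sensor stream with small fluctuations and one real jump.
    stream = [20.0, 20.2, 20.1, 20.3, 24.8, 24.9, 25.1]
    node, sent = MonitoredNode(slack=0.5), 0
    for reading in stream:
        if node.observe(reading) is not None:
            sent += 1                       # message actually sent to the coordinator
    print(f"{sent} of {len(stream)} readings transmitted")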