
    LightBox: Full-stack Protected Stateful Middlebox at Lightning Speed

    Running off-site software middleboxes at third-party service providers has become a popular practice. However, routing large volumes of raw traffic, which may carry sensitive information, to a remote site for processing raises severe security concerns. Prior solutions often abstract away important factors pertinent to real-world deployment. In particular, they overlook the significance of metadata protection and stateful processing. Unprotected traffic metadata, such as low-level headers, packet sizes, and counts, can be exploited to learn supposedly encrypted application contents. Meanwhile, tracking the states of hundreds of thousands of flows concurrently is often indispensable in production-level middleboxes deployed in real networks. We present LightBox, the first system that can drive off-site middleboxes at near-native speed with stateful processing and the most comprehensive protection to date. Built upon commodity trusted hardware, Intel SGX, LightBox is the product of our systematic investigation of how to overcome the inherent limitations of secure enclaves using domain knowledge and customization. First, we introduce an elegant virtual network interface that allows convenient access to fully protected packets at line rate without leaving the enclave, as if from the trusted source network. Second, we provide complete flow state management for efficient stateful processing, by tailoring a set of data structures and algorithms optimized for the highly constrained enclave space. Extensive evaluations demonstrate that LightBox, with all security benefits, can achieve 10 Gbps packet I/O, and that with case studies on three stateful middleboxes, it can operate at near-native speed. Comment: Accepted at ACM CCS 201

    Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments

    In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices limits the amount of local computing resources available. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobile devices. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud. In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and that mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation networks introduce mechanisms that enable constant and reliable connectivity through seamless handovers between networks, providing the foundation for a tighter coupling between Cloud resources and mobile devices. This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. Mobile devices' constant connectivity to, and reliance on, Cloud resources can create large traffic flows between networks. Furthermore, depending on the type of application generating a traffic flow, very strict QoS may be required from the networks, as suboptimal performance may severely degrade an application's functionality. 
In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks, for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising QoS and managing network traffic. I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate the functionality and results gathered from experimentation. Finally, I present a new approach to modelling network performance that takes user mobility into account. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have for the duration of the user's movement. The proposed model is evaluated using an analytical approach.

    Leveraging virtualization technologies for resource partitioning in mixed criticality systems

    Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, such as the ARM Cortex A15, and x86 processors with Intel VT-x or AMD-V support. Hardware virtualization offers opportunities to partition physical resources, including processor cores, memory, and I/O devices, amongst guest virtual machines. Mixed criticality systems and services can then co-exist on the same platform in separate virtual machines. However, traditional virtual machine systems are too expensive because of the costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests. For example, hypervisors are needed to schedule separate VMs on physical processor cores. Additionally, traditional hypervisors have memory footprints that are often too large for many embedded computing systems. This dissertation presents the design of the Quest-V separation kernel, which partitions services of different criticality levels across separate virtual machines, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring the intervention of a hypervisor. In Quest-V, a hypervisor is not needed for normal operation, except to bootstrap the system and establish communication channels between sandboxes. This approach not only reduces the memory footprint of the most privileged protection domain but also removes it from the control path during normal system operation, thereby heightening security.

    Design and deployment of real scenarios of TCP/IP networking and IT security for software defined networks with next generation tools

    This thesis is about NSX, a Software Defined Networking tool provided by VMware for designing and deploying virtual networks. The recent growth in the market has pushed companies to invest in and use this kind of technology. This thesis explains three main NSX concepts and the basis for performing some deployments. Some use cases regarding networking and security are included in this document. The purpose of these use cases is to apply them to real scenarios, which is the main goal of the thesis. A budget for deploying these use cases is included as an estimate of how much a project like this would cost a company. Finally, some conclusions and tips for best practices are given.

    Optimization of the Migration of Virtual Machines Over a Bipartite Mesh Network Topology

    In today's society, the core network is becoming increasingly important in supporting the ever-growing number of end users and the applications they are required to run. As network technology continues to evolve, new topologies are formed to help optimize traffic and communication. One such topology is the bipartite mesh, a partial mesh that allows a two-hop distance for any source-destination pair under normal operation. Another trend that requires a good backend network is virtualization, the practice of running virtual machines on configured hosts. One of the key aspects of virtualization technology is the migration of virtual machines: moving them from one host to another via the network to increase performance or ease resource usage. Migration is a complicated procedure that has to be done quickly to avoid downtime, so seeking ways to decrease this transfer time is important. In today's environments, migration decisions consider only the hosts a virtual machine can move to and do not take the network into account. A way to optimize the migration of virtual machines, especially over a bipartite mesh network, is to take the network state into account and to minimize the congestion and traffic created on the network by the migration. This thesis explores the background and technical workings of present-day virtual machines and weighs the concept of 'cold' migration against that of 'live' migration, putting both into the perspective of the network and how exactly these migrations are accomplished. The thesis also explores the bipartite mesh network and its operation, including how it can be operated efficiently. 
Every network is subject to link failures; in this type of network, however, the number of failed links is bounded by the number of spine switches in the topology, which in turn bounds the maximum number of hops from a source to a destination, though reaching the bound on failed links does not necessarily imply that the maximum number of hops will be reached. Utilizing these bounds and the information gleaned from the virtualization layer, the primary question of how to optimize the migration of virtual machines over this bipartite mesh topology is formed and examined. The proposed solutions involve a 'network first' approach, which examines the state of the network, finds the shortest-path destination, and only then looks at the resources on the host to determine whether the destination can accept the virtual machine being transferred, and a 'hypervisor first' approach, which chooses a destination based on host resources and only then considers the network state and how far the destination is logically from the source. Both solutions have merits and drawbacks, and they are examined: the network-first approach is more complicated from a development point of view and requires more back-and-forth traffic over the network, but provides the best optimization in terms of transfer time for the migrating virtual machine, while the hypervisor-first approach does not guarantee the best optimization, operating instead on a threshold of whether the destination is within acceptable parameters. This threshold can be taken as the number of spine nodes + 1 and, as such, requires little to no computation or communication over the network, unlike the network-first approach. These solutions can be fully realized using the OpenStack cloud suite, which, as an open-source alternative to the virtual machine managers from Microsoft or VMware, can be modified to test these solutions extensively and determine which is more feasible.
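The two selection strategies contrasted above can be sketched in a few lines. This is a toy model only, not code from the thesis or from OpenStack: the host fields, the resource-fit check, and the spine-switch count are all invented for illustration, and the hop counts are assumed to be precomputed from the migration source.

```python
NUM_SPINES = 4                   # assumed spine-switch count in the bipartite mesh
HOP_THRESHOLD = NUM_SPINES + 1   # acceptance bound used by the hypervisor-first approach

def fits(host, vm):
    """Hypothetical resource check: does the host have room for the VM?"""
    return host["free_cpu"] >= vm["cpu"] and host["free_mem"] >= vm["mem"]

def network_first(hosts, vm):
    """Order candidates by network distance, then take the first with enough resources."""
    for host in sorted(hosts, key=lambda h: h["hops"]):
        if fits(host, vm):
            return host
    return None

def hypervisor_first(hosts, vm):
    """Order candidates by free resources, then accept the first within the hop bound."""
    ranked = sorted(hosts, key=lambda h: (h["free_cpu"], h["free_mem"]), reverse=True)
    for host in ranked:
        if fits(host, vm) and host["hops"] <= HOP_THRESHOLD:
            return host
    return None
```

The sketch shows the trade-off the abstract describes: network-first needs per-host distance information (extra traffic to collect) but yields the nearest viable destination, while hypervisor-first only compares a precomputed hop count against the fixed spine-based threshold.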
