43 research outputs found

    ONCache: A Cache-Based Low-Overhead Container Overlay Network

    Recent years have witnessed the widespread adoption of containers. While containers simplify and accelerate application development, existing container network technologies either incur significant overhead, which hurts performance for distributed applications, or sacrifice flexibility or compatibility, which hinders widespread deployment in production. We design and implement ONCache (Overlay Network Cache), a cache-based container overlay network, to eliminate the overhead while keeping flexibility and compatibility. We carefully analyze the difference between an overlay network and a host network, and find that an overlay network incurs extra packet processing, including encapsulation, intra-host routing, namespace traversal and packet filtering. Fortunately, the extra processing exhibits an invariance property: most packets of the same flow have the same processing results. This property motivates us to cache the extra processing results. With the proposed cache, ONCache significantly reduces the extra overhead while maintaining the same flexibility and compatibility as standard overlay networks. We implement ONCache using eBPF with only 524 lines of code, and deploy ONCache as a plugin of Antrea. With ONCache, container communication achieves performance similar to host communication. Compared to the standard overlay network, ONCache improves throughput and request-response transaction rate by 12% and 36% for TCP (20% and 34% for UDP), while significantly reducing per-packet CPU overhead. Many distributed applications also benefit from ONCache.
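The core idea in the abstract — caching the invariant per-flow results of overlay packet processing so later packets take a fast path — can be sketched in a few lines. This is an illustrative model only, not ONCache's actual eBPF code; all names and structures here are hypothetical.

```python
# Hypothetical sketch of a per-flow cache for overlay packet processing.
# First packet of a flow takes the slow path (full overlay processing);
# later packets reuse the cached results ("invariance property").
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

@dataclass
class CachedResult:
    encap_header: bytes   # precomputed outer (VXLAN-like) header
    egress_dev: str       # resolved intra-host route
    filter_verdict: bool  # packet-filter decision for the flow

class OverlayCache:
    """Caches the invariant per-flow processing results so subsequent
    packets skip encapsulation lookup, routing and filtering."""
    def __init__(self):
        self._cache = {}

    def lookup(self, key: FlowKey):
        return self._cache.get(key)

    def insert(self, key: FlowKey, result: CachedResult):
        self._cache[key] = result

def process_packet(cache: OverlayCache, key: FlowKey, slow_path):
    hit = cache.lookup(key)
    if hit is not None:
        return hit, True          # fast path: reuse cached results
    result = slow_path(key)       # full overlay processing, once per flow
    cache.insert(key, result)
    return result, False
```

In the real system the cache lives in eBPF maps in the kernel datapath; the dictionary above only models the control flow of a hit versus a miss.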

    Distributed services across the network from edge to core

    The current internet architecture is evolving from a simple carrier of bits to a platform able to provide multiple complex services running across the entire Network Service Provider (NSP) infrastructure. This calls for increased flexibility in resource management and allocation to provide dedicated, on-demand network services, leveraging a distributed infrastructure consisting of heterogeneous devices. More specifically, NSPs rely on a plethora of low-cost Customer Premise Equipment (CPE), as well as more powerful appliances at the edge of the network and in dedicated data centers. Currently, a great deal of research effort is devoted to providing this flexibility through Fog computing, Network Functions Virtualization (NFV), and data plane programmability. Fog computing, or Edge computing, extends compute and storage capabilities to the edge of the network, closer to the rapidly growing number of connected devices and applications that consume cloud services and generate massive amounts of data. A complementary technology is NFV, a network architecture concept targeting the execution of software Network Functions (NFs) in isolated Virtual Machines (VMs), potentially sharing a pool of general-purpose hosts, rather than running on dedicated hardware (i.e., appliances). Such a solution enables virtual network appliances (i.e., VMs executing network functions) to be provisioned, allocated a different amount of resources, and possibly moved across data centers in little time, which is key in ensuring that the network can keep up with the flexibility in the provisioning and deployment of virtual hosts in today's virtualized data centers. Moreover, recent advances in networking hardware have introduced new programmable network devices that can efficiently execute complex operations at line rate. As a result, NFs can be (partially or entirely) folded into the network, speeding up the execution of distributed services. The work described in this Ph.D.
thesis aims at showing how various network services can be deployed throughout the NSP infrastructure, accommodating the different hardware capabilities of various appliances, by applying and extending the above-mentioned solutions. First, we consider a data center environment and the deployment of (virtualized) NFs. In this scenario, we introduce a novel methodology for the modelling of different NFs aimed at estimating their performance on different execution platforms. Moreover, we propose to extend the traditional NFV deployment outside of the data center to leverage the entire NSP infrastructure. This can be achieved by integrating native NFs, commonly available in low-cost CPEs, with an existing NFV framework. This facilitates the provision of services that require NFs close to the end user (e.g., an IPsec terminator). On the other hand, resource-hungry virtualized NFs are run in the NSP data center, where they can take advantage of the superior computing and storage capabilities. As an application, we also present a novel technique to deploy a distributed service, specifically a web filter, that leverages both the low latency of a CPE and the computational power of a data center. We then show that the core network, today dedicated solely to packet routing, can also be exploited to provide useful services. In particular, we propose a novel method to provide distributed network services in core network devices by means of task distribution and seamless coordination among the peers involved. The aim is to transform existing network nodes (e.g., routers, switches, access points) into a highly distributed data acquisition and processing platform, which will significantly reduce the storage requirements at the Network Operations Center and the packet duplication overhead. Finally, we propose to use new programmable network devices in data center networks to provide much needed services to distributed applications.
By offloading part of the computation directly to the networking hardware, we show that it is possible to reduce both the network traffic and the overall job completion time.

    HIP based mobility for Cloudlets

    Computation offloading can be used to leverage the resources of nearby computers to ease the computational burden of mobile devices. Cloudlets are an approach where the client's tasks are executed inside a virtual machine (VM) on a nearby computing element, while the client orchestrates the deployment of the VM and the remote execution in it. Mobile devices tend to move, and while moving between networks, their address is prone to change. Should a user bring their device close to a better performing Cloudlet host, migration of the original Cloudlet VM might also be desired, but its address is then prone to change as well. Communication with Cloudlets relies on the TCP/IP networking stack, which resolves address changes by terminating connections, and this seriously impairs the usefulness of Cloudlets in the presence of mobility events. We surveyed a number of mobility management protocols, and decided to focus on the Host Identity Protocol (HIP). We ported an implementation, HIP for Linux (HIPL), to the Android operating system, and assessed its performance by benchmarking throughput and delay for connection recovery during network migration scenarios. We found that as long as the HIPL hipfw module, and especially its Local Scope Identifier (LSI) support, was not used, the implementation performed adequately in terms of throughput. The connection recovery delays were tolerable, with an average recovery time of about 8 seconds when roaming between networks. We also found that with highly optimized VM synthesis methods, the recovery time of 8 seconds alone does not make live migration favourable over synthesizing a new VM. We found HIP to be an adequate protocol to support both client mobility and server migration with Cloudlets. Our survey suggests that HIP avoids some of the limitations found in competing protocols.
We also found that the HIPL implementation could benefit from architectural changes to improve the performance of the LSI support.

    Network traffic management for the next generation Internet

    Measurement-based performance evaluation of network traffic is a fundamental prerequisite for the provisioning of managed and controlled services in short timescales, as well as for enabling the accountability of network resources. The steady introduction and deployment of the Internet Protocol Next Generation (IPNG-IPv6) promises a network address space that can accommodate any device capable of generating a digital heart-beat. Under such a ubiquitous communication environment, Internet traffic measurement becomes of particular importance, especially for the assured provisioning of differentiated levels of service quality to the different application flows. The non-identical response of flows to the different types of network-imposed performance degradation and the foreseeable expansion of networked devices raise the need for ubiquitous measurement mechanisms that can be equally applicable to different applications and transports. This thesis introduces a new measurement technique that exploits native features of IPv6 to become an integral part of the Internet's operation, and to provide intrinsic support for performance measurements at the universally-present network layer. IPv6 Extension Headers have been used to carry both the triggers that invoke the measurement activity and the instantaneous measurement indicators in-line with the payload data itself, providing a high level of confidence that the behaviour of the real user traffic flows is observed. The in-line measurements mechanism has been critically compared and contrasted to existing measurement techniques, and its design and a software-based prototype implementation have been documented. The developed system has been used to provisionally evaluate numerous performance properties of a diverse set of application flows, over different-capacity IPv6 experimental configurations. 
Through experimentation and theoretical argumentation, it has been shown that IPv6-based, in-line measurements can form the basis for accurate and low-overhead performance assessment of network traffic flows in short timescales, by being dynamically deployed where and when required in a multi-service Internet environment.
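The mechanism described above carries measurement triggers and indicators inside IPv6 Extension Headers, in-line with the payload. As a rough sketch of what packing such a trigger into a Destination Options header looks like, the snippet below builds an RFC 8200-style options header with a timestamp TLV. The option type value (0x1E) and TLV layout are illustrative assumptions, not the thesis's actual encoding.

```python
# Sketch: pack a measurement timestamp into an IPv6 Destination Options
# extension header (RFC 8200 layout). The option type 0x1E and the
# 8-byte-timestamp TLV are hypothetical choices for illustration.
import struct

def build_dest_opts(next_header: int, timestamp_us: int) -> bytes:
    # Option TLV: type (1 byte), data length (1 byte), 8-byte timestamp
    opt = struct.pack("!BBQ", 0x1E, 8, timestamp_us)
    # Header starts with Next Header and Hdr Ext Len (filled in below)
    body = bytes([next_header, 0]) + opt
    # Pad to a multiple of 8 octets using Pad1 / PadN options
    pad = (-len(body)) % 8
    if pad == 1:
        body += b"\x00"                               # Pad1
    elif pad > 1:
        body += struct.pack("!BB", 1, pad - 2) + b"\x00" * (pad - 2)  # PadN
    # Hdr Ext Len is in 8-octet units, not counting the first 8 octets
    hdr_ext_len = len(body) // 8 - 1
    return bytes([body[0], hdr_ext_len]) + body[2:]
```

Because the header travels with the real user traffic, the measured behaviour is that of the actual application flow rather than of synthetic probe packets — the key property the abstract claims for in-line measurement.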

    Scalable QoS routing in MPLS networks using mobile code

    In a continually evolving Internet, tools such as Quality of Service (QoS) routing must be used in order to accommodate user demands. However, deploying and developing QoS routing in the legacy Internet is difficult. Multiprotocol Label Switching (MPLS) facilitates the deployment of QoS routing, due to its separation of functions between the control and forwarding planes. Developing QoS routing raises scalability issues within very large networks. I propose overcoming these issues by using topology aggregation and distributed routing based on modern techniques such as active networks and mobile agents. However, topology aggregation introduces inaccuracy, which has a negative impact on QoS routing performance. To avoid such problems I propose a hierarchical routing protocol, called Macro-routing, which by using distributed route computation is able to process more detailed information and thus to use the most accurate aggregation technique, i.e. Full-Mesh. Therefore, the protocol is more likely to find the best path between source and destination, and can also find more than one available path. QoS routing, which is used for finding feasible paths that simultaneously satisfy multiple constraints, is also called multiple-constrained routing and is an NP-complete problem. The difficulty of solving such problems increases in a hierarchical context, where aggregation techniques influence the path computation process. I propose a new aggregation technique which allows the selection of multiple paths that satisfy multiple QoS constraints. This reduces the probability of a false negative, i.e., of the routing algorithm incorrectly reporting that no path satisfying the constraints exists. This aggregation technique is called Extended Full-Mesh (EFM) and is intended for use with the Macro-routing protocol. Deploying these protocols in the Internet will allow multi-constrained routing to be practically implemented on large networks.
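The multiple-constrained routing problem mentioned above — find paths whose additive metrics all stay within their bounds — can be illustrated on a toy topology. The exhaustive search below is only viable on tiny graphs; the problem's NP-completeness is precisely what motivates aggregation schemes such as EFM. The graph, metrics and function names are invented for illustration.

```python
# Illustrative multi-constrained path search: each link carries
# (delay, cost); a path is feasible only if both additive totals
# stay within the given bounds. Brute force with pruning.
def feasible_paths(graph, src, dst, bounds):
    """graph: {node: {neighbor: (delay, cost)}}; bounds: (max_delay, max_cost).
    Returns all loop-free feasible paths as (path, delay, cost) tuples."""
    results = []

    def dfs(node, path, delay, cost):
        if delay > bounds[0] or cost > bounds[1]:
            return                       # prune: a constraint is already violated
        if node == dst:
            results.append((tuple(path), delay, cost))
            return
        for nxt, (d, c) in graph.get(node, {}).items():
            if nxt not in path:          # keep the path loop-free
                dfs(nxt, path + [nxt], delay + d, cost + c)

    dfs(src, [src], 0, 0)
    return results
```

Note that returning *all* feasible paths, rather than one shortest path, mirrors the abstract's point that Macro-routing with Full-Mesh aggregation can expose multiple candidate paths and thereby reduce false negatives.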

    Security Enhanced Applications for Information Systems

    Every day, more users access services and electronically transmit information that is usually disseminated over insecure networks and processed by websites and databases which lack proper security protection mechanisms and tools. This may have an impact on both the users' trust and the reputation of the system's stakeholders. Designing and implementing security enhanced systems is therefore of vital importance. This book aims to present a number of innovative security enhanced applications. It is titled "Security Enhanced Applications for Information Systems" and includes 11 chapters. The book is a quality guide for teaching purposes as well as for young researchers, since it presents leading innovative contributions on security enhanced applications in various Information Systems. It covers cases based on standalone, network and Cloud environments.

    Adaptation of the human nervous system for self-aware secure mobile and IoT systems

    IT systems have been deployed across several domains, such as hospitals and industries, for the management of information and operations. These systems will soon be ubiquitous in every field due to the transition towards the Internet of Things (IoT). The IoT brings devices with sensory functions into IT systems through internetworking. The sensory functions of IoT devices enable them to generate and process information automatically, either without human contribution or with the least human interaction possible aside from information and operations management tasks. Security is crucial as it prevents system exploitation. However, security has typically been added after system implementation, and has rarely been considered a part of the system itself. In this dissertation, a novel solution based on a biological approach is presented to embed security as an inalienable part of the system. The proposed solution, in the form of a system prototype, is based on the functions of the human nervous system (HNS) in protecting its host from the impacts of external or internal changes. The contributions of this work are the derivation of a new system architecture from HNS functionalities, and experiments that prove the implementation feasibility and efficiency of the proposed HNS-based architecture through prototype development and evaluation. The first contribution of this work is the adaptation of human nervous system functions to propose a new architecture for IT systems security. The major organs and functions of the HNS are investigated and critical areas are identified for the adaptation process. Several individual system components with functions similar to those of the HNS are created and grouped to form individual subsystems. The relationship between these components is established in a similar way as in the HNS, resulting in a new system architecture that includes security as a core component.
The adapted HNS-based system architecture is employed in two experiments that prove its implementation capability, its enhancement of security, and its support for overall system operations. The second contribution is the implementation of the proposed HNS-based security solution in an IoT test-bed. A temperature-monitoring application with an intrusion detection system (IDS) based on the proposed HNS architecture is implemented as part of the test-bed experiment. Contiki OS is used for the implementation, and the 6LoWPAN stack is modified during the development process. The application, together with the IDS, has a brain subsystem (BrSS), a spinal cord subsystem (SCSS), and other functions similar to those of the HNS under different names. The HNS functions are shared between an edge router and resource-constrained devices (RCDs) during implementation. The experiment is evaluated in both test-bed and simulation environments. Zolertia Z1 nodes are used to form a 6LoWPAN network, and an edge router is created by combining a Pandaboard and a Z1 node for the test-bed setup. Two networks with different numbers of sensor nodes are used as simulation environments in the Cooja simulator. The third contribution of this dissertation is the implementation of the proposed HNS-based architecture on a mobile platform. In this phase, the Android operating system (OS) is selected for experimentation, and the proposed HNS-based architecture is specifically tailored for Android. A context-based dynamically reconfigurable access control system (CoDRA) is developed based on the principles of the refined HNS architecture. CoDRA is implemented through customization of the Android OS and evaluated under real-time usage conditions in test-bed environments. During the evaluation, the implemented prototype mimicked the nature of the HNS in securing the application under threat with negligible resource requirements, and solved the problems in existing approaches by embedding security within the system.
Furthermore, the results of the experiments highlighted that the HNS functions are retained after refinement for different IT application areas, especially the resource-constrained IoT, and confirmed that the proposed HNS architecture is implementable.
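The brain/spinal-cord split described in the abstract — cheap reflex checks on constrained nodes, with escalation of anomalies to a central subsystem on the edge router — can be caricatured in a few lines. This is a much-simplified, hypothetical sketch; all class names, thresholds and return values are invented for illustration and bear no relation to the dissertation's actual BrSS/SCSS implementation.

```python
# Hypothetical reflex/brain split: the node-local "spinal cord" accepts
# in-range sensor readings without involving the brain; out-of-range
# readings are escalated to the "brain" for slower, stateful analysis.
class BrainSubsystem:
    def __init__(self):
        self.alerts = []                 # state kept at the edge router

    def analyze(self, reading: float) -> str:
        self.alerts.append(reading)      # record anomaly for analysis
        return "alert"

class SpinalCordSubsystem:
    def __init__(self, brain: BrainSubsystem, low=0.0, high=40.0):
        self.brain, self.low, self.high = brain, low, high

    def handle(self, reading: float) -> str:
        if self.low <= reading <= self.high:
            return "accept"              # reflex: no brain involvement
        return self.brain.analyze(reading)  # escalate the anomaly
```

The point of the split, as in the dissertation's edge-router/RCD partitioning, is that the constrained device only pays for the cheap check on every reading, while the expensive stateful logic runs on better-provisioned hardware.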