
    Efficient data traverse paths for both I/O and computation intensive workloads

    Virtualization has achieved mainstream status in the enterprise IT industry. Despite its widespread adoption, it is known that virtualization also introduces non-trivial overhead when executing tasks on a virtual machine (VM). In particular, the combined effect of device virtualization overhead and CPU scheduling latency can cause performance degradation when computation-intensive tasks and I/O-intensive tasks are co-located on a VM. Such interference causes extra energy consumption as well. In this work, we present Hylics, a novel solution that enables efficient data traverse paths for both I/O- and computation-intensive workloads. This is achieved by deploying an in-memory file system and network service at the hypervisor level. Several important design issues are pinpointed and addressed during our prototype implementation, including efficient intermediate data sharing, network service offloading, and QoS-aware memory usage management. Based on our real deployment on KVM, Hylics can significantly improve computation and I/O performance for hybrid workloads.
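    The co-location interference described above is straightforward to reproduce. The following minimal Python sketch (not part of Hylics; workload sizes and process counts are illustrative assumptions) times an fsync-heavy write loop while a varying number of CPU-burning processes run on the same machine; on a VM, device virtualization overhead and CPU scheduling latency typically make the measured I/O time grow noticeably as CPU load is added.

```python
# A minimal interference sketch (not part of Hylics). Sizes, chunk lengths,
# and process counts are arbitrary choices for illustration.
import multiprocessing as mp
import os
import tempfile
import time


def burn_cpu(stop):
    # Keep one core busy until told to stop.
    while not stop.is_set():
        sum(i * i for i in range(10_000))


def timed_io(total_mb=64, chunk_kb=256):
    # Time an fsync-heavy write loop, a crude stand-in for an I/O-bound task.
    chunk = os.urandom(chunk_kb * 1024)
    start = time.perf_counter()
    with tempfile.NamedTemporaryFile() as f:
        for _ in range((total_mb * 1024) // chunk_kb):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
    return time.perf_counter() - start


if __name__ == "__main__":
    for n_burners in (0, 2, 4):
        stop = mp.Event()
        burners = [mp.Process(target=burn_cpu, args=(stop,)) for _ in range(n_burners)]
        for p in burners:
            p.start()
        elapsed = timed_io()
        stop.set()
        for p in burners:
            p.join()
        print(f"{n_burners} CPU hogs: {elapsed:.2f}s for 64 MB of fsync'd writes")
```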

    Programmable Smart NIC


    QoS-aware service continuity in the virtualized edge

    5G systems are envisioned to support numerous delay-sensitive applications such as the tactile Internet, mobile gaming, and augmented reality. Such applications impose new demands on service providers in terms of the quality of service (QoS) provided to end-users. Achieving these demands in mobile 5G-enabled networks represents a technical and administrative challenge. One proposed solution is to provide cloud computing capabilities at the edge of the network. In this vision, services are cloudified and encapsulated within virtual machines or containers placed on cloud hosts at the network access layer. To enable ultra-short processing times and immediate service response, fast instantiation and migration of service instances between edge nodes are mandatory to cope with the consequences of user mobility. This paper surveys the techniques proposed for service migration at the edge of the network. We focus on QoS-aware service instantiation and migration approaches, comparing the mechanisms followed and emphasizing their advantages and disadvantages. Then, we highlight the open research challenges still left unhandled.
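    As a rough illustration of what "QoS-aware" can mean in this setting, the hypothetical sketch below selects a migration target among candidate edge nodes: it rejects nodes that would break the service's latency budget and prefers the one minimizing estimated state-transfer downtime plus post-migration latency. The node attributes and cost model are assumptions for illustration, not taken from any specific surveyed scheme.

```python
# A hypothetical QoS-aware migration-target selector; node attributes and the
# cost model are illustrative assumptions, not taken from any surveyed scheme.
from dataclasses import dataclass


@dataclass
class EdgeNode:
    name: str
    rtt_to_user_ms: float    # expected user-to-node latency after migration
    bandwidth_mbps: float    # bandwidth available for transferring service state
    free_cpu: float          # fraction of CPU left on the node


def pick_target(nodes, state_size_mb, latency_budget_ms, min_cpu=0.2):
    """Return the node with the lowest downtime + latency that meets the QoS budget."""
    best_score, best_node = None, None
    for node in nodes:
        if node.free_cpu < min_cpu:
            continue  # cannot host the instance at all
        if node.rtt_to_user_ms > latency_budget_ms:
            continue  # would violate the latency budget after migration
        downtime_ms = state_size_mb * 8 / node.bandwidth_mbps * 1000  # state-transfer time
        score = downtime_ms + node.rtt_to_user_ms
        if best_score is None or score < best_score:
            best_score, best_node = score, node
    return best_node


nodes = [EdgeNode("edge-a", 4.0, 500, 0.6), EdgeNode("edge-b", 2.0, 100, 0.5)]
target = pick_target(nodes, state_size_mb=64, latency_budget_ms=10)
print(target.name if target else "no feasible target")  # -> edge-a
```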

    Tuple Space Explosion: A Denial-of-Service Attack Against a Software Packet Classifier

    Efficient and highly available packet classification is fundamental for various security primitives. In this paper, we evaluate whether the de facto Tuple Space Search (TSS) packet classification algorithm used in popular software networking stacks such as Open vSwitch is robust against low-rate denial-of-service attacks. We present the Tuple Space Explosion (TSE) attack, which exploits the fundamental space/time complexity of the TSS algorithm. TSE can degrade switch performance to 12% of its full capacity with a very low packet rate (0.7 Mbps) when the target only has simple policies such as "allow some, but drop others". Worse, an adversary with additional partial knowledge of these policies can virtually bring down the target with the same low attack rate. Interestingly, TSE does not generate any specific traffic patterns but only requires arbitrary headers and payloads, which makes it particularly hard to detect. Due to the fundamental complexity characteristics of TSS, unfortunately, there seems to be no complete mitigation for the problem. As a long-term solution, we suggest the use of other algorithms (e.g., HaRP) that are not vulnerable to the TSE attack. As a short-term countermeasure, we propose MFCGuard, which carefully manages the tuple space and keeps packet classification fast.
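    To make the complexity argument concrete, here is a minimal Python sketch of Tuple Space Search (an illustration of the algorithmic idea, not Open vSwitch code; rule priorities are omitted): rules are grouped by their wildcard mask "tuple", each group is a hash table keyed by the masked header fields, and a single classification probes every tuple. Because the per-packet cost is linear in the number of distinct tuples, anything that drives the classifier toward many mask combinations inflates the work per packet, which is the effect TSE exploits.

```python
# A toy Tuple Space Search classifier (rule priorities omitted for brevity);
# this illustrates the algorithmic idea only and is not Open vSwitch code.
from collections import defaultdict


class TupleSpaceClassifier:
    def __init__(self):
        # (src_mask, dst_mask) "tuple" -> hash table of masked keys -> action
        self.tuples = defaultdict(dict)

    def add_rule(self, src_mask, dst_mask, src, dst, action):
        key = (src & src_mask, dst & dst_mask)
        self.tuples[(src_mask, dst_mask)][key] = action

    def classify(self, src, dst):
        # One hash probe per tuple: per-packet cost is linear in len(self.tuples),
        # which is exactly the dimension the TSE attack inflates.
        for (src_mask, dst_mask), table in self.tuples.items():
            action = table.get((src & src_mask, dst & dst_mask))
            if action is not None:
                return action
        return "drop"  # default action


clf = TupleSpaceClassifier()
clf.add_rule(0xFFFFFF00, 0x00000000, 0x0A000000, 0x00000000, "allow")  # 10.0.0.0/24 -> any
print(clf.classify(0x0A000001, 0x08080808))  # -> allow, after probing every tuple
```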

    Fog Computing

    Everything that is not a computer, in the traditional sense, is being connected to the Internet. These devices are also referred to as the Internet of Things (IoT), and they are putting pressure on the current network infrastructure. Not all devices are intensive data producers, and some of them can be used beyond their original intent by sharing their computational resources. The combination of these two factors can be exploited either to derive insight from the data closer to where it originates or to extend into new services by making computational resources available, though not exclusively, at the edge of the network. Fog computing is a new computational paradigm that gives those devices a form of cloud at a closer distance, to which IoT and other devices with connectivity capabilities can offload computation. In this dissertation, we explore the fog computing paradigm and compare it with related paradigms, namely cloud and edge computing. We then propose a novel architecture that can be used to form, or be part of, this new paradigm. The implementation was tested on two types of applications: the first had the main objective of demonstrating the correctness of the implementation, while the second aimed to validate the characteristics of fog computing.

    Enhancing HPC on Virtual Systems in Clouds through Optimizing Virtual Overlay Networks

    Virtual Ethernet overlay provides a powerful model for realizing virtual distributed and parallel computing systems with strong isolation, portability, and recoverability properties. However, in extremely high throughput and low latency networks, such overlays can suffer from bandwidth and latency limitations, which is of particular concern in HPC environments. Through a careful and quantitative analysis, I identify three core issues limiting performance: delayed and excessive virtual interrupt delivery into guests, copies between host and guest data buffers during encapsulation, and the semantic gap between virtual Ethernet features and underlying physical network features. I propose three novel optimizations in response: optimistic timer-free virtual interrupt injection, zero-copy cut-through data forwarding, and virtual TCP offload. These optimizations improve the latency and bandwidth of the overlay network on 10 Gbps Ethernet and InfiniBand interconnects, resulting in near-native performance for a wide range of microbenchmarks and MPI application benchmarks.
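    For context, the sketch below (purely illustrative and unrelated to the thesis code; the overlay header layout, UDP port, and remote address are made up) shows the per-packet encapsulation step of a simple Ethernet-over-UDP overlay. The copy of the guest frame into a new host-side buffer and the extra trip through the host network stack are the kinds of costs that zero-copy cut-through forwarding and virtual TCP offload aim to remove from the data path.

```python
# A toy Ethernet-over-UDP encapsulation path; the overlay header layout, the
# UDP port, and the remote address are made-up values for illustration only.
import socket
import struct

OVERLAY_MAGIC = 0x4F56   # hypothetical 2-byte overlay identifier
OVERLAY_VNI = 42         # hypothetical virtual network identifier


def encapsulate(eth_frame: bytes) -> bytes:
    # Builds a new host-side buffer: overlay header + a copy of the guest's frame.
    header = struct.pack("!HHI", OVERLAY_MAGIC, len(eth_frame), OVERLAY_VNI)
    return header + eth_frame


def send_overlay(eth_frame: bytes, remote=("192.0.2.10", 4789)):
    # The encapsulated frame then takes a second trip through the host network stack.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(encapsulate(eth_frame), remote)


send_overlay(b"\x00" * 64)  # a fake 64-byte guest frame
```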

    Monitoring 5G microservices with the Linux kernel

    The software industry is adopting a scalable microservice architecture at an increasing pace. With the advent of 5G, this introduces major changes to the architectures of telecommunication systems as well. Telecommunications software is moving towards virtualized solutions in the form of virtual machines and, more recently, containers. New monitoring solutions have emerged to efficiently monitor microservices. These tools, however, cannot provide as detailed a view of the internal functions of the software as the tools provided by an operating system. Unfortunately, operating-system-level tracing tools are decreasingly available to developers and system administrators, because the virtualized cloud environment that serves as the base for microservices abstracts away access to the runtime environment of the services. This thesis researches the viability of using Linux kernel tooling in microservice monitoring. The viability is explored with a proof-of-concept container providing access to some of the Linux kernel's network monitoring features. The main focus is evaluating the performance overhead caused by the monitor. It was found that kernel tracing tools have great potential for providing low-overhead tracing data from microservices. However, the low overheads achieved in the networking context could not be reproduced reliably: in the benchmarks, the overhead of tracing increased rapidly as a function of the number of processors used. While the results cannot be generalized outside the networking context, the inconsistency in overhead makes Linux kernel monitoring tools a less than ideal solution for a containerized microservice.
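    As an example of the kind of kernel-level network monitoring evaluated here, the sketch below uses the bcc Python bindings to attach a kprobe to tcp_sendmsg and count calls per process. It assumes bcc and kernel headers are installed and that the container runs with sufficient privileges (e.g., CAP_SYS_ADMIN or CAP_BPF); it is a generic illustration, not the thesis prototype.

```python
# A generic bcc example (not the thesis prototype): count tcp_sendmsg() calls
# per process via a kprobe. Requires bcc, kernel headers, and a privileged
# container -- the deployment details are assumptions.
from time import sleep

from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(send_calls, u32, u64);

int on_tcp_sendmsg(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    send_calls.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="tcp_sendmsg", fn_name="on_tcp_sendmsg")
print("Counting tcp_sendmsg() per PID; Ctrl-C to stop")

try:
    while True:
        sleep(5)
        counts = b["send_calls"]
        for pid, count in sorted(counts.items(), key=lambda kv: -kv[1].value):
            print(f"pid={pid.value:<8} tcp_sendmsg calls={count.value}")
        counts.clear()
except KeyboardInterrupt:
    pass
```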