394 research outputs found

    An adaptive admission control and load balancing algorithm for a QoS-aware Web system

    The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing of Web traffic. To set the context of this work, several reviews introduce the reader to the background concepts of Web load balancing, admission control and the Internet traffic characteristics that may degrade the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system delivers lower performance than desired because of server congestion; this is achieved through forecasting calculations. The increased computational cost of the algorithm introduces some overhead, which is why an adaptive time-slot scheduling is designed that sets the execution times of the algorithm depending on the burstiness of the arriving traffic. The proposed predictive scheduling algorithm therefore includes an adaptive overhead control. Once the scheduling of the algorithm is defined, the admission control module is designed around throughput predictions: the results of several throughput predictors are compared and one of them is selected for inclusion in the algorithm. The utilisation level that the Web servers will have in the near future is also forecast and reserved for each service according to its Service Level Agreement (SLA). The load balancing strategy is based on a classical policy, so a comparison of several classical load balancing policies is included to determine which of them best fits the algorithm. A simulation model has been designed to obtain the results presented in this thesis.
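
    The following Python sketch is only an illustration of the kind of decision such a module could make: it combines a simple throughput/utilisation forecast with per-service SLA reservations to accept or reject a request. The predictor, names and thresholds are hypothetical assumptions, not the thesis's actual algorithm.

    # Hypothetical sketch: admit a request only if the forecast utilisation,
    # the capacity reserved for each service's SLA and the request's own cost
    # stay under a congestion threshold. Predictor and numbers are illustrative.

    from dataclasses import dataclass

    @dataclass
    class ServiceSLA:
        name: str
        reserved_share: float  # fraction of server capacity reserved for this service

    def predict_utilisation(history: list[float]) -> float:
        """Toy utilisation predictor: exponentially weighted moving average."""
        alpha = 0.5
        forecast = history[0]
        for sample in history[1:]:
            forecast = alpha * sample + (1 - alpha) * forecast
        return forecast

    def admit(request_cost: float, history: list[float],
              slas: list[ServiceSLA], threshold: float = 0.9) -> bool:
        """Accept the request only if the predicted load plus SLA reservations fit."""
        reserved = sum(s.reserved_share for s in slas)
        return predict_utilisation(history) + reserved + request_cost <= threshold

    # Example: two services with reserved capacity and rising utilisation samples.
    slas = [ServiceSLA("gold", 0.2), ServiceSLA("silver", 0.1)]
    print(admit(0.05, [0.40, 0.45, 0.50], slas))  # True: forecast load still fits
    print(admit(0.05, [0.70, 0.80, 0.85], slas))  # False: servers likely to congest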

    FLICK: developing and running application-specific network services

    Data centre networks are increasingly programmable, with application-specific network services proliferating, from custom load-balancers to middleboxes providing caching and aggregation. Developers must currently implement these services using traditional low-level APIs, which neither support natural operations on application data nor provide efficient performance isolation. We describe FLICK, a framework for the programming and execution of application-specific network services on multi-core CPUs. Developers write network services in the FLICK language, which offers high-level processing constructs and application-relevant data types. FLICK programs are translated automatically to efficient, parallel task graphs, implemented in C++ on top of a user-space TCP stack. Task graphs have bounded resource usage at runtime, which means that the graphs of multiple services can execute concurrently, without interference, using cooperative scheduling. We evaluate FLICK with several services (an HTTP load-balancer, a Memcached router and a Hadoop data aggregator), showing that it achieves good performance while reducing development effort.
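
    The FLICK language itself is not shown in this abstract; purely as a conceptual illustration, the Python sketch below mimics the runtime idea of independent service task graphs whose stages communicate through bounded queues and share a core under cooperative scheduling. All names and structures are invented for this sketch and are not the FLICK API.

    # Conceptual sketch (not the FLICK API): each "service" is a tiny task graph
    # whose stages communicate through bounded queues, and a cooperative scheduler
    # round-robins runnable stages so no service can monopolise the core.

    from collections import deque

    class BoundedQueue:
        def __init__(self, capacity: int):
            self.items, self.capacity = deque(), capacity
        def can_push(self): return len(self.items) < self.capacity
        def push(self, item): self.items.append(item)
        def pop(self): return self.items.popleft() if self.items else None

    def stage(name, inq, outq, transform):
        """One task-graph stage: pull an item, transform it, push downstream, yield."""
        while True:
            item = inq.pop()
            if item is not None and (outq is None or outq.can_push()):
                result = transform(item)
                if outq is not None:
                    outq.push(result)
                print(f"{name}: {item!r} -> {result!r}")
            yield  # cooperative yield back to the scheduler

    # Two independent services sharing one scheduler without interfering.
    http_in, http_out = BoundedQueue(4), BoundedQueue(4)
    memcache_in = BoundedQueue(4)
    http_in.push("GET /a"); http_in.push("GET /b"); memcache_in.push("get key1")

    tasks = [
        stage("http-balance", http_in, http_out, lambda r: ("backend-1", r)),
        stage("memcached-route", memcache_in, None, lambda r: ("shard-3", r)),
    ]
    for _ in range(4):      # a few scheduler rounds
        for t in tasks:
            next(t)         # run each stage for one bounded step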

    A cross-stack, network-centric architectural design for next-generation datacenters

    This thesis proposes a full-stack, cross-layer datacenter architecture based on in-network computing and near-memory processing paradigms. The proposed datacenter architecture is built atop two principles: (1) utilizing commodity, off-the-shelf hardware (i.e., processors, DRAM, and network devices) with minimal changes to their architecture, and (2) providing a standard interface to programmers for using the novel hardware. More specifically, the proposed datacenter architecture enables a smart network adapter to collectively compress/decompress data exchanged between distributed DNN training nodes and to assist the operating system in performing aggressive processor power management. It also deploys specialized memory modules in the servers, capable of performing general-purpose computation and providing network connectivity. This thesis unlocks the potential of hardware and operating system co-design in architecting application-transparent, near-data processing hardware that improves datacenter performance, energy efficiency, and scalability. We evaluate the proposed datacenter architecture using a combination of full-system simulation, FPGA prototyping, and real-system experiments.
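
    As a purely illustrative sketch of the first idea, transparent compression of gradient traffic between training nodes, the following Python code wraps a gradient buffer in a compressed frame before it goes on the wire and restores it on the receiving side. The frame format, names and compression choice are assumptions made for this sketch, not the thesis's hardware design.

    # Illustrative only: a compressing send/receive wrapper standing in for the
    # smart NIC's transparent compression of gradient traffic. The frame format
    # (4-byte length prefix + zlib payload) is an assumption for this sketch.

    import struct
    import zlib
    from array import array

    def send_gradients(grads: array) -> bytes:
        """Compress a float gradient buffer and prepend its original byte length."""
        raw = grads.tobytes()
        payload = zlib.compress(raw, 1)  # compression level 1: cheap and fast
        return struct.pack("!I", len(raw)) + payload

    def recv_gradients(frame: bytes) -> array:
        """Reverse operation on the receiving node."""
        (raw_len,) = struct.unpack("!I", frame[:4])
        raw = zlib.decompress(frame[4:])
        assert len(raw) == raw_len
        grads = array("f")
        grads.frombytes(raw)
        return grads

    # Example: many near-zero gradients compress well on the wire.
    grads = array("f", [0.0] * 1000 + [0.5, -0.25])
    frame = send_gradients(grads)
    print(len(grads) * 4, "bytes ->", len(frame), "bytes on the wire")
    print(recv_gradients(frame)[-2:])  # array('f', [0.5, -0.25])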

    Federation of Cyber Ranges

    An essential element of cyber defence capability is highly skilled and well-trained personnel. The awareness and education of technicians, operators and decision makers can be enhanced through multinational exercises. It is unthinkable to use an operational production environment to train attack and defence of an IT system; to simulate a lifelike environment, a cyber range can be used. There are many emerging and operational cyber ranges in the EU and NATO. To benefit more from the available resources, a federated cyber range environment for multinational cyber defence exercises can be built upon the current facilities. Federation can be achieved only after agreements between nations and an understanding of the technologies and limitations of the different national ranges. This study compares two cyber ranges and looks into possibilities for pooling and sharing national facilities and for establishing a logical federation of interconnected cyber ranges. The thesis gives recommendations on information flow, proof of concept, guidelines and prerequisites to achieve an initial interconnection with pooling and sharing capabilities. Different technologies and operational aspects are discussed and their impact is analysed. To better understand the concepts and assumptions of federation, a test environment connecting the Estonian and Czech national cyber ranges was created, and different aspects of network parameters, virtual machine manipulation, virtualization technologies and open-source administration tools were tested. The tests produced some surprising and positive outcomes, making a logical federation technologically easier and more achievable than expected. The thesis is in English and contains 42 pages of text, 7 chapters, 12 figures and 4 tables. Keywords: Cyber Range, NATO, federation, virtualization, multinational cyber defence exercise.

    Replication and Caching Systems for the support of VMs stored in File Systems with Snapshots

    Recently, in a relatively short timeframe, there have been fundamental changes in the way computing power is used. Virtualisation technology has changed both the model of a data centre’s infrastructure and the way physical computers are managed. This shift is a consequence of today’s fast deployment rate of Virtual Machines (VMs) in high-consolidation environments with minimal need for human management. New approaches to virtualisation techniques are being developed at a surprisingly fast rate, leading to an exciting and vibrant new ecosystem of platforms and services. The big industry players are tackling problems such as Desktop Virtualisation with moderate success, but they ignore the computation power already present in their clients’ infrastructures and instead opt for costly solutions based on powerful new machines. There is still room for improvement in Virtual Desktop Infrastructure (VDI) and for new architectures that take advantage of the computation power available at the user’s desk with minimal management effort; Infrastructure for Client-Based Desktops (iCBD) is one such project. This thesis focuses on the development of mechanisms for the replication and caching of VM images stored in a local filesystem, albeit one with the ability to perform snapshots. There are several challenges to address: the proposed architecture must be entirely distributed and fully integrated with the existing client-based VDI platform, and it must efficiently cope with very large, read-only files (some of them snapshots) and handle their multiple versions. This work also explores the challenges and advantages of deploying such a system on a high-throughput network, with both high availability and scalability, while efficiently supporting a large number of users (and their workstations).
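
    As a minimal, hypothetical sketch of the caching side of this idea: because image snapshots are read-only, a cache entry keyed by (image, snapshot version, chunk) never needs invalidation, and eviction alone manages capacity. The names, chunk size and backing-store call below are illustrative, not the iCBD implementation.

    # Hypothetical sketch: a read-through LRU cache for chunks of read-only VM
    # image snapshots. Because snapshots never change, entries are never
    # invalidated; only eviction manages capacity. Names and sizes are invented.

    from collections import OrderedDict

    CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks of a large read-only image

    class SnapshotChunkCache:
        def __init__(self, capacity_chunks: int, fetch_from_store):
            self.capacity = capacity_chunks
            self.fetch = fetch_from_store   # e.g. a read from the replicated file system
            self.lru = OrderedDict()        # (image, snapshot, chunk_idx) -> bytes

        def read(self, image: str, snapshot: int, offset: int) -> bytes:
            key = (image, snapshot, offset // CHUNK_SIZE)
            if key in self.lru:
                self.lru.move_to_end(key)   # hit: refresh LRU position
                return self.lru[key]
            data = self.fetch(*key)         # miss: read through to the backing store
            self.lru[key] = data
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict the least recently used chunk
            return data

    # Toy backing store standing in for the replicated snapshot file system.
    def fake_store(image, snapshot, chunk_idx):
        return f"{image}@{snapshot}:{chunk_idx}".encode()

    cache = SnapshotChunkCache(capacity_chunks=2, fetch_from_store=fake_store)
    print(cache.read("desktop.img", 3, 0))    # miss: fetched from the store
    print(cache.read("desktop.img", 3, 100))  # hit: same 4 MiB chunk as above
    print(cache.read("desktop.img", 4, 0))    # different snapshot version: new entry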

    Workload characterization and synthesis for data center optimization
