
    Ohjelmistokonttien hyödyntäminen pilvipohjaisen mobiiliverkon sovelluksissa (Utilizing software containers in cloud-based mobile network applications)

    Mobile service providers and equipment manufacturers have moved towards virtualized network functions because mobile data traffic has grown rapidly over the past few years. Virtual machines offer high flexibility and easier management, and they enable flexible scaling, which makes it easier to respond to traffic patterns that vary during the day. However, traditional virtual machines carry overhead and show reduced performance in most operations. One high-performing alternative to a virtual machine is a Linux container. Linux containers do not include an additional operating system or any unnecessary services; they are isolated user spaces that share the host computer's kernel. As a result, processes inside a container perform almost as well as if they were running directly on the host, and containers start up much faster than virtual machines. This thesis studies whether Linux containers are suitable for telco applications. The research is conducted via a proof of concept in which parts of an existing telco application are moved into containers. First, container technology and related tools are discussed; the benefits and requirements of Linux containers are then studied based on the proof of concept. The thesis found that containers are suitable for running small parts of the application: for example, software updates and scaling are much more efficient with containers than with virtual machines. However, isolation is weaker in containers than in virtual machines, and at the moment they are not suitable for applications or environments where strict isolation is a necessity.
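    As a rough illustration of the container startup advantage the abstract mentions, the sketch below uses the Docker SDK for Python to run a short-lived Linux container and time how quickly it comes up. The image, command, and timing harness are illustrative assumptions, not material from the thesis.

```python
# Minimal sketch: measure the startup latency of a short-lived Linux container.
# Assumes the Docker SDK for Python ("pip install docker") and a local Docker daemon;
# the alpine image and the echo command are placeholders, not from the thesis.
import time

import docker


def container_startup_seconds(image: str = "alpine", command: str = "echo ready") -> float:
    client = docker.from_env()          # connect to the local Docker daemon
    start = time.perf_counter()
    # Run a container to completion; for a small image this takes well under a second,
    # whereas booting a full virtual machine typically takes tens of seconds.
    client.containers.run(image, command, remove=True)
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"container started and exited in {container_startup_seconds():.2f} s")
```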

    Green Cloud - Load Balancing, Load Consolidation using VM Migration

    Cloud computing is a computing paradigm that has seen massive demand from clients in recent years. To meet this demand, many cloud data centers have been constructed since 2008, when Amazon launched its cloud service. Although cloud computing has improved in performance and energy efficiency, the rapid growth of data centers means they still consume a tremendous amount of energy. To increase their annual income, cloud providers have started to consider green cloud concepts, which aim to optimize CPU usage while guaranteeing quality of service. Many cloud providers are paying more attention to load balancing and load consolidation, two significant components of a cloud data center. Load balancing is a vital part of managing incoming demand and improving the cloud system's performance, and live virtual machine migration is the technique used to implement the dynamic load-balancing algorithm. To optimize the cloud data center, three issues are considered. First, how does the cloud cluster distribute virtual machine (VM) requests from clients across physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: a PM whose CPU usage rises rapidly under a sudden massive workload and requires immediate VM migration, and resource expansion in response to a large volume of VM requests? This chapter provides an approach to these issues, together with an implementation and results; the results indicate that the performance of the cloud cluster improved significantly. Load consolidation is the reverse of load balancing: it aims to provide just enough cloud servers to handle the client requests. Building on live VM migration, the cloud data center can consolidate itself without interrupting the cloud service, and superfluous PMs are switched to power-saving mode to reduce energy consumption. The chapter also provides a load consolidation solution, including an implementation and a simulation of cloud servers.
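    A minimal sketch of the kind of capacity-aware placement the chapter discusses: each incoming VM request is assigned to the PM whose projected CPU utilization ratio stays lowest, so PMs with different capacities carry roughly equal relative load. The greedy rule and the data structures are illustrative assumptions, not the chapter's exact algorithm.

```python
# Capacity-aware VM placement sketch: pick the PM with the lowest projected utilization
# ratio so heterogeneous PMs end up with roughly equal relative CPU load.
from dataclasses import dataclass, field


@dataclass
class PhysicalMachine:
    name: str
    cpu_capacity: float            # total CPU units the PM offers
    allocated: float = 0.0         # CPU units already promised to VMs
    vms: list = field(default_factory=list)

    def utilization_after(self, demand: float) -> float:
        return (self.allocated + demand) / self.cpu_capacity


def place_vm(pms: list[PhysicalMachine], vm_name: str, cpu_demand: float) -> PhysicalMachine:
    # Consider only PMs that can still fit the request.
    candidates = [pm for pm in pms if pm.allocated + cpu_demand <= pm.cpu_capacity]
    if not candidates:
        raise RuntimeError("no PM has enough spare capacity; cluster expansion needed")
    # Greedy choice: the PM whose relative utilization stays lowest after placement.
    best = min(candidates, key=lambda pm: pm.utilization_after(cpu_demand))
    best.allocated += cpu_demand
    best.vms.append(vm_name)
    return best


if __name__ == "__main__":
    cluster = [PhysicalMachine("pm-small", 8.0), PhysicalMachine("pm-large", 32.0)]
    for i, demand in enumerate([2.0, 4.0, 1.0, 8.0]):
        pm = place_vm(cluster, f"vm-{i}", demand)
        print(f"vm-{i} ({demand} CPU) -> {pm.name}, utilization {pm.allocated / pm.cpu_capacity:.0%}")
```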

    Live-Migration in Cloud Computing Environment

    Global IP traffic has increased fivefold over the past five years and is expected to increase threefold over the next five; overall IP traffic was forecast to grow nearly 3.9-fold between 2013 and 2018 in compound annual terms. Service providers are experiencing this exponential growth of IP traffic, driven by the rapidly increasing number of devices and users connected to the Internet together with their demands for various resources and network services such as multimedia content distribution, security, and mobility. As a result, they find it difficult to introduce new revenue-generating services and to optimize and adapt their expensive infrastructures, data centers, wide-area networks, and enterprise networks (COMpuTIN, 2015). These networks continue to have serious known problems, concerning agility, manageability, mobility, and time-to-application, that have not yet been successfully addressed. Thus, novel Network Function Virtualization (NFV) models and Software-Defined Networking (SDN) technologies have been proposed to address non-optimal capital and operational expenditures and the networks' limitations (Lopez, 2014; Hakiri and Berthou, 2015).
    To solve these issues, the European Telecommunications Standards Institute (ETSI) and other standards organizations are proposing new network architecture approaches. According to ETSI, Network Functions Virtualization is a powerful emerging technique with widespread applicability, aiming to transform the way network operators design networks by evolving standard IT virtualization technology to consolidate many network equipment types: high-volume servers, routers, switches, and storage (Xilouris et al., 2014). In this thesis, current SDN and NFV solutions were used to build a use case that addresses the growth of network traffic beyond its maximum capacity. To develop and evaluate the solution, the OpenStack cloud computing platform was installed in order to deploy, manage, and test a live-migration use case.
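    For concreteness, the sketch below shows how a live migration of a running instance could be triggered programmatically against an OpenStack cloud like the one deployed in the thesis. It assumes the openstacksdk compute API's live_migrate_server call; the cloud name, instance name, and timeout are placeholders, not details from the work itself.

```python
# Sketch: trigger a live migration of a running instance with the OpenStack SDK.
# Assumes a cloud entry named "mycloud" in clouds.yaml and a compute service that
# supports live migration; server and host names are placeholders.
import openstack


def live_migrate(server_name: str, target_host: str | None = None) -> None:
    conn = openstack.connect(cloud="mycloud")          # credentials come from clouds.yaml
    server = conn.compute.find_server(server_name, ignore_missing=False)
    # host=None lets the scheduler choose a destination; block_migration=True avoids
    # requiring shared storage between the source and destination hosts.
    conn.compute.live_migrate_server(server, host=target_host, block_migration=True)
    # Wait until the instance reports ACTIVE again on its new host.
    conn.compute.wait_for_server(server, status="ACTIVE", wait=600)
    print(f"{server_name} migrated without interrupting the service")


if __name__ == "__main__":
    live_migrate("demo-instance")
```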

    Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation

    Cloud computing is a computing paradigm that offers scalable storage and compute resources to users on demand over the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which leads to high operating costs and carbon emissions. One of the most common and effective methods for reducing energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode. At the same time, maintaining the desired level of Quality-of-Service (QoS) between data centers and their users is critical for satisfying users' performance expectations. The main challenge is therefore to minimize data center energy consumption while maintaining the required QoS. This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent QoS constraints. These approaches fall into three main categories: heuristic, meta-heuristic, and machine learning. The first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect overloaded servers from historical utilization data and then migrates some VMs away from the overloaded servers to avoid further performance degradation; it also consolidates VMs onto a smaller number of servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL), which suits highly adaptive and autonomous management in dynamic environments. RL is used to solve two main sub-problems in VM consolidation: server power-mode detection (sleep or active) and server status detection (overloaded or not overloaded). The fourth contribution is an online optimization meta-heuristic algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it is easy to parallelize, produces solutions close to the optimum, and has polynomial worst-case time complexity. The simulation results show that ACS-PO provides substantial improvement over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation. The fifth contribution is a Hierarchical VM Management (HiVM) architecture based on the three-tier topology commonly used in data centers; HiVM can scale across many thousands of servers while remaining energy efficient. The sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm, which avoids SLA violations and needless migrations by taking the current and predicted future resource requirements into account during VM allocation, consolidation, and placement. Finally, the seventh contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture partially inspired by HiVM, and it provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server.
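    The sketch below illustrates, under stated assumptions, two building blocks named in the abstract: predicting a server's next CPU utilization with a linear regression over its recent history to flag overload, and a Best-Fit-Decreasing style consolidation pass that packs VMs onto as few active servers as possible. The window size, the 80% threshold, and the data layout are illustrative choices, not the thesis's exact parameters.

```python
# Sketch of regression-based overload detection plus Best-Fit-Decreasing consolidation.
import numpy as np


def predicted_overloaded(history: list[float], threshold: float = 0.8) -> bool:
    """Fit a line to recent utilization samples and extrapolate one step ahead."""
    x = np.arange(len(history), dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(history, dtype=float), deg=1)
    next_util = slope * len(history) + intercept
    return next_util > threshold


def consolidate(vm_demands: dict[str, float], host_capacity: float = 1.0) -> list[list[str]]:
    """Best Fit Decreasing: place each VM on the active host with the least leftover room."""
    hosts: list[dict] = []                     # each host: {"free": float, "vms": [names]}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
        fitting = [h for h in hosts if h["free"] >= demand]
        if fitting:
            target = min(fitting, key=lambda h: h["free"] - demand)
        else:
            target = {"free": host_capacity, "vms": []}   # power on a new host
            hosts.append(target)
        target["free"] -= demand
        target["vms"].append(vm)
    return [h["vms"] for h in hosts]


if __name__ == "__main__":
    print(predicted_overloaded([0.55, 0.62, 0.70, 0.78]))          # rising trend -> True
    print(consolidate({"vm-a": 0.5, "vm-b": 0.4, "vm-c": 0.3, "vm-d": 0.2}))
```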