
    Analysis and implementation of load balancers in real-time bidding

    This report examines how best to implement the balancer module, a software component that spreads the traffic a web platform receives over multiple back-end servers. The discussion centres on which load-balancing algorithm and tool best suits a high-throughput system so that no compute node becomes overloaded, given that many open-source load balancers are available in a great variety of forms, implementations and feature sets; the focus is on the needs of a Demand Side Platform, where performance comes first and the internals of the platform change constantly (for example the number of servers and their addresses). The research follows best practices from software engineering and research methodology, bringing together the contributions gathered during my Double Degree between Barcelona and Torino. First, background on the topic is provided, with an overview of the real-time bidding (RTB) world and the main concepts such a system relies on, and an insight into the internals of the balancer component in terms of proxy models and load-balancing strategies. Second, a preliminary survey of the main software solutions is conducted in order to filter out those that do not match the requirements set by a professional technology company; the documentation supplied with each balancer is analysed to fill in a software evaluation matrix that highlights the features offered by each candidate and discards unsuitable solutions. Then, a testing environment is built for every solution still under evaluation in order to verify that each component actually delivers its declared features. The same environment is also used to determine which product performs best overall, a requirement considered crucial for a low-latency, high-throughput platform; the goal of this step is to select a winner of the software selection process by stressing the candidates both in total incoming connections and in requests per second. Finally, the chosen candidate is deployed inside the platform environment: it is installed and configured on the Infrastructure as a Service that hosts the Demand Side Platform, mapping the agents described later in the discussion onto the component's actual configuration file. The ultimate goal is to observe the effect of this analysis and of the resulting implementation on the production environment metrics, improving the quality of service of the back end by reducing average server-side response times and showing a possible decrease in infrastructure costs.
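
    As a rough illustration of the stress-testing step described in this abstract, the sketch below measures achieved throughput and response-time percentiles against a balanced endpoint. It is not the thesis' actual harness; the endpoint URL, request count and concurrency level are assumptions chosen only for illustration.

```python
# Minimal load-generation sketch: fire concurrent HTTP requests at a
# balancer endpoint and report achieved requests/second and latencies.
# ENDPOINT, REQUESTS and CONCURRENCY are illustrative placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://balancer.example.local/health"  # hypothetical test endpoint
REQUESTS = 1000
CONCURRENCY = 50

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(one_request, range(REQUESTS)))
elapsed = time.perf_counter() - start

latencies.sort()
print(f"throughput:  {REQUESTS / elapsed:.1f} req/s")
print(f"p50 latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
print(f"p99 latency: {latencies[int(len(latencies) * 0.99)] * 1000:.1f} ms")
```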

    Multistage Switching Architectures for Software Routers

    Software routers based on personal computer (PC) architectures are becoming an important alternative to proprietary and expensive network devices. However, software routers suffer from many limitations of the PC architecture, including, among others, limited bus and central processing unit (CPU) bandwidth, high memory access latency, limited scalability in terms of number of network interface cards, and lack of resilience mechanisms. Multistage PC-based architectures can be an interesting alternative since they permit us to i) increase the performance of single software routers, ii) scale router size, iii) distribute packet manipulation and control functionality, iv) recover from single-component failures, and v) incrementally upgrade router performance. We propose a specific multistage architecture, exploiting PC-based routers as switching elements, to build a high-speed, large-size, scalable, and reliable software router. A small-scale prototype of the multistage router is currently up and running in our labs, and performance evaluation is under way.
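
    As a language-agnostic illustration of how a front stage of such an architecture could spread traffic across back-end switching elements, the toy sketch below hashes a flow identifier to pick one element so that all packets of a flow follow the same path. This is not the prototype's implementation; the element names and flow fields are assumptions.

```python
# Toy sketch of a load-balancing front stage: the packet's flow 5-tuple is
# hashed to select one of the back-end PC-based switching elements, keeping
# every packet of a flow on the same element. Purely illustrative.
import hashlib

BACK_END_ELEMENTS = ["pc-router-0", "pc-router-1", "pc-router-2", "pc-router-3"]

def pick_element(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha1(key).hexdigest(), 16)
    return BACK_END_ELEMENTS[digest % len(BACK_END_ELEMENTS)]

print(pick_element("10.0.0.1", "192.168.1.7", 40000, 80, "tcp"))
```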

    Experimental setup for investigating the efficient load balancing algorithms on virtual cloud

    Cloud computing has emerged as the primary choice for developers in building applications that require high-performance computing. Virtualization technology has helped in distributing resources to multiple users. Increased use of cloud infrastructure has led to the challenge of developing a load-balancing mechanism that provides optimized use of resources and better performance. Round robin and least connections load-balancing algorithms have been developed to allocate user requests across a cluster of servers in the cloud in a time-bound manner. In this paper, we apply the round robin and least connections load-balancing approaches using HAProxy over virtual machine clusters and web servers. The experimental results are visualized and summarized using Apache JMeter, and a comparative study of round robin and least connections is presented. The experimental setup and results show that the round robin algorithm performs better than the least connections algorithm on all load-balancer metrics measured in this paper.
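
    The two selection policies compared in this paper can be summarised in a few lines. The sketch below is illustrative only (it is neither the HAProxy implementation nor the paper's testbed; the server names are placeholders): round robin cycles through servers regardless of load, while least connections picks the server with the fewest in-flight requests.

```python
# Illustrative selection logic for the two policies compared above.
import itertools

SERVERS = ["web-1", "web-2", "web-3"]

# Round robin: independent of load, each new request goes to the next server.
rr_cycle = itertools.cycle(SERVERS)
def round_robin():
    return next(rr_cycle)

# Least connections: track active connections and pick the least busy server.
active = {s: 0 for s in SERVERS}
def least_connections():
    server = min(active, key=active.get)
    active[server] += 1          # connection opened
    return server

def connection_closed(server):
    active[server] -= 1          # call when the request completes

print([round_robin() for _ in range(4)])        # web-1, web-2, web-3, web-1
print([least_connections() for _ in range(4)])  # spreads by current load
```

    In HAProxy these two policies correspond to the `balance roundrobin` and `balance leastconn` settings on a back end.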

    FLICK: developing and running application-specific network services

    Data centre networks are increasingly programmable, with application-specific network services proliferating, from custom load-balancers to middleboxes providing caching and aggregation. Developers must currently implement these services using traditional low-level APIs, which neither support natural operations on application data nor provide efficient performance isolation. We describe FLICK, a framework for the programming and execution of application-specific network services on multi-core CPUs. Developers write network services in the FLICK language, which offers high-level processing constructs and application-relevant data types. FLICK programs are translated automatically to efficient, parallel task graphs, implemented in C++ on top of a user-space TCP stack. Task graphs have bounded resource usage at runtime, which means that the graphs of multiple services can execute concurrently without interference using cooperative scheduling. We evaluate FLICK with several services (an HTTP load-balancer, a Memcached router and a Hadoop data aggregator), showing that it achieves good performance while reducing development effort
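
    FLICK programs themselves compile to C++ task graphs; as a language-agnostic illustration of the cooperative-scheduling idea mentioned above (not FLICK code, and with invented service names), the sketch below interleaves several services, each doing one bounded unit of work before yielding control, so that no service monopolises the core.

```python
# Toy cooperative scheduler: each "service" is a generator that yields after a
# bounded unit of work, so several services share one core without one of
# them monopolising it. This only illustrates the scheduling idea, not FLICK.
from collections import deque

def service(name, items):
    for item in items:
        # process one bounded work item, then yield control back
        yield f"{name} handled {item}"

def run_cooperatively(services):
    ready = deque(services)
    while ready:
        task = ready.popleft()
        try:
            print(next(task))
            ready.append(task)       # re-queue: bounded work per turn
        except StopIteration:
            pass                     # service finished

run_cooperatively([
    service("http-lb", ["req-1", "req-2"]),
    service("memcached-router", ["get a", "set b"]),
])
```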

    Load Balancing Algorithms In Software Defined Network

    Compared with traditional networks, SDN networks offer great advantages in many respects, but they also suffer from load imbalance. If load is distributed unevenly across an SDN network, the performance of the network is greatly affected. Many SDN-based load-balancing strategies have been proposed to improve the performance of SDN networks. This paper therefore presents the findings of a comprehensive review, intended to further the understanding of load-balancing algorithms in SDN.

    Control Strategies for Improving Cloud Service Robustness

    This thesis addresses challenges in increasing the robustness of cloud-deployed applications and services to unexpected events and dynamic workloads. Without precautions, hardware failures and unpredictable large traffic variations can quickly degrade the performance of an application due to a mismatch between the provisioned resources and the capacity needs. Similarly, disasters such as power outages and fire are unexpected events on a larger scale that threaten the integrity of the underlying infrastructure on which an application is deployed. First, the self-adaptive software concept of brownout is extended to replicated cloud applications. By monitoring the performance of each application replica, brownout is able to counteract temporary overload situations by reducing the computational complexity of jobs entering the system. To avoid existing load balancers interfering with the brownout functionality, brownout-aware load balancers are introduced. Simulation experiments show that the proposed load balancers outperform existing ones in providing a high quality of service to as many end users as possible. Experiments in a testbed environment further show how a replicated brownout-enabled application is able to maintain high performance during overloads compared to its non-brownout equivalent. Next, a feedback controller for cloud autoscaling is introduced. Using a novel way of modeling the dynamics of a typical cloud application, a mechanism similar to the classical Smith predictor is presented to compensate for delays in reconfiguring resource provisioning. Simulation experiments show that the feedback controller achieves faster control of the response times of a cloud application than a threshold-based controller. Finally, a solution for handling the trade-off between performance and disaster tolerance for geo-replicated cloud applications is introduced. An automated mechanism for differentiating application traffic and replication traffic, and for dynamically managing their bandwidth allocations using an MPC controller, is presented and evaluated in simulation. Comparisons with commonly used static approaches reveal that, in overload situations, the proposed solution provides increased flexibility in managing the trade-off between performance and data consistency.
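
    The brownout idea mentioned above can be illustrated with a very small feedback loop: the probability of serving the optional, computationally heavy part of a response is lowered when measured response times exceed a setpoint and raised again when there is headroom. The sketch below is only an illustration under assumed gains and setpoints, not the thesis' controllers, and the response-time model is a stand-in for a real measurement.

```python
# Sketch of a brownout-style "dimmer": theta is the probability of serving
# the optional, expensive part of a response. A simple integral controller
# nudges theta so the measured response time tracks a setpoint. The gain,
# setpoint and response-time model are illustrative assumptions.
import random

SETPOINT = 0.5          # target response time (seconds)
GAIN = 0.2              # integral gain
theta = 1.0             # start by serving all optional content

def measured_response_time(theta):
    # Stand-in for a real measurement: more optional content -> slower.
    return 0.2 + 0.6 * theta + random.uniform(-0.05, 0.05)

for step in range(20):
    rt = measured_response_time(theta)
    theta += GAIN * (SETPOINT - rt)      # serve less optional content when overloaded
    theta = min(1.0, max(0.0, theta))    # keep theta a valid probability
    print(f"step {step:2d}  response {rt:.2f}s  dimmer {theta:.2f}")
```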

    Towards an Automatic Microservices Manager for Hybrid Cloud Edge Environments

    Cloud computing emerged to make computing resources easier to access, enabling faster deployment of applications and services that benefit from the scalability offered by service providers. An exponential growth of the data volume received by the cloud has been registered, because almost every device used in everyday life is connected to the internet and shares information on a global scale (e.g. smartwatches, clocks, cars, industrial equipment). The increasing data volume results in higher latency for client applications and thus in a degradation of the quality of service (QoS). To address these problems, hybrid systems were born, integrating cloud resources with the various edge devices located between the cloud and the edge: Fog/Edge computing. These devices are very heterogeneous, with different resource capabilities (such as memory and computational power), and are geographically distributed. Software architectures also evolved, and the microservices architecture emerged to make application development more flexible and to increase scalability. The microservices architecture consists of decomposing monolithic applications into small services, each with a specific functionality, that can be independently developed, deployed and scaled. Due to their small size, microservices are well suited for deployment on hybrid cloud/edge infrastructures. However, the heterogeneity of those deployment locations makes microservices' management and monitoring rather complex. Monitoring, in particular, is essential when considering that microservices may be replicated and migrated across the cloud/edge infrastructure. The main problem this dissertation aims to contribute to is building an automatic microservices management system that can be deployed on hybrid cloud/fog infrastructures. Such an automatic system will allow edge-enabled applications to adapt their deployment at runtime in response to variations in workloads and in the available computational resources. Towards this end, this work is a first step in integrating two existing projects that, combined, may support such an automatic system. One project performs automatic management of microservices but relies only on a heavyweight monitor, Prometheus, as its cloud monitor; the second project is a lightweight adaptive monitor. This thesis integrates the lightweight monitor into the automatic microservices manager.

    Modeling and Control of Server-based Systems

    When deploying networked computing-based applications, proper management of the server-side resources is essential for maintaining quality of service and cost efficiency. The work presented in this thesis is based on six papers, all investigating problems related to resource management of server-based systems. Using a queueing-system approach, we model the performance of a database system subjected to write-heavy traffic; we then evaluate the model in simulation and validate that it accurately mimics the behaviour of a real test bed. In collaboration with Ericsson, we model and design a per-request admission control scheme for a Mobile Service Support System (MSS); the model is then validated and the control scheme is evaluated in a test bed. We also investigate the feasibility of estimating the state of a server in an MSS using an event-based Extended Kalman Filter. In the brownout paradigm of server resource management, the amount of work required to serve a client is adjusted to compensate for temporary resource shortages. In this thesis we investigate how to perform load balancing over such self-adaptive server instances; the load-balancing schemes are evaluated both in simulations and in test-bed experiments. Further, we investigate how to employ delay-compensated feedback control to automatically adjust the amount of resources deployed to a cloud application in the presence of a large, stochastic delay. The delay-compensated control scheme is evaluated in simulations, and the conclusion is that it can be made fast and responsive compared to an industry-standard solution.
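
    As a small illustration of the queueing-system view of a server mentioned above (not the thesis' model of the database system; the arrival and service rates are arbitrary), the classical M/M/1 mean response time E[T] = 1/(mu - lambda) can be checked against a short simulation:

```python
# Tiny M/M/1 sanity check: simulate a single FIFO server with Poisson
# arrivals (rate lam) and exponential service (rate mu), then compare the
# observed mean response time with the analytic 1 / (mu - lam).
# The rates are arbitrary illustrative values.
import random

lam, mu = 8.0, 10.0
random.seed(1)

t = 0.0
server_free_at = 0.0
total_response, jobs = 0.0, 200_000
for _ in range(jobs):
    t += random.expovariate(lam)            # next arrival time
    start = max(t, server_free_at)          # wait if the server is busy
    service = random.expovariate(mu)
    server_free_at = start + service
    total_response += server_free_at - t    # waiting time + service time

print(f"simulated mean response: {total_response / jobs:.3f}s")
print(f"analytic 1/(mu - lam):   {1.0 / (mu - lam):.3f}s")
```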