
    Web Server Performance of Apache and Nginx: A Systematic Literature Review

    Web server performance is cardinal to effective and efficient information communication. Performance measures include response time, service rate, memory usage, and CPU utilization, among others. A review of various studies indicates close comparisons among web servers including Apache, IIS, Nginx and Lighttpd. The results of these studies indicate that response time, CPU utilization and memory usage varied across web servers depending on the model used. However, it was found that Nginx outperformed Apache on many metrics, including response time, CPU utilization and memory usage; notably, Nginx's memory usage does not increase with the number of requests. It was concluded that although Nginx outperformed Apache, both web servers are powerful, flexible and capable, and the decision of which web server to adopt depends entirely on the needs of the user. Since metrics such as uptime (the amount of time a server stays up and running properly), which reflects the reliability and availability of the server, and landing page speed were not included, we propose that future studies consider uptime and landing page speed when testing web server performance. Keywords: web server, web server performance, Apache, Nginx

    A Performance Comparison of Hypervisors for Cloud Computing

    The virtualization of IT infrastructure enables the consolidation and pooling of IT resources so that they can be shared across diverse applications, offsetting the limitation of shrinking resources and growing business needs. Virtualization provides a logical abstraction of physical computing resources and creates computing environments that are not restricted by physical configuration or implementation. Virtualization is very important for cloud computing because the delivery of services is simplified by providing a platform for optimizing complex IT resources in a scalable manner, which makes cloud computing more cost effective. The hypervisor plays an important role in the virtualization of hardware: it is software that provides a virtualized hardware environment to support running multiple operating systems concurrently on one physical server. Cloud computing has to support multiple operating environments, and the hypervisor is the ideal delivery mechanism. The intent of this thesis is to quantitatively and qualitatively compare the performance of the VMware ESXi 4.1, Citrix XenServer 5.6 and Ubuntu 11.04 Server KVM hypervisors using the standard benchmark SPECvirt_sc2010 v1.01, formulated by the Standard Performance Evaluation Corporation (SPEC), under various workloads simulating real-life situations.

    Topics in Power Usage in Network Services

    The rapid advance of computing technology has created a world powered by millions of computers. Often these computers sit idle, consuming energy unnecessarily in spite of all the efforts of hardware manufacturers. This thesis examines proposals to determine when to power down computers without negatively impacting the service they are used to deliver, compares and contrasts the efficiency of virtualisation with containerisation, and investigates the energy efficiency of the popular cryptocurrency Bitcoin. We begin by examining the current corpus of literature and defining the key terms we need to proceed. Then we propose a technique for improving the energy consumption of servers by moving them into a sleep state and employing a low-powered device to act as a proxy in their place. After this we investigate the energy efficiency of virtualisation and compare two of the most common means used to achieve it. We then look at the cryptocurrency Bitcoin, considering the energy consumption of bitcoin mining and whether, compared with the value of bitcoin, mining is profitable. Finally we conclude by summarising the results and findings of this thesis. This work increases our understanding of some of the challenges of energy-efficient computation and proposes novel mechanisms to save energy.
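    The sleep-and-proxy technique described above depends on being able to wake the sleeping server on demand; a standard mechanism for this is Wake-on-LAN, in which the proxy broadcasts a "magic packet" addressed to the server's NIC. The sketch below is illustrative (the MAC address, broadcast address, and port are placeholders, and this is not the thesis's actual proxy implementation).

```python
import socket


def build_magic_packet(mac: str) -> bytes:
    """A Wake-on-LAN magic packet: six 0xFF bytes followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16


def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet so the sleeping server's NIC powers it on."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```

    In the proxy scenario, the low-powered device would answer trivial traffic (such as ARP) itself and call `wake()` only when a request actually requires the full server.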

    An innovative approach to performance metrics calculus in cloud computing environments: a guest-to-host oriented perspective

    In virtualized systems, the task of profiling and resource monitoring is not straightforward. Many datacenters perform CPU overcommitment using hypervisors, running multiple virtual machines on a single computer where the total number of virtual CPUs exceeds the total number of physical CPUs available. From a customer's point of view, it would indeed be interesting to know whether the purchased service levels are effectively respected by the cloud provider. The innovative approach to performance profiling described in this work is based on the use of virtual performance counters, only recently made available by some hypervisors to their virtual machines, to implement guest-wide profiling. Although it is not possible for a virtual machine to access the Virtual Machine Monitor, with this method it can gather enough information to deduce the state of resource overcommitment of the virtualization host on which it runs. Tests have been carried out inside the compute nodes of FIWARE Genoa Node, an instance of a widely distributed federated community cloud, based on OpenStack and KVM. AgiLab-DITEN, the laboratory I belonged to and where I conducted my studies, together with TnT-Lab–DITEN and CNIT-GE-Unit, designed, installed and configured the whole Genoa Node, which was hosted in DITEN-UniGE equipment rooms. All the software measuring instruments, operating systems and programs used in this research are publicly available and free, and can be easily installed in a micro instance of a virtual machine, rapidly deployable also in public clouds.
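    One guest-visible signal of host CPU overcommitment that requires no access to the Virtual Machine Monitor is "steal" time, which Linux exposes in /proc/stat. The sketch below illustrates the guest-to-host idea in simplified form; it is not the paper's virtual-performance-counter method, and the field layout assumed is the standard Linux one (user, nice, system, idle, iowait, irq, softirq, steal, guest, guest_nice).

```python
def read_cpu_steal_fraction(stat_text: str) -> float:
    """Fraction of aggregate CPU time stolen by the hypervisor.

    Parses the first aggregate 'cpu' line of /proc/stat. A persistently
    high steal fraction inside a guest suggests the host's physical CPUs
    are overcommitted.
    """
    for line in stat_text.splitlines():
        if line.startswith("cpu "):
            fields = [int(x) for x in line.split()[1:]]
            total = sum(fields)
            steal = fields[7] if len(fields) > 7 else 0
            return steal / total if total else 0.0
    raise ValueError("no aggregate 'cpu' line found")


if __name__ == "__main__":
    with open("/proc/stat") as f:
        print(f"steal fraction: {read_cpu_steal_fraction(f.read()):.4f}")
```

    Sampling this fraction over an interval (rather than since boot) gives a more responsive indicator of current contention on the host.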

    Challenges in real-time virtualization and predictable cloud computing

    Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancements in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.

    Data Center Server Virtualization Solution Using Microsoft Hyper-V

    Cloud computing has helped businesses scale within minutes and take their services to their customers much faster. Virtualization is considered the core computing layer of a cloud setup. The problems of a traditional data center environment, such as space, power, resilience, centralized data management, and rapid deployment of servers as business needs change, have been addressed with the introduction of Hyper-V (a server virtualization solution from Microsoft). Companies can now deploy multiple servers and applications with just a click, and they can also centrally manage data storage. This paper focuses on the differences between the VMware and Hyper-V virtualization platforms and on building a virtualized infrastructure solution using Hyper-V.

    Building the Infrastructure for Cloud Security

    Computer science

    Scaling a Kubernetes Cluster

    Kubernetes is a container orchestration tool that has become widely adopted for deploying and scaling containers. Devatus Oy and their subsidiary company Fliq Oy are interested in knowing how containerized applications can be scaled on Kubernetes. The objective of this thesis is to research how a Kubernetes cluster can be scaled, as well as the containerized applications running on it. This thesis begins with an introduction to the background knowledge needed to understand what Kubernetes is. Cloud computing and distributed systems are introduced, since Kubernetes is a distributed system used for the most part in cloud environments. Furthermore, distributed applications and workloads are introduced through the concept of microservices. The concept of containerizing applications is thoroughly introduced to explain the runtime environment of the applications deployed to Kubernetes. Finally, the Kubernetes architecture and its main components are introduced to show how container orchestration works. The research on Kubernetes scalability is divided into three parts. The first part examines how containerized applications can be scaled on Kubernetes. The second part focuses on how the Kubernetes cluster itself can be scaled. The final part consists of load testing one of Fliq's example REST API applications deployed to a local Kubernetes cluster; the purpose of the load testing is to gain further insight into scaling applications running on Kubernetes. Load test results are compared between the initial deployment configuration and the scaled application. The results show that containerized applications can be scaled both vertically and horizontally. Vertical scaling can be achieved by increasing the requested and limited CPU and RAM resources for a Pod. Horizontal scaling can be achieved by increasing the number of Pod replicas and placing a Service in front of the Pods to load-balance the incoming traffic. The load tests show that both vertical and horizontal scaling can increase the number of users supported by an application deployed to Kubernetes. Horizontal scaling is preferred for Fliq's example REST API, since it decreased average response time and increased throughput.
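    The two scaling dimensions described above can be sketched as Kubernetes manifests. This is a minimal illustration only; the Deployment name, labels, image, ports, and resource figures are all placeholders, not Fliq's actual application.

```yaml
# Hypothetical Deployment illustrating both scaling dimensions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-rest-api          # placeholder name
spec:
  replicas: 3                     # horizontal scaling: raise the replica count
  selector:
    matchLabels:
      app: example-rest-api
  template:
    metadata:
      labels:
        app: example-rest-api
    spec:
      containers:
      - name: api
        image: example/rest-api:latest   # placeholder image
        ports:
        - containerPort: 8080
        resources:
          requests:               # vertical scaling: raise requests/limits
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
---
# Service that load-balances incoming traffic across the Pod replicas.
apiVersion: v1
kind: Service
metadata:
  name: example-rest-api
spec:
  selector:
    app: example-rest-api
  ports:
  - port: 80
    targetPort: 8080
```

    Raising `replicas` (or letting a HorizontalPodAutoscaler do so) scales horizontally, while editing the `resources` block scales an individual Pod vertically.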