100 research outputs found

    Scalability and performance of a virtualized SAP system

    Enterprise resource planning (ERP) systems, like SAP systems, form the backbone of the business processes in today’s large enterprises. This is why weak performance of a SAP system tremendously decreases the productivity of its users and thus of the enterprise. Today many SAP hosting providers make use of virtualization techniques, but disregard the impact of such solutions. In this paper we focus on the impact of virtualization solutions on the performance of SAP systems and follow a quantitative approach to obtain several benchmark results. We make four contributions: 1) On the basis of a quantitative investigation we give a recommendation on how to configure a SAP system for heavy workloads; the recommendation helps to avoid hardware resource shortages. 2) We show that the average performance of a SAP system increases by up to 2% if a container-based virtualization solution is used. 3) We show that the performance of a SAP system decreases by up to 33% if a Xen-based virtualization solution is used. 4) On the basis of the quantitative results we give recommendations for a new sizing process in order to meet the requirements of virtualized SAP systems.

    Optimizing network performance in virtual machines

    In recent years, there has been rapid growth in the adoption of virtual machine technology in data centers and cluster environments. This trend towards server virtualization is driven by two main factors: the savings in hardware cost that can be achieved through the use of virtualization, and the increased flexibility in the management of hardware resources in a cluster environment. An important consequence of server virtualization is the negative impact it has on the networking performance of server applications running in virtual machines (VMs). In this thesis, we address the problem of efficiently virtualizing the network interface in Type-II virtual machine monitors. In the Type-II architecture, the VMM relies on a special 'host' operating system to provide the device drivers to access I/O devices, and executes the drivers within the host operating system. Using the Xen VMM as an example of this architecture, we identify fundamental performance bottlenecks in the network virtualization architecture of Type-II VMMs. We show that locating the device drivers in a separate host VM is the primary reason for performance degradation in Type-II VMMs, for two reasons: a) the switching between the guest and the host VM for device driver invocation, and b) the I/O virtualization operations required to transfer packets between the guest and the host address spaces. We present a detailed analysis of the virtualization overheads in the Type-II I/O architecture, and we present three solutions that explore the performance achievable when performing network virtualization at three different levels: in the host OS, in the VMM, and in the NIC hardware. Our first solution consists of a set of packet aggregation optimizations that explores the performance achievable while retaining the Type-II I/O architecture in the Xen VMM. This solution retains the core functionality of I/O virtualization, including device driver execution, in the Xen 'driver domain'. With this set of optimizations, we achieve a factor of two to four improvement in the networking performance of Xen guest domains. In our second solution, we move the task of I/O virtualization and device driver execution from the host OS to the Xen hypervisor. We propose a new I/O virtualization architecture, called the TwinDrivers framework, which combines the performance advantages of Type-I VMMs with the safety and software engineering benefits of Type-II VMMs. (In a Type-I VMM, the device driver executes directly in the hypervisor, and gives much better performance than a Type-II VMM.) The TwinDrivers architecture results in another factor-of-two improvement in networking performance for Xen guest domains. Finally, in our third solution, we describe a hardware-based approach to network virtualization, in which we move the task of network virtualization into the network interface card (NIC). We develop a specialized network interface (CDNA) which allows guest operating systems running in VMs to directly access a private, virtual context on the NIC for network I/O, bypassing the host OS entirely. This approach yields performance benefits similar to the TwinDrivers software-only approach. Overall, our solutions help significantly bridge the gap between network performance in a virtualized environment and a native environment, eventually achieving network performance in a virtual machine within 70% of native performance.
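    The rationale behind the packet-aggregation optimizations can be illustrated with a minimal sketch (all names and cost figures here are hypothetical, not from the thesis): a domain switch between the guest and the driver domain has a fixed cost, so transferring packets in batches amortizes that cost over many packets instead of paying it once per packet.

```python
# Illustrative cost model for packet aggregation (assumed numbers):
# one guest<->driver-domain switch per batch, plus a per-packet copy.

SWITCH_COST_US = 5.0   # assumed fixed cost of one domain switch (microseconds)
COPY_COST_US = 0.5     # assumed per-packet transfer cost (microseconds)

def transfer_cost(num_packets, batch_size):
    """Total cost when packets are aggregated into batches of batch_size."""
    batches = -(-num_packets // batch_size)  # ceiling division
    return batches * SWITCH_COST_US + num_packets * COPY_COST_US

# One domain switch per packet vs. batches of 32 packets:
per_packet = transfer_cost(1000, 1)    # 1000 switches dominate the cost
aggregated = transfer_cost(1000, 32)   # only 32 switches
print(per_packet / aggregated)         # aggregation cuts total cost severalfold
```

    The same structure explains why the gains grow with the fixed switch cost: the larger the per-switch overhead relative to the per-packet cost, the more batching helps.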

    Optimizing Network Virtualization in Xen

    Best Paper Award. In this paper, we propose and evaluate three techniques for optimizing network performance in the Xen virtualized environment. Our techniques retain the basic Xen architecture of locating device drivers in a privileged 'driver' domain with access to I/O devices, and providing network access to unprivileged 'guest' domains through virtualized network interfaces. First, we redefine the virtual network interfaces of guest domains to incorporate high-level network offload features available in most modern network cards. We demonstrate the performance benefits of high-level offload functionality in the virtual interface, even when such functionality is not supported in the underlying physical interface. Second, we optimize the implementation of the data transfer path between guest and driver domains. The optimization avoids expensive data remapping operations on the transmit path, and replaces page remapping by data copying on the receive path. Finally, we provide support for guest operating systems to effectively utilize advanced virtual memory features such as superpages and global page mappings. The overall impact of these optimizations is an improvement in transmit performance of guest domains by a factor of 4.4. The receive performance of the driver domain is improved by 35% and reaches within 7% of native Linux performance. The receive performance in guest domains improves by 18%, but still trails native Linux performance by 61%. We analyse the performance improvements in detail, and quantify the contribution of each optimization to the overall performance.
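    The idea of offering high-level offload in the virtual interface even without hardware support can be sketched as follows (a hedged illustration with hypothetical names, not the paper's code): the guest hands the virtual interface one large segment, and segmentation into MTU-sized packets is deferred; it is done in hardware if the physical NIC supports TSO, or in software otherwise, so the guest's transmit path is traversed once per large send either way.

```python
# Hedged sketch of segmentation-offload pass-through vs. software fallback.

MTU = 1500  # assumed Ethernet payload size per packet

def transmit(payload: bytes, nic_supports_tso: bool):
    """Return the list of on-the-wire segments for one large guest send."""
    if nic_supports_tso:
        return [payload]  # hardware segments it; one traversal of the stack
    # Software fallback: segment into MTU-sized chunks late, near the NIC.
    return [payload[i:i + MTU] for i in range(0, len(payload), MTU)]

big_send = bytes(64 * 1024)            # guest hands over one 64 KB segment
print(len(transmit(big_send, True)))   # 1: hardware does the splitting
print(len(transmit(big_send, False)))  # 44: software splits into MTU chunks
```

    Either way, the per-packet costs on the guest-to-driver-domain path are paid once per 64 KB send rather than once per 1500-byte packet, which is where the benefit comes from.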

    Perfomance Analysis of the Xen Hypervisor For Virtualizing Network Devices

    Acknowledging the great potential of virtualization techniques in communication networks, the aim of this project is to understand and analyze the possibilities of virtualization in the network scope. For that reason we set the objectives of the project as follows:
    • To analyze the different virtualization techniques currently available and to understand their impact on the virtualization process
    • To identify virtualization tools supporting the above virtualization techniques
    • To devise a set of scenarios where virtualization can play a role, and to implement a subset of them for evaluation purposes
    • To devise a set of performance indexes to evaluate the behaviour of virtual network scenarios
    • To select a virtualization tool and run a set of experiments with the virtual network infrastructure
    • To propose a monitoring mechanism for the resource usage of each virtual machine
    • To extrapolate the evaluation results of the proposed tests to more complex scenarios
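    The proposed per-VM resource monitoring could take the shape sketched below (a hypothetical interface, not the project's tool): periodically sample the cumulative CPU-time counters that tools such as xentop expose per virtual machine, and derive utilisation from the difference between two samples.

```python
# Hedged sketch: per-VM CPU utilisation from two snapshots of
# cumulative CPU-seconds counters (as a monitor might poll from xentop).

def utilisation(prev: dict, curr: dict, interval_s: float) -> dict:
    """Fraction of one CPU used by each VM between two samples."""
    return {vm: (curr[vm] - prev[vm]) / interval_s for vm in curr}

sample_t0 = {"vm1": 100.0, "vm2": 250.0}   # cumulative CPU seconds at t0
sample_t1 = {"vm1": 104.0, "vm2": 250.5}   # same counters 5 s later
print(utilisation(sample_t0, sample_t1, 5.0))  # {'vm1': 0.8, 'vm2': 0.1}
```

    The same delta-over-interval pattern extends to network bytes, disk I/O, and memory counters, which would cover the monitoring objective above.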

    Improving I/O Performance using Cache as a Service on Cloud

    Caching is gaining popularity in the cloud world. It is one of the key technologies for bridging the performance gap between memory hierarchies by exploiting spatial and temporal locality. In cloud systems, heavy I/O activity is associated with many applications and degrades their performance; such applications would benefit the most from caching. We use a Cache as a Service (CaaS) model as a cost-efficient cache solution to the disk I/O problem. We have built a remote-memory-based cache that is pluggable and file-system independent to support various configurations. The cloud server process introduces a pricing model together with the elastic cache system. This increases the disk I/O performance of the IaaS and reduces the usage of physical machines. DOI: 10.17762/ijritcc2321-8169.150516
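    The core of a CaaS-style elastic cache can be sketched as below (a minimal illustration with hypothetical names, not the paper's implementation): an LRU cache sits between the application and slow disk I/O, with a capacity that a provider could size and price elastically.

```python
# Hedged sketch of an elastic LRU cache fronting disk reads.
from collections import OrderedDict

class ElasticCache:
    def __init__(self, capacity_items: int):
        self.capacity = capacity_items   # what a CaaS provider would price
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, load_from_disk):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)   # mark as most recently used
            return self.data[key]
        self.misses += 1
        value = load_from_disk(key)      # the expensive disk I/O being avoided
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used entry
        return value

cache = ElasticCache(capacity_items=2)
disk = lambda k: k.upper()               # stand-in for a slow disk read
cache.get("a", disk); cache.get("b", disk); cache.get("a", disk)
print(cache.hits, cache.misses)          # 1 hit, 2 misses so far
```

    Growing or shrinking `capacity_items` at runtime is what makes the cache "elastic"; the hit/miss counters provide the basis for a pricing model.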

    Time delay and its effect in a virtual lab created using cloud computing

    The emergence of Cloud Computing, as a model of virtualized physical resources and virtualized infrastructure, offers the opportunity of outsourcing the implementation of a Virtual Lab Manager. Virtual Lab Management has come to be considered the Holy Grail in the deployment and administration of labs created in a virtual environment. With the advent of Cloud Computing, new opportunities are developing that promise to cover much of the future of Virtual Labs. Designing network and information labs with real equipment and tools does not make sense from a cost-benefit standpoint, as hardware becomes obsolete in a short span of time; replacing real labs with labs in a virtual environment is therefore a must these days for teaching information, security, and network classes. Choosing an adequate Virtual Lab environment solves the problem of creating an adequate academic environment where teachers can serve as effective guides for students, who gain a lot of freedom and first-hand experience in the subject under consideration. A Virtual Lab Manager in a Cloud Computing environment reduces cost even further, but raises doubts about the time delays inherent in such a technology. After choosing the one created by VMLogix for Amazon AWS EC2, this paper sets out to answer a question: since a Virtual Lab is a real-time application, how is it affected by time delays and bandwidth when accessed from remote places? The same criteria used in networks for video on demand, voice-over-IP, or online business systems are applied in the presented work, even though a Virtual Lab of any kind involves much more interactivity.
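    The kind of delay criterion borrowed from VoIP can be sketched as follows (thresholds and names are illustrative assumptions, not from the paper): measure the round-trip time of a remote lab operation and compare the one-way delay against an interactivity budget.

```python
# Hedged sketch: judging remote virtual-lab usability by a VoIP-style
# one-way delay budget (assumed value, not the paper's criterion).
import time

INTERACTIVE_BUDGET_MS = 150.0   # assumed one-way delay budget

def measure_rtt_ms(operation) -> float:
    """Time one remote operation and return its delay in milliseconds."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000.0

def usable(rtt_ms: float) -> bool:
    return rtt_ms / 2.0 <= INTERACTIVE_BUDGET_MS   # one-way ~ RTT / 2

fast_op = lambda: time.sleep(0.01)      # stand-in for a responsive lab action
print(usable(measure_rtt_ms(fast_op)))  # ~10 ms RTT is well within budget
```

    Repeating such measurements from different access networks is the kind of experiment the extrapolation to remote places would rest on.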