
    Software platform virtualization in chemistry research and university teaching

    Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, Linux or Mac OS X. Instead of installing software on different computers, those applications can be installed on a single computer using virtual machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories.
    Results: Virtual machines are commonly used for cheminformatics software development and testing. By benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open-source software in hands-on computer teaching labs.
    Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and for developing software for different operating systems. To obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lower maintenance and hardware costs, especially in small research labs. Virtual machines can also prevent virus infections and security breaches when used as a sandbox for internet access and software testing. Complex software setups can be created in virtual machines and later deployed easily to multiple computers for hands-on teaching classes. We also discuss the popularity of bioinformatics compared to cheminformatics, and the lack of cheminformatics education at universities worldwide.
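The 5-10% speed penalty reported above is straightforward to estimate yourself: time the same CPU-bound workload natively and inside a virtual machine, then compare the two timings. A minimal sketch in Python (the workload and the example timings are illustrative, not the paper's benchmark):

```python
import time

def workload(n=200_000):
    # A CPU-bound stand-in for a chemistry computation.
    total = 0.0
    for i in range(1, n):
        total += (i ** 0.5) / i
    return total

def overhead_percent(t_native, t_vm):
    """Relative slowdown of the virtualized run versus the native run."""
    return 100.0 * (t_vm - t_native) / t_native

# Run the same timing once natively and once inside the VM, then compare.
start = time.perf_counter()
workload()
elapsed = time.perf_counter() - start
print(f"workload took {elapsed:.3f} s on this system")

# With example timings of 100 s native and 107 s virtualized:
print(overhead_percent(100.0, 107.0))  # → 7.0, within the reported 5-10% band
```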

    Performance Analysis of NFS Protocol Usage on VMware ESXi Datastore

    Hypervisor virtualization using a bare-metal architecture allows resources to be allocated and provided to each created virtual machine. Resources such as CPU and memory can be added or upgraded on the host hardware (the virtualization server) at any time so that more virtual machines can be created. However, the hard drive capacity cannot always be upgraded once data or fully operational virtual machines already reside on the host hardware, owing to the RAID system and the established hard drive partitioning. Hard drive capacity on a virtualization server can instead be expanded by using the NFS protocol on a NAS server. vSphere ESXi can use the NFS protocol to store the virtual disks used by virtual machines (guest operating systems) on network storage, in addition to using the local hard drive of the host hardware. When a virtual machine runs its guest operating system, it issues write/read requests over the network via NFS to the virtual disk stored on the NAS. In this research, data communication performance was measured for NFS-backed virtual machine datastores as well as for local hard drive datastores on the server. Measurements were performed by sending data of various sizes from a client to the server (a virtual machine), measuring write/read speed on the server's cache memory and hard drive, and measuring the RTT (round-trip time) delay between client and server. The testing was conducted on virtual machines that use the local drive and NFS as the virtual disk datastore.
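Sequential write speed of the kind measured here can be approximated with a short timing script. A hedged sketch in Python, using a local temporary file as a stand-in for the NFS-backed or local datastore path:

```python
import os
import tempfile
import time

def measure_write_mb_per_s(path, size_mb=64, block_kb=512):
    """Time sequential block writes to `path` and return throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data out of the page cache to the disk
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name  # in the paper's setup this would be a file on the datastore
try:
    print(f"{measure_write_mb_per_s(target):.1f} MB/s")
finally:
    os.remove(target)
```

Pointing `target` at a file on an NFS-mounted path versus a local path gives the local-vs-NFS comparison the abstract describes; RTT would be measured separately at the network layer.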

    Campus Cloud Computing Testbed at Diponegoro University

    This paper aims to develop a cloud computing testbed on the Undip campus. The testbed will be used as a reliable and scalable ICT infrastructure for Undip servers, such as web, repository and database servers. Each server is implemented with Ubuntu Server OS and runs on a virtual machine. Virtual machines are generated from the virtual infrastructure by the KVM hypervisor. The testbed uses one computer as the cloud server and two computers as computing nodes, yielding three virtual machines that run a web server, an Ubuntu repository server, and a SQL database server. Its functionality, reliability and scalability were tested by deploying and operating the cloud system on the testbed connected to the Undip global network. The system's ability to increase or decrease its resource capacity shows that it is ready for use as the next generation of Undip ICT infrastructure.

    Design, implementation, and testing of advanced virtual coordinate-measuring machines

    Copyright @ 2011 IEEE. This article has been made available through the Brunel Open Access Publishing Fund. Advanced virtual coordinate-measuring machines (CMMs) (AVCMMs) have recently been developed at Brunel University, providing vivid graphical representation and powerful simulation of CMM operations, together with Monte-Carlo-based uncertainty evaluation. In an integrated virtual environment, the user can plan an inspection strategy for a given task, carry out virtual measurements, and evaluate the uncertainty associated with the measurement results, all without the need to use a physical machine. The obtained uncertainty estimate can serve as rapid feedback for the user to optimize the inspection plan in the AVCMM before actual measurements, or as an evaluation of the measurements performed. This paper details the methodology, design, and implementation of the AVCMM system, including CMM modeling, probe contact and collision detection, error modeling and simulation, and uncertainty evaluation. It further reports experimental results from testing the AVCMM.
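The Monte-Carlo-based uncertainty evaluation described above can be illustrated in miniature: repeat a virtual measurement many times with randomly drawn probe errors and take the spread of the results as the uncertainty estimate. A minimal sketch (the Gaussian error model and all values are invented for illustration, not the AVCMM's actual error model):

```python
import random
import statistics

def monte_carlo_uncertainty(nominal_mm, error_sd_mm=0.002, trials=10_000, seed=42):
    """Simulate repeated virtual measurements of a single length and
    return (mean, standard uncertainty), both in millimetres."""
    rng = random.Random(seed)
    # Each trial perturbs the nominal value with a simulated probe error.
    samples = [nominal_mm + rng.gauss(0.0, error_sd_mm) for _ in range(trials)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, u = monte_carlo_uncertainty(50.0)
print(f"measured {mean:.4f} mm, standard uncertainty u = {u:.4f} mm")
```

A real AVCMM run would replace the single Gaussian term with the machine's modeled geometric, thermal and probing error sources, but the estimate-from-spread principle is the same.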

    AntispamLab - A Tool for Realistic Evaluation of Email Spam Filters

    Existing tools for testing spam filters evaluate a filter instance by simply feeding it a stream of emails, possibly also providing feedback to the filter about the correctness of the detection. In such a scenario the evaluated filter is disconnected from the network of email servers, filters, and users, which makes the approach inappropriate for testing the many filters that exploit information about spam bulkiness, users' actions, and social relations among users. The corresponding evaluation results may be wrong, because the information normally used by the filter is missing, incomplete or inappropriate. In this paper we present a tool for testing spam filters in a very realistic scenario. The tool consists of a set of Python scripts for Unix/Linux environments. It takes as inputs the filter to be tested and an affordable set of interconnected machines (e.g., PlanetLab machines, or locally created virtual machines). When started from a central place, the tool uses the provided machines to build a network of real email servers, installs instances of the filter, deploys and runs simulated email users and spammers, and computes the detection statistics. Email servers are implemented using Postfix, a standard Linux email server. Only per-email-server filters are currently supported; testing per-email-client filters would require additional tool development. The size of the created emailing network is constrained only by the number of available PlanetLab or virtual machines. The run time is much shorter than the simulated system time, thanks to a time-scaling mechanism. Testing a new filter is as simple as installing one copy of it in a real emailing network, which unifies the jobs of new filter development, testing and prototyping. As a usage example, we test the SpamAssassin filter.
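The time-scaling mechanism mentioned above can be sketched as a simple mapping between simulated and real time: with a scale factor k, k seconds of simulated system time pass for every real second. An illustrative sketch (the class and the factor are assumptions, not the tool's actual implementation):

```python
import time

class ScaledClock:
    """Maps real elapsed time to simulated time sped up by `scale`.
    With scale=60, one real second covers one simulated minute."""

    def __init__(self, scale=60.0):
        self.scale = scale
        self.start = time.perf_counter()

    def sim_elapsed(self):
        # Simulated seconds that have passed since the clock started.
        return (time.perf_counter() - self.start) * self.scale

    def real_delay_for(self, sim_seconds):
        """Real sleep needed to cover `sim_seconds` of simulated time."""
        return sim_seconds / self.scale

clock = ScaledClock(scale=60.0)
# An email scheduled 5 simulated minutes from now needs only 5 real seconds:
print(clock.real_delay_for(300))  # → 5.0
```

Simulated users and spammers sleep for the scaled real delay between actions, which is why a long simulated campaign finishes in a short real run.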

    Docker and Its Containerization: A Popular Evolving Technology and the Rise of Microservices

    Traditional software development processes usually result in relatively large teams working on a single, monolithic deployment artifact. As the number of services offered increases, the application inevitably grows in size. This can make the codebase overwhelming for developers to build and maintain, and there is the familiar problem that an application works on the developer's system but fails in the testing environment. Virtual machines have been used to address this, but they demand powerful and expensive infrastructure. A VM supports hardware virtualization: it behaves like a physical machine on which any OS can be booted. In hypervisor-based virtualization, the hypervisor allows multiple operating systems to share a single hardware host, yet every virtual machine still needs a complete operating-system installation, including a kernel, which makes it heavyweight. The proposed system highlights the role of container-based virtualization and Docker in shaping the future of microservice architecture. Docker is an open-source platform for building, distributing, and running applications in a portable, lightweight runtime and packaging tool known as Docker Engine. It also provides Docker Hub, a cloud service for sharing applications. Costs can be reduced by replacing traditional virtual machines with Docker containers. Microservices and containers are the modern way of building large, independent, and manageable applications. The adoption of containers will continue to grow, and the majority of microservice applications will be built on containers in the future.
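The containerization workflow described above can be sketched with a minimal Dockerfile; the files it references (requirements.txt, app.py) are placeholders for illustration, not from the paper:

```dockerfile
# Minimal sketch: package a small Python microservice as a container image.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Built with `docker build -t myservice .` and started with `docker run myservice`, the container shares the host kernel instead of booting a full guest OS with its own kernel, which is precisely what makes it lighter than a virtual machine.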

    A DevOps approach to integration of software components in an EU research project

    We present a description of the development and deployment infrastructure created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and integrated deployments within virtual machines is provided by Buildbot.
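A playbook of the kind described above might look like the following minimal Ansible sketch; the host group, package, and task names are invented for illustration and are not from the HARNESS project:

```yaml
# Minimal sketch: configure a developer VM the same way as the testbed hosts.
- hosts: harness_dev          # hypothetical inventory group
  become: true
  tasks:
    - name: Install container runtime
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure Docker is running
      service:
        name: docker
        state: started
        enabled: true
```

Because the same playbook can target a local Vagrant VM, a static testbed host, or an ephemeral Grid5000 node, every environment converges on the same configuration.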

    Experimental Study of Remote Job Submission and Execution on LRM through Grid Computing Mechanisms

    Remote job submission and execution is a fundamental requirement of distributed computing carried out with cluster computing. However, cluster computing limits usage to a single organization, whereas a Grid computing environment allows resources in other organizations to be used for remote job execution. This paper discusses the concepts of batch-job execution using an LRM (local resource manager) and using the Grid, and describes two ways of preparing a test Grid computing environment that we use for experimental testing of these concepts. It presents experimental testing of remote job submission and execution mechanisms both in the LRM-specific way and in Grid computing ways. The paper also discusses various problems encountered while working with the Grid computing environment, together with their troubleshooting. The understanding and experimental testing presented here should be very useful to researchers who are new to the field of job management in Grid. Comment: Fourth International Conference on Advanced Computing & Communication Technologies (ACCT), 201