
    Performance Implications For the Use of Virtual Machines Versus Shielded Virtual Machines in High-Availability Virtualized Infrastructures

    The use of virtualization in data centers, at service providers, or in cloud computing environments brings many benefits. Virtualization, whether of services, applications, or servers, is no longer a trend but a reality in many industries and areas, both inside and outside the technology sector. With this emerging use of virtualization, companies have been asking about the performance and security of using virtual machines in high-availability infrastructures. Controlling access to virtual machines is a security issue faced by all hypervisors, such as VMware vSphere, Hyper-V, and KVM. To make virtual machines more secure, Microsoft introduced the concept of shielded virtual machines. Taking this into account, this dissertation presents a study of the key concepts behind virtual machines, Guarded Fabric, the Host Guardian Service, guarded hosts, and shielded virtual machines. A shielded VM is a Generation 2 feature (supported on Windows Server 2012 and later) that comes with a virtual Trusted Platform Module (TPM), can only run on healthy and approved hosts in the fabric, and is encrypted using BitLocker. To support our study, an experimental test bed was set up, involving a failover cluster with native virtualization at the hardware level using Windows Server 2016 Hyper-V. In the test environment, failover clustering, FreeNAS storage, an iSCSI target, VMs, a Guarded Fabric, and shielded virtual machines were implemented and configured. After the implementation of the test bed, a set of tests and experiments was carried out to study the performance implications of using virtual machines versus shielded virtual machines in high-availability virtualized infrastructures. Finally, the results of the tests were analyzed in light of the background presented in the first part and the test bed deployed.
A set of experiments was conducted on virtual machines and shielded virtual machines to evaluate their performance in terms of CPU, RAM, and write speed. The results show that the use of shielded virtual machines leads to a small performance degradation compared to regular virtual machines; on the other hand, shielded virtual machines restrict the virtual machines to running only on trusted hosts, and prevent unauthorized administrators and malware from compromising the virtual machine.
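The CPU, RAM, and write-speed comparisons described above can be sketched as a simple benchmark harness run once in a regular VM and once in a shielded VM. This is a minimal illustration, not the dissertation's actual test suite; the workload sizes are arbitrary assumptions.

```python
import os
import tempfile
import time


def cpu_benchmark(iterations: int = 200_000) -> float:
    """Time a CPU-bound integer workload; returns seconds elapsed."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start


def write_benchmark(size_mb: int = 16, block_kb: int = 64) -> float:
    """Write size_mb of data in block_kb chunks, fsync, return MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    os.unlink(f.name)
    return size_mb / elapsed


def overhead_pct(regular: float, shielded: float) -> float:
    """Relative slowdown of the shielded measurement, in percent."""
    return (shielded - regular) / regular * 100.0
```

Running the same harness inside both VM types and feeding the paired timings to `overhead_pct` yields the kind of "small degradation" figure the abstract reports.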

    Server-Based Desktop Virtualization

    Virtualization can be accomplished at different layers in the computational stack and with different goals (servers, desktops, applications, storage and network). This research focuses on server-based desktop virtualization. According to the Gartner group, the main business drivers for adopting desktop virtualization are: application compatibility, business continuity, security and compliance, mobility and improved productivity [15]. Despite these business drivers, desktop virtualization has not been widely adopted. According to a survey conducted by Matrix42, only 5% of desktop computers are virtualized [37]. The research deals with the challenges preventing the wider adoption of server-based desktop virtualization while focusing on two of the main virtualization architectures: session-based desktop virtualization (SBDV) and virtual desktop infrastructure (VDI). The first chapter introduces some of the challenges faced by large organizations in their efforts to create a cost-effective and manageable desktop computing environment. The second chapter discusses the two main server-based desktop virtualization architectures (VDI and SBDV), illustrating some of the advantages and disadvantages of each. The third chapter focuses on some of the technical challenges and provides recommendations regarding server-based desktop virtualization. In the fourth chapter, measurements of SBDV utilization and performance are conducted for three different user profiles (light, heavy and multimedia). Data and results collected from the desktop assessment and the lab are used to formulate baselines and metrics for capacity planning. According to the conducted measurements, it is concluded that light and heavy profiles can be virtualized using SBDV, while multimedia profiles require additional capacity planning and resource allocation. Multimedia profiles can be virtualized with VDI using client-side rendering to avoid network bandwidth congestion.
While the research focuses on VDI and SBDV, it highlights a few points related to client access devices (CADs). CADs are one of the main components in the desktop virtualization stack (OS virtualization, session virtualization, application virtualization, connection broker, CADs, and user data and profiles). The final chapter presents conclusions and future work toward greater adoption of VDI and SBDV.
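Capacity-planning baselines of the kind derived above boil down to per-profile resource budgets and a per-host sizing rule. A minimal sketch follows; the per-session RAM and CPU figures are illustrative assumptions, not the thesis's measured baselines.

```python
from math import floor

# Hypothetical per-session resource estimates per user profile:
# RAM in MB and CPU share of one core. Not measured values.
PROFILES = {
    "light": {"ram_mb": 512, "cpu_cores": 0.10},
    "heavy": {"ram_mb": 1024, "cpu_cores": 0.25},
    "multimedia": {"ram_mb": 2048, "cpu_cores": 0.75},
}


def max_sessions(profile: str, host_ram_mb: int, host_cores: int,
                 ram_reserve_mb: int = 4096) -> int:
    """Sessions one host can carry for a given profile: the tighter of
    the RAM and CPU constraints, after reserving RAM for the host OS."""
    p = PROFILES[profile]
    by_ram = (host_ram_mb - ram_reserve_mb) // p["ram_mb"]
    by_cpu = floor(host_cores / p["cpu_cores"])
    return max(0, min(by_ram, by_cpu))
```

With numbers like these, multimedia sessions hit the CPU ceiling long before light sessions do, which mirrors the abstract's conclusion that multimedia profiles need extra capacity planning.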

    Private cloud computing platforms. Analysis and implementation in a Higher Education Institution

    The constant evolution of the Internet and its increasing use, now bound up with private and public activities and strongly affecting their survival, has given rise to an emerging technology. Through cloud computing, it is possible to abstract users from the layers below the business, focusing only on what is most important to manage, with the added advantage of being able to grow (or shrink) resources as needed. The cloud paradigm arises from the need to optimize IT resources in an emerging and rapidly expanding technology. In this regard, after a study of the most common cloud platforms and a case study of the technologies currently implemented at the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of the University of Porto, an evolution is proposed in order to address certain requirements in the context of cloud computing.
Distributions produced specifically for deploying private clouds are available today, with greatly simplified configuration. However, for a well-implemented architecture to be viable, in terms of hardware, networking, security, efficiency, and effectiveness, the required infrastructure must be considered as a whole. An in-depth multidisciplinary study of all the topics surrounding this technology is intrinsically tied to the architecture of a cloud system; otherwise the result is a deficient system. A broader view is needed, beyond the necessary equipment and the software used, one that effectively weighs implementation costs and also accounts for the specialized human resources in the various areas involved. The construction of a new data center, the result of joining the buildings of the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of the University of Porto, made it possible to share technological resources. Given the existing, fully scalable infrastructure, built around an approach of growth and virtualization, the implementation of a private cloud is considered, since the existing resources are perfectly adaptable to this emerging reality. The virtualization technology adopted, as well as the corresponding hardware (storage and processing), was designed around a XEN Server-based implementation; given the heterogeneity of the server fleet and the philosophies of the available technologies (open and proprietary), an approach distinct from the existing Microsoft-based implementation is studied.
Given the nature of the institution, and depending on the required resources and the approach taken in developing a private cloud, integration with public clouds (for example Google Apps) may be taken into account, and the possible solutions to adopt may be based on open and/or paid technologies (or both). The ultimate objective of this work is to review the technologies currently in use and to identify potential solutions that, together with the current infrastructure, can deliver a private cloud service. The work begins with a concise explanation of the cloud concept, comparing it with other forms of computing, presenting its characteristics, reviewing its history, and explaining its layers, deployment models, and architectures. Next, the state-of-the-art chapter covers the main cloud computing platforms, focusing on Microsoft Azure, Google Apps, Cloud Foundry, Delta Cloud, and OpenStack. Other emerging platforms are also covered, providing a broader view of the technological solutions currently available. After the state of the art, a particular case study is examined: the implementation of the IT scenario of the new building of the two organic units of the University of Porto, the Institute of Biomedical Sciences Abel Salazar and the Faculty of Pharmacy, and its private cloud architecture using shared resources. The case study is followed by a suggested evolution of the implementation, using cloud computing technologies to meet the necessary requirements and to integrate and streamline the existing infrastructure.

    Software platform virtualization in chemistry research and university teaching

    Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, Linux or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using virtual machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories.
    Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs.
    Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and for developing software for different operating systems. To obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We also discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide.
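The 5% to 10% figure above is a relative slowdown. A small sketch of how such a penalty can be computed from paired native and in-VM wall-clock timings follows; the workload names and times are made up for illustration, not the paper's measurements.

```python
from statistics import geometric_mean


def slowdown_pct(native_s: float, vm_s: float) -> float:
    """Per-benchmark slowdown of the VM run relative to native, in percent."""
    return (vm_s / native_s - 1.0) * 100.0


# Hypothetical wall-clock timings in seconds: (native, in-VM).
timings = {
    "docking": (120.0, 128.4),
    "spectra_search": (45.0, 47.7),
    "descriptor_calc": (300.0, 321.0),
}

per_bench = {name: slowdown_pct(n, v) for name, (n, v) in timings.items()}

# Aggregate with the geometric mean of the run-time ratios, the usual way
# to summarize relative benchmark results across heterogeneous workloads.
overall_ratio = geometric_mean(v / n for n, v in timings.values())
overall_pct = (overall_ratio - 1.0) * 100.0
```

The geometric mean is preferred over the arithmetic mean here because ratios of run times compose multiplicatively.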

    Forensicloud: An Architecture for Digital Forensic Analysis in the Cloud

    The amount of data that must be processed in current digital forensic examinations continues to rise. Both the volume and the diversity of data are obstacles to the timely completion of forensic investigations. Additionally, some law enforcement agencies do not have the resources to handle cases of even moderate size. To address these issues we have developed an architecture for a cloud-based distributed processing platform we have named Forensicloud. This architecture is designed to reduce the time taken to process digital evidence by leveraging the power of a high performance computing platform and by adapting existing tools to operate within this environment. Forensicloud's Software-as-a-Service and Infrastructure-as-a-Service service models allow investigators to use remote virtual environments for investigating digital evidence. These environments give investigators access to licensed and unlicensed tools that they may not have had before, and allow some of these tools to be run on computing clusters.

    Stop Hiding The Sharp Knives: The WebAssembly Linux Interface

    WebAssembly is gaining popularity as a portable binary format targetable from many programming languages. With a well-specified low-level virtual instruction set, minimal memory footprint and many high-performance implementations, it has been successfully adopted for lightweight in-process memory sandboxing in many contexts. Despite these advantages, WebAssembly lacks many standard system interfaces, making it difficult to reuse existing applications. This paper proposes WALI: The WebAssembly Linux Interface, a thin layer over Linux's userspace system calls, creating a new class of virtualization where WebAssembly seamlessly interacts with native processes and the underlying operating system. By virtualizing the lowest level of userspace, WALI offers application portability with little effort and reuses existing compiler backends. With WebAssembly's control flow integrity guarantees, these modules gain an additional level of protection against remote code injection attacks. Furthermore, capability-based APIs can themselves be virtualized and implemented in terms of WALI, improving reuse and robustness through better layering. We present an implementation of WALI in a modern WebAssembly engine and evaluate its performance on a number of applications which we can now compile with mostly trivial effort.

    Digital Forensic Acquisition of Virtual Private Servers Hosted in Cloud Providers that Use KVM as a Hypervisor

    Kernel-based Virtual Machine (KVM) is one of the most popular hypervisors used by cloud providers to offer virtual private servers (VPSs) to their customers. A VPS is just a virtual machine (VM) hired and controlled by a customer but hosted in the cloud provider's infrastructure. In spite of the fact that the usage of VPSs is popular and will continue to grow in the future, it is rare to find technical publications in the digital forensic field related to the acquisition process of a VPS involved in a crime. For this research, four VMs were created in a KVM virtualization node, simulating four independent VPSs and running different operating systems and applications. The utilities virsh and tcpdump were used at the hypervisor level to collect digital data from the four VPSs. The utility virsh was employed to take snapshots of the hard drives and to create images of the RAM content, while the utility tcpdump was employed to capture the network traffic in real time. The results generated by these utilities were evaluated in terms of efficiency, integrity, and completeness. The analysis of these results suggested both utilities were capable of collecting digital data from the VPSs in an efficient manner, respecting the integrity and completeness of the data acquired. Therefore, these tools can be used to acquire forensically-sound digital evidence from a VPS hosted in a cloud provider's virtualization node that uses KVM as a hypervisor.
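The acquisition workflow above (disk snapshots and memory images via virsh, live traffic via tcpdump) can be sketched as a thin wrapper over those CLIs. The exact virsh subcommands the study used are not specified beyond snapshotting and memory imaging, so this sketch assumes `snapshot-create-as` and `dump --memory-only`; the domain names, interface, and `/evidence` paths are illustrative.

```python
import subprocess  # only invoked when dry_run=False
from typing import List


def disk_snapshot_cmd(domain: str, name: str) -> List[str]:
    """virsh command to snapshot a guest's disks."""
    return ["virsh", "snapshot-create-as", domain, name]


def memory_image_cmd(domain: str, out_file: str) -> List[str]:
    """virsh command to dump only the guest's RAM to a file."""
    return ["virsh", "dump", "--memory-only", domain, out_file]


def capture_traffic_cmd(interface: str, out_pcap: str) -> List[str]:
    """tcpdump command to record the VPS's traffic to a pcap file."""
    return ["tcpdump", "-i", interface, "-w", out_pcap]


def acquire(domain: str, interface: str, dry_run: bool = True) -> List[List[str]]:
    """Assemble (and optionally run) the acquisition commands for one VPS."""
    cmds = [
        disk_snapshot_cmd(domain, f"{domain}-evidence"),
        memory_image_cmd(domain, f"/evidence/{domain}.mem"),
        capture_traffic_cmd(interface, f"/evidence/{domain}.pcap"),
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

In a real acquisition the traffic capture would run continuously in the background rather than as a blocking step, and the output files would be hashed immediately to preserve integrity.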

    Analysis of Memory Management in Hypervisors and Virtual Machines

    The goal of this thesis is to test memory optimization and reclamation tools in the most widely used hypervisors: VMware ESXi, Microsoft Hyper-V, KVM, and Xen. The aim is to measure how much memory can be reclaimed and optimized by the different memory management algorithms across the hypervisors mentioned above. The dedicated monitoring tools Zabbix and collectd gather the data to be analyzed. As a result, Hyper-V proves the most effective, with ESXi second and KVM falling somewhat behind in third place. Xen failed to meet specific criteria (automated memory optimization), which made it impractical to include in the testing process.
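Reclaimed memory in measurements like these comes from the hypervisor's balloon statistics. On KVM, for instance, `virsh dommemstat <domain>` prints one `key value` pair per line (values in KiB), including the balloon's current target (`actual`) and the guest's `unused` memory. The parser below assumes that output format; treating unused-within-balloon as "still reclaimable" is one simple interpretation, not the thesis's metric.

```python
def parse_dommemstat(output: str) -> dict:
    """Parse `virsh dommemstat <domain>` output: one `key value` pair
    per line, values in KiB, into a dict of integers."""
    stats = {}
    for line in output.strip().splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats


def reclaimable_kib(stats: dict) -> int:
    """A simple estimate of memory the balloon could still reclaim:
    the guest's unused pages within the current balloon target."""
    return min(stats.get("unused", 0), stats.get("actual", 0))


# Illustrative sample of dommemstat output for a KVM guest.
sample = """actual 2097152
unused 1048576
available 2016180
rss 1572864"""
```

Sampling these counters periodically (e.g. via a Zabbix or collectd exec plugin) before and after applying a memory pressure workload gives the reclaimed-versus-optimized figures the comparison relies on.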