391 research outputs found

    Persistent Memory Programming Abstractions in Context of Concurrent Applications

    The advent of non-volatile memory (NVM) technologies such as PCM, STT-RAM, memristors, and FeRAM is expected to enhance system performance by collapsing the traditional memory hierarchy and narrowing the gap between memory and storage. These technologies are considered to offer DRAM-like performance with disk-like persistence. They would thus also provide significant performance benefits for big-data applications by allowing in-memory processing of large data with the lowest latency to persistence. Leveraging the performance benefits of this memory-centric computing technology through traditional memory programming is not trivial, and the challenges are aggravated for parallel/concurrent applications. To this end, several programming abstractions have been proposed, such as NVthreads, Mnemosyne, and Intel's NVML. However, deciding on a programming abstraction that is easy to program with, ensures consistency, and balances the various software and architectural trade-offs remains an open question and an active area of research for the NVM community. We study the NVthreads, Mnemosyne, and NVML libraries by building a concurrent and persistent set and an open-addressed hash-table data structure application. In this process, we explore and report the various trade-offs and hidden costs involved in building concurrent applications for persistence, in terms of achieving efficiency, consistency, and ease of programming with these NVM programming abstractions. Finally, we evaluate the performance of the set and hash-table data structure applications. We observe that NVML is the easiest to program with but the least efficient, while Mnemosyne is the most performance-friendly but requires significant programming effort to build concurrent and persistent applications. Comment: Accepted in HiPC SRS 201
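    The core difficulty the abstract describes — combining concurrency control with crash-consistent update ordering in an open-addressed hash table — can be illustrated with a minimal sketch. This is a hypothetical Python model, not the NVML/Mnemosyne API: the `_persist` calls only mark the points where a real NVM library would flush cache lines and fence (or commit a transaction) so that a crash never exposes a half-written entry.

    ```python
    import threading

    class PersistentishHashTable:
        """Illustrative open-addressed hash table (linear probing).

        Hypothetical sketch: ``_persist`` stands in for the flush/fence or
        transactional commit a real NVM library (e.g. NVML's libpmemobj or
        Mnemosyne) would perform at these ordering points.
        """
        EMPTY = object()  # sentinel for an unoccupied slot

        def __init__(self, capacity=16):
            self.keys = [self.EMPTY] * capacity
            self.vals = [None] * capacity
            self.capacity = capacity
            self.lock = threading.Lock()  # coarse-grained concurrency control

        def _persist(self, region):
            # On real NVM this would be a cache-line flush plus fence.
            pass

        def insert(self, key, value):
            with self.lock:
                i = hash(key) % self.capacity
                for _ in range(self.capacity):
                    if self.keys[i] is self.EMPTY or self.keys[i] == key:
                        # Ordering matters for crash consistency: persist the
                        # value first, then publish the key. A crash in between
                        # leaves the slot logically empty, never torn.
                        self.vals[i] = value
                        self._persist("value")
                        self.keys[i] = key
                        self._persist("key")
                        return True
                    i = (i + 1) % self.capacity
                return False  # table full

        def get(self, key):
            with self.lock:
                i = hash(key) % self.capacity
                for _ in range(self.capacity):
                    if self.keys[i] is self.EMPTY:
                        return None
                    if self.keys[i] == key:
                        return self.vals[i]
                    i = (i + 1) % self.capacity
                return None
    ```

    Even this toy version shows the trade-off the paper measures: the write ordering and (in a real library) the flushes sit on the insert path, so persistence guarantees cost throughput, and the choice between a coarse lock here and finer-grained schemes is exactly the concurrency/effort balance the abstractions differ on.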

    Exploring Views on Data Centre Power Consumption and Server Virtualization

    The primary purpose of this thesis is to explore views on Green IT/Computing and how it relates to Server Virtualization, in particular for Data Centre IT environments. Our secondary purpose is to explore other important aspects of Server Virtualization in the same context. The primary research question was whether Data Centre (DC) power consumption reduction is related to, or perceived as, a success factor for implementing and deploying server virtualization for consolidation purposes, and if not, which other decision areas affect Server Virtualization and power consumption reduction, respectively. We conclude that opinions differ on how to factor in power consumption reduction from server equipment, among both promoters and deployers. It was, however, a common view that power consumption reduction was usually achieved, but not necessarily considered, and thus not evaluated, as a success factor; nor was actual power consumption measured or monitored after server virtualization deployment. We found that other factors seemed more important, such as lower cost through higher physical machine utilization, and simplified high availability and disaster recovery capabilities.

    Ironman: Open Source Containers and Virtualization in bare metal

    Master's project report, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2021. Computer virtualization has become prevalent over the years for both business and personal use. It allows new machines to be hosted on otherwise unused computational resources, running as independent computers. Apart from traditional virtual machines, a more recent form of virtualization is explored in this project: containers, more specifically Linux Containers. While multiple virtualization tools are available, some require a premium payment and others do not support container virtualization. For this project, LXD, an open-source virtual instance manager, is used to manage both virtual machines and containers. For added service availability, clustering support is also developed. Clustering enables multiple physical computers to host virtual instances as if they were a single machine. Coupled with the Ceph storage back end, it allows data to be replicated across all computers in the same cluster, enabling instance recovery when a computer in the cluster is faulty. The infrastructure deployment tool Puppet is used to automate the installation and configuration of an LXD virtualization system for both clustered and non-clustered environments. This allows simple and automatic physical host configuration, limiting the required user input and thus decreasing the chance of system misconfiguration. LXD was tested in both environments and ultimately considered an effective virtualization tool which, when configured accordingly, can be used in a production environment.
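    The automated, non-interactive configuration the abstract describes is typically driven by LXD's preseed mechanism (`lxd init --preseed` reads a YAML document instead of prompting). The fragment below is a hedged sketch of what a tool like Puppet might feed to one cluster node; the pool name, server name, and address are placeholders, not values from the project.

    ```yaml
    # Hypothetical preseed fragment for `lxd init --preseed`.
    # server_name, addresses, and the Ceph pool name are illustrative.
    config:
      core.https_address: 192.0.2.10:8443
    storage_pools:
      - name: remote
        driver: ceph          # Ceph RBD back end, replicated across the cluster
        config:
          source: lxd-pool    # existing Ceph OSD pool (placeholder name)
    cluster:
      server_name: node1
      enabled: true
    ```

    Driving this file through configuration management rather than running `lxd init` interactively is what keeps every physical host identically configured and reduces the misconfiguration risk the abstract mentions.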

    Virtualization for computational scientists

    International audience