20 research outputs found

    Single system image: A survey

    Single system image is a computing paradigm in which a number of distributed computing resources are aggregated and presented through an interface that maintains the illusion of interaction with a single system. This approach encompasses decades of research using a broad variety of techniques at varying levels of abstraction, from custom hardware and distributed hypervisors to specialized operating system kernels and user-level tools. Existing classification schemes for SSI technologies are reviewed, and an updated classification scheme is proposed. A survey of implementation techniques is provided along with relevant examples. Notable deployments are examined and insights gained from hands-on experience are summarized. Issues affecting the adoption of kernel-level SSI are identified and discussed in the context of the technology adoption literature.

    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating-system or user-level implementations of such functionality, for multiple reasons. First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems, namely memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because the hypervisor level offers greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality provided by the network virtualization service, a multimedia virtualization service that allows efficient media device sharing based on semantic information, and an object-based storage service with enhanced access control.
    Ph.D. Committee Chair: Schwan, Karsten; Committee Members: Ahamad, Mustaq; Fujimoto, Richard; Gavrilovska, Ada; Owen, Henry; Xenidis, Jim
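
    As a rough illustration of the "virtualization service" idea described above, the Python sketch below stacks a hypothetical data-obfuscation layer and a trust-based access gate on top of a plain virtual block device. The class and method names are invented for this sketch; the dissertation's actual services run at the hypervisor level, not in Python.

        # Illustrative sketch only. The layering idea is the point: each service
        # wraps a device and adds one piece of functionality. All names here are
        # hypothetical, not the author's API.

        class VirtualBlockDevice:
            """Plain virtual device backed by an in-memory block map."""
            def __init__(self):
                self._blocks = {}

            def read(self, lba):
                return self._blocks.get(lba, b"\x00" * 512)

            def write(self, lba, data):
                self._blocks[lba] = data


        class ObfuscationService:
            """Wraps a device and XOR-masks data so the backing store never holds plaintext."""
            def __init__(self, device, key=0x5A):
                self._device, self._key = device, key

            def read(self, lba):
                return bytes(b ^ self._key for b in self._device.read(lba))

            def write(self, lba, data):
                self._device.write(lba, bytes(b ^ self._key for b in data))


        class TrustGate:
            """Allows access only to guest VMs whose trust label is high enough."""
            def __init__(self, device, required_trust):
                self._device, self._required = device, required_trust

            def read(self, guest_trust, lba):
                if guest_trust < self._required:
                    raise PermissionError("guest VM not trusted for this device")
                return self._device.read(lba)


        backing = VirtualBlockDevice()
        storage = ObfuscationService(backing)
        gated = TrustGate(storage, required_trust=2)

        storage.write(0, b"secret")              # data lands obfuscated in the backing store
        print(backing.read(0))                   # raw bytes are masked
        print(gated.read(guest_trust=2, lba=0))  # a sufficiently trusted guest sees plaintext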

    The Fifth Workshop on HPC Best Practices: File Systems and Archives

    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department of Energy (DOE) Office of Science and the DOE National Nuclear Security Administration. The workshop gathered technical and management experts in the operation of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities, and documented their findings for the DOE and HPC community in this report.

    Providing support to uncovering I/O usage in HPC platforms

    High-Performance Computing (HPC) platforms are required to solve the most diverse large-scale scientific problems in various research areas, such as biology, chemistry, physics, and health sciences. Researchers use a multitude of scientific software, each with different requirements. These include input and output operations, which directly impact performance because of the existing difference between processing and data access speeds. Thus, supercomputers must efficiently handle a mixed workload when storing data from the applications. Understanding the set of applications and their performance when running on a supercomputer is paramount to understanding the storage system's usage, pinpointing possible bottlenecks, and guiding optimization techniques. This research proposes a methodology and visualization tool to evaluate the performance of a supercomputer's data storage infrastructure, taking into account the diverse workload and demands of the system over a long period of operation. We used the Santos Dumont supercomputer as a case study. With our methodology's help, we identified inefficient usage and problematic performance factors, such as: (I) the system received an enormous amount of inefficient read operations, below 100 KiB, for 75% of the time; (II) imbalance among storage resources, where the overload can reach 3× the average load; and (III) high demand for metadata operations, accounting for 60% of all file system operations. We also provide some guidelines on how to tackle those issues.
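
    A minimal sketch of the kind of aggregation such a methodology performs is shown below. The per-operation trace format is hypothetical (a real deployment would parse the storage system's own logs), but the three metrics mirror the findings above: the share of small reads, the imbalance across storage targets, and the share of metadata operations.

        # Hypothetical trace format: one dict per I/O operation.
        from statistics import mean

        SMALL_READ = 100 * 1024  # 100 KiB threshold used in the findings above

        def summarize(trace):
            reads = [op for op in trace if op["kind"] == "read"]
            meta  = [op for op in trace if op["kind"] in ("open", "stat", "close")]
            small_read_share = sum(op["bytes"] < SMALL_READ for op in reads) / len(reads)

            # Imbalance: how much the busiest storage target exceeds the average load.
            load = {}
            for op in trace:
                load[op["target"]] = load.get(op["target"], 0) + op.get("bytes", 0)
            imbalance = max(load.values()) / mean(load.values())

            meta_share = len(meta) / len(trace)
            return small_read_share, imbalance, meta_share

        trace = [
            {"kind": "read",  "bytes": 4096,    "target": "ost0"},
            {"kind": "read",  "bytes": 8 << 20, "target": "ost1"},
            {"kind": "open",  "bytes": 0,       "target": "mds0"},
            {"kind": "stat",  "bytes": 0,       "target": "mds0"},
            {"kind": "write", "bytes": 1 << 20, "target": "ost0"},
        ]
        print(summarize(trace))  # roughly (0.5, 2.7, 0.4) for this toy trace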

    Advanced Simulation and Computing FY12-13 Implementation Plan, Volume 2, Revision 0.5


    Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1


    Understanding and Optimizing Flash-based Key-value Systems in Data Centers

    Flash-based key-value systems are widely deployed in today's data centers for providing high-speed data processing services. These systems deploy flash-friendly data structures, such as slabs and the Log-Structured Merge (LSM) tree, on flash-based Solid State Drives (SSDs) and provide efficient solutions in caching and storage scenarios. With the rapid evolution of data centers, plenty of challenges and opportunities for future optimization appear. In this dissertation, we focus on understanding and optimizing flash-based key-value systems from the perspective of workloads, software, and hardware as data centers evolve. We first propose an online compression scheme, called SlimCache, which considers the unique characteristics of key-value workloads to virtually enlarge the cache space, increase the hit ratio, and improve cache performance. Furthermore, to appropriately configure increasingly complex modern key-value data systems, which can have more than 50 parameters along with additional hardware and system settings, we quantitatively study and compare five multi-objective optimization methods for auto-tuning the performance of an LSM-tree-based key-value store in terms of throughput, 99th-percentile tail latency, convergence time, real-time system throughput, and the iteration process. Last but not least, we conduct an in-depth, comprehensive measurement study of flash-optimized key-value stores on recently emerging 3D XPoint SSDs. We reveal several unexpected bottlenecks in current key-value store designs and present three exemplary case studies to showcase the efficacy of removing these bottlenecks with simple methods on 3D XPoint SSDs. Our experimental results show that our proposed solutions significantly outperform traditional methods. Our study also contributes system implications for auto-tuning key-value systems on flash-based SSDs and optimizing them on revolutionary 3D XPoint-based SSDs.
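
    To make the auto-tuning setting concrete, the sketch below runs an exhaustive multi-objective search over a tiny, invented LSM-tree parameter space and keeps the Pareto front over (throughput, p99 latency). The parameter names and the evaluate() cost model are placeholders; the dissertation benchmarks real key-value stores and compares five optimization methods rather than brute force.

        # Toy multi-objective configuration search; everything below is illustrative.
        import itertools, random

        SPACE = {
            "memtable_mb":    [64, 128, 256],
            "compaction":     ["level", "tiered"],
            "block_cache_mb": [256, 1024, 4096],
        }

        def evaluate(cfg):
            """Pretend benchmark: returns (throughput_kops, p99_latency_ms)."""
            random.seed(str(sorted(cfg.items())))  # deterministic per configuration
            thr = cfg["memtable_mb"] / 8 + cfg["block_cache_mb"] / 64 + random.uniform(0, 20)
            lat = 40 - cfg["block_cache_mb"] / 256 + (15 if cfg["compaction"] == "tiered" else 0)
            return thr, max(lat, 1.0)

        def dominates(a, b):
            """a dominates b if it is at least as good on both objectives and better on one."""
            return a[0] >= b[0] and a[1] <= b[1] and a != b

        configs = [dict(zip(SPACE, vals)) for vals in itertools.product(*SPACE.values())]
        scored  = [(cfg, evaluate(cfg)) for cfg in configs]
        pareto  = [(c, s) for c, s in scored
                   if not any(dominates(other, s) for _, other in scored)]

        for cfg, (thr, lat) in pareto:
            print(f"{cfg} -> {thr:.1f} kops/s, p99 {lat:.1f} ms")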

    Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5
