
    Diskless Image Management (DIM) for Cluster Administration

    Large computing systems have large administration needs. But just as technologies have evolved to take advantage of certain parallelisms of large-scale computing, the administration of these technologies must evolve to take advantage of the associated operational efficiencies. Using a straightforward push technology that scales to thousands of blades, Diskless Image Management (DIM) allows system administrators to boot, patch, or modify one, several, or all distributed images in minutes from a single management console. DIM was prototyped on the MareNostrum cluster with 2406 blades, but is scalable to 7000 blades. Built on IBM JS20 blade technology, MareNostrum consists of 172 BladeCenters.
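    The abstract does not describe DIM's internal mechanism, so the following is only a hedged sketch of the push model it names: distributing an image update from one management console to many blades in parallel. The host names, paths, and the use of rsync over ssh are illustrative assumptions, not details of DIM itself.

        # Illustrative sketch of a push-style image update to many blades in
        # parallel from a single console. rsync/ssh, host names, and paths are
        # assumptions for this example; DIM's actual transport is not stated.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        BLADES = [f"blade{i:04d}" for i in range(1, 2407)]   # MareNostrum-scale node list
        IMAGE_DIR = "/srv/images/compute-root/"              # hypothetical master image tree

        def push_image(host):
            # Mirror the master image into the blade's staging area.
            cmd = ["rsync", "-a", "--delete", IMAGE_DIR, f"root@{host}:/staging/root/"]
            return host, subprocess.run(cmd, capture_output=True).returncode

        with ThreadPoolExecutor(max_workers=64) as pool:
            for host, rc in pool.map(push_image, BLADES):
                if rc != 0:
                    print(f"push to {host} failed (rc={rc})")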

    A Server Consolidation Solution

    Advances in server architecture have given corporations the ability to strategically redesign their data centers and realign the system infrastructure with business needs. Physically and logically consolidating servers onto fewer and smaller hardware platforms can reduce data center overhead costs while improving quality of service. To take advantage of this architectural opportunity, a server consolidation project was proposed that used blade technology coupled with server virtualization. Physical consolidation reduced the data center facility requirements, while server virtualization reduced the number of required hardware platforms. With the constant threat of outsourcing, coupled with the explosive growth of the organization, the IT managers were challenged to provide increased system services and functionality to a larger user community while maintaining the same head count. One means of reducing overhead costs associated with the in-house data center was to reduce the required facility and hardware resources. The smaller data center footprint required less real estate, electricity, fire suppression infrastructure, and HVAC capacity. In addition, since the numerous stand-alone servers were consolidated onto a standard platform, system administration became more responsive to business opportunities.

    Implementation of the MR tractography visualization kit based on the anisotropic Allen-Cahn equation

    Magnetic Resonance Diffusion Tensor Imaging (MR–DTI) is a noninvasive in vivo method capable of examining the structure of the human brain, providing information about the position and orientation of the neural tracts. After a short introduction to the principles of MR–DTI, this paper describes the steps of the proposed neural tract visualization technique based on the DTI data. The cornerstone of the algorithm is a texture diffusion procedure modeled mathematically by the Allen–Cahn equation with diffusion anisotropy controlled by a tensor field. Focus is put on the issues of the numerical solution of the given problem, using the finite volume method for spatial domain discretization. Several numerical schemes are compared with the aim of reducing the artificial (numerical) isotropic diffusion. The remaining steps of the algorithm are commented on as well, including the acquisition of the tensor field before the actual computation begins and the postprocessing used to obtain the final images. Finally, the visualization results are presented.
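    For reference, a commonly used form of the anisotropic Allen–Cahn equation with a tensor-controlled diffusion term is sketched below; the exact scaling, forcing term, and boundary conditions used by the authors may differ from this standard formulation.

        \xi \,\frac{\partial p}{\partial t}
            = \xi \,\nabla \cdot \bigl( D \,\nabla p \bigr)
              + \frac{1}{\xi}\, f_0(p),
        \qquad
        f_0(p) = p\,(1 - p)\,\Bigl(p - \tfrac{1}{2}\Bigr)

    Here D is the diffusion tensor obtained from the DTI measurement, the parameter \xi > 0 controls the diffuse-interface width, and homogeneous Neumann boundary conditions are typically imposed on the image domain.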

    Implementing a Parallel Matrix Factorization Library on the Cell Broadband Engine


    A Framework for Virtual Device Driver Development and Virtual Device-Based Performance Modeling

    Operating system virtualization tools such as VMware, Xen, and Linux KVM export only minimally capable SVGA graphics adapters. This paper describes the design and implementation of a system that virtualizes high-performance graphics cards of arbitrary design to support the construction of authentic device drivers. Drivers written for the virtual cards can be used verbatim, without special function calls or kernel modifications, as drivers for real cards, should real cards of the same design exist. While this allows for arbitrary design, it does not model performance characteristics. We describe a new kernel system that allows the performance of a device to be changed arbitrarily. These virtual performance throttles (VPTs) use the framework provided by the virtual device architecture and a simple linear service model of a physical drive to simulate the relative performance characteristics of the physical disk. Applications of the system include instruction in device driver and disk scheduler design, allowing device driver design to proceed in parallel with new hardware development, and taking relative performance measurements without needing access to the physical device being modeled.
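    The abstract does not give the parameters of the linear service model, so the following is only a minimal sketch, assuming service time grows linearly with seek distance and request size; the coefficient names and values are hypothetical.

        # Hypothetical linear service-time model for a disk: a fixed overhead
        # plus terms proportional to seek distance and request size. A throttle
        # could delay request completion by this amount to mimic a real drive.
        class LinearDiskModel:
            def __init__(self, seek_cost=1e-5, transfer_cost=2e-8, overhead=5e-4):
                self.seek_cost = seek_cost          # seconds per track of seek distance
                self.transfer_cost = transfer_cost  # seconds per byte transferred
                self.overhead = overhead            # fixed controller/rotational cost (s)
                self.head = 0                       # current track of the simulated head

            def service_time(self, track, size_bytes):
                """Time to service one request; the head moves to the target track."""
                t = (self.overhead
                     + abs(track - self.head) * self.seek_cost
                     + size_bytes * self.transfer_cost)
                self.head = track
                return t

        model = LinearDiskModel()
        print(model.service_time(track=1200, size_bytes=64 * 1024))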

    Exporting IP flows using IPFIX

    Today's computer networks are continuously expanding in both size and capacity to accommodate the demands of the traffic they are designed to handle. Depending on the needs of the network operator, different aspects of this traffic need to be measured and analyzed. Processing the full amount of data on the network would be a daunting task, so instead only certain statistics describing the individual packets are collected. This data is then aggregated into "flows", based on criteria from the network operator. IPFIX is a recent IETF effort to standardize a protocol for exporting such flows to a central node for analysis. But to effectively utilize a system implementing this protocol, one needs to know the impact of the protocol itself on the underlying network and, consequently, on the traffic that flows through it. This document explores the performance, capabilities, and limitations of the IPFIX protocol. A packet-capture system utilizing the IPFIX protocol for flow export will be set up in a controlled environment, and traffic will be generated in a predictable manner. Measurements indicate that IPFIX is a fairly flexible protocol for exporting various traffic characteristics, but that it also has scalability issues when deployed in larger, high-capacity networks. (Master's thesis in network and system administration.)
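    As a hedged illustration of the aggregation step described above (not of the IPFIX wire format itself, which additionally involves templates, timeouts, and a transport session to a collector), the sketch below groups per-packet records into flows keyed by the usual 5-tuple; the field names are illustrative.

        # Minimal sketch: aggregate per-packet records into flow statistics
        # keyed by the 5-tuple (src, dst, sport, dport, proto). Field names
        # are illustrative; real IPFIX export adds templates and timeouts.
        from collections import defaultdict

        def aggregate_flows(packets):
            flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
            for p in packets:
                key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
                flows[key]["packets"] += 1
                flows[key]["bytes"] += p["bytes"]
            return flows

        packets = [
            {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 80, "proto": 6, "bytes": 1500},
            {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 80, "proto": 6, "bytes": 400},
        ]
        print(aggregate_flows(packets))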

    The IST Cluster: an integrated infrastructure for parallel applications in Physics and Engineering

    The infrastructure to support advanced computing applications at Instituto Superior Técnico is presented, including a detailed description of the hardware, system software, and benchmarks, which show an HPL performance of 1.6 Tflops. Due to its decentralized administrative basis, a discussion of the usage policy and administration is also given. The in-house codes running in production are also presented. (Web of Science accession number WOS:000283531600008.)