Diskless Image Management (DIM) for Cluster Administration
Large computing systems have large administration needs. Just as technologies have evolved to exploit the parallelism of large-scale computing, administering these technologies must evolve to exploit the associated operational efficiencies.
Using a straightforward push technology, and scalable to thousands of blades, Diskless Image Management (DIM) allows system administrators to boot, patch, or modify one, several or all distributed images in minutes from a single management console.
DIM was prototyped on the MareNostrum cluster with 2406 blades and is designed to scale to 7000 blades. Built on IBM JS20 blade technology, MareNostrum consists of 172 BladeCenters.
A Server Consolidation Solution
Advances in server architecture have enabled corporations to strategically redesign their data centers and realign the system infrastructure with business needs. Physically and logically consolidating servers onto fewer and smaller hardware platforms can reduce data center overhead costs while adding quality of service. To take advantage of this architectural opportunity, a server consolidation project was proposed that combined blade technology with server virtualization. Physical consolidation reduced the data center facility requirements, while server virtualization reduced the number of required hardware platforms. With the constant threat of outsourcing, coupled with the explosive growth of the organization, the IT managers were challenged to provide increased system services and functionality to a larger user community while maintaining the same head count. One means of reducing the overhead costs of the in-house data center was to reduce the required facility and hardware resources. The smaller data center footprint required less real estate, electricity, fire suppression infrastructure, and HVAC utilities. In addition, since the numerous stand-alone servers were consolidated onto a standard platform, system administration became more agile in responding to business opportunities.
Implementation of the MR tractography visualization kit based on the anisotropic Allen-Cahn equation
Magnetic Resonance Diffusion Tensor Imaging (MR–DTI) is a noninvasive in vivo method capable of examining the structure of the human brain, providing information about the position and orientation of the neural tracts. After a short introduction to the principles of MR–DTI, this paper describes the steps of the proposed neural tract visualization technique based on the DTI data. The cornerstone of the algorithm is a texture diffusion procedure modeled mathematically by the problem for the Allen–Cahn equation with diffusion anisotropy controlled by a tensor field. Focus is put on the issues of the numerical solution of the given problem, using the finite volume method for spatial domain discretization. Several numerical schemes are compared with the aim of reducing the artificial (numerical) isotropic diffusion. The remaining steps of the algorithm are commented on as well, including the acquisition of the tensor field before the actual computation begins and the postprocessing used to obtain the final images. Finally, the visualization results are presented.
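The texture diffusion step can be written down explicitly. One common phase-field form of the Allen–Cahn equation with anisotropic diffusion is sketched below; the exact formulation, symbols, and scaling used in the paper may differ:

\xi \frac{\partial p}{\partial t} = \xi\, \nabla \cdot \left( D \nabla p \right) + \frac{1}{\xi}\, f_0(p), \qquad f_0(p) = p\,(1-p)\left(p - \tfrac{1}{2}\right)

Here p is the diffused texture intensity, D is the diffusion tensor field obtained from the MR–DTI data (so diffusion is fastest along the principal fiber direction), and ξ > 0 controls the width of the diffuse interface.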
A Framework for Virtual Device Driver Development and Virtual Device-Based Performance Modeling
Operating system virtualization tools such as VMware, Xen, and Linux KVM export only minimally capable SVGA graphics adapters. This paper describes the design and implementation of a system that virtualizes high-performance graphics cards of arbitrary design to support the construction of authentic device drivers. Drivers written for the virtual cards can be used verbatim, without special function calls or kernel modifications, as drivers for real cards, should real cards of the same design exist. While this allows for arbitrary design, it is not able to model performance characteristics. We describe a new kernel system that allows the performance of a device to be changed arbitrarily. These virtual performance throttles (VPTs) use the framework provided by the virtual device architecture and a simple linear service model of a physical disk to simulate the relative performance characteristics of that disk. Applications of the system include instruction in device driver and disk scheduler design, allowing device driver design to proceed in parallel with new hardware development, and relative performance measurements without needing access to the physical device being modeled.
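A linear service model of a disk, like the one the VPTs rely on, can be sketched in a few lines. The constants and function names below are illustrative assumptions, not taken from the paper:

```python
# Sketch of a linear disk service-time model: a fixed per-request
# overhead (seek + rotational latency) plus transfer time proportional
# to request size. All parameter values are assumed for illustration.
FIXED_OVERHEAD_S = 0.004      # assumed seek + rotational latency, seconds
BYTES_PER_SECOND = 80e6       # assumed sustained transfer rate

def service_time(num_bytes: int) -> float:
    """Linear model: constant overhead plus size / bandwidth."""
    return FIXED_OVERHEAD_S + num_bytes / BYTES_PER_SECOND

# A throttle could delay the completion of each virtual-disk request by
# service_time(size), making the virtual device mimic the modeled disk.
delay = service_time(64 * 1024)
```

Because the model is linear, changing the two constants is enough to make the same virtual device mimic a faster or slower disk, which is what makes relative performance experiments possible without the real hardware.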
On-chip micro-evaporation: Experimental evaluation of liquid pumping and vapor compression cooling systems
This paper was presented at the 3rd Micro and Nano Flows Conference (MNF2011), held at the Makedonia Palace Hotel, Thessaloniki, Greece. The conference was organised by Brunel University and supported by the Italian Union of Thermofluiddynamics, Aristotle University of Thessaloniki, University of Thessaly, IPEM, the Process Intensification Network, the Institution of Mechanical Engineers, the Heat Transfer Society, HEXAG (the Heat Exchange Action Group), and the Energy Institute.
Thermal designers of data centers and server manufacturers are greatly concerned about cooling new-generation data centers, which are more compact and dissipate more power than conventional air-conditioning systems can currently handle. With very large data centers exceeding 100 000 servers, some consume more than 50 MW [1] of electrical energy to operate, energy which is directly converted to heat and then simply wasted as it is dissipated into the atmosphere. A potentially much better solution would be on-chip two-phase cooling [2], which, besides improving the cooling performance at the chip level, also adds the capability to reuse the waste heat in a convenient manner, since higher evaporating and condensing temperatures of the two-phase cooling system (from 60-95°C) are possible with such a new green cooling technology. In the present project, two such two-phase cooling cycles using micro-evaporation technology were experimentally evaluated, with specific attention paid to energy consumption, overall exergetic efficiency, and controllability. The main difference between the two cooling cycles is the driver: both a mini-compressor and a gear pump were considered. The former is more attractive for energy recovery, since its exergy potential is higher and the waste heat is exported at a higher temperature for reuse.
This study is supported by the Swiss Commission for Technology and Innovation (CTI), contract number 6862.2; the LTCM laboratory; IBM Zürich Research Laboratory (Switzerland); and Embraco (Brazil).
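The claim that exporting waste heat at a higher temperature increases its reuse value follows from the standard expression for the exergy of a heat flow Q available at temperature T in an environment at temperature T_0:

\mathrm{Ex} = Q \left( 1 - \frac{T_0}{T} \right)

For example, with T_0 = 298 K, heat rejected at 368 K (95 °C) carries roughly 19% of its energy as exergy, versus roughly 10% at 333 K (60 °C), which is why the compressor-driven cycle's higher condensing temperature favors energy recovery.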
Exporting IP flows using IPFIX
Today's computer networks are continuously expanding in both size and capacity to accommodate the demands of the traffic they are designed to handle. Depending on the needs of the network operator, different aspects of this traffic need to be measured and analyzed. Processing the full volume of data on the network would be a daunting task, so only certain statistics describing the individual packets are collected. This data is then aggregated into "flows", based on criteria set by the network operator. IPFIX is a recent IETF effort to standardize a protocol for exporting such flows to a central node for analysis. But to utilize a system implementing this protocol effectively, one needs to know the impact of the protocol itself on the underlying network, and consequently on the traffic that flows through it.
This document explores the performance, capabilities, and limitations of the IPFIX protocol. A packet-capture system using the IPFIX protocol for flow export is set up in a controlled environment, and traffic is generated in a predictable manner. Measurements indicate that IPFIX is a fairly flexible protocol for exporting various traffic characteristics, but that it also has scalability issues when deployed in larger, high-capacity networks.
Master's thesis in Network and System Administration.
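The aggregation of packets into flows that the abstract describes can be sketched as follows. The field names and 5-tuple key are the conventional choice and are assumed here for illustration; a real IPFIX exporter additionally handles templates, flow timeouts, and the export transport:

```python
from collections import defaultdict
from typing import NamedTuple

class Packet(NamedTuple):
    """Per-packet statistics, as collected by the metering process."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: int
    length: int

def aggregate_flows(packets):
    """Group packets by the classic 5-tuple and accumulate the per-flow
    counters an exporter would later send to the collector."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p.src_ip, p.dst_ip, p.src_port, p.dst_port, p.proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p.length
    return dict(flows)

pkts = [
    Packet("10.0.0.1", "10.0.0.2", 1234, 80, 6, 1500),
    Packet("10.0.0.1", "10.0.0.2", 1234, 80, 6, 400),
    Packet("10.0.0.2", "10.0.0.1", 80, 1234, 6, 60),
]
flows = aggregate_flows(pkts)
```

The operator's aggregation criteria amount to the choice of key: a coarser key (e.g. source subnet only) trades detail for fewer exported flow records, which is exactly the scalability knob at issue in high-capacity networks.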
The IST Cluster: an integrated infrastructure for parallel applications in Physics and Engineering
WOS:000283531600008 (Web of Science accession number). The infrastructure to support advanced computing applications at Instituto Superior Técnico is presented, including a detailed description of the hardware, system software, and benchmarks, which show an HPL performance of 1.6 Tflops. Because its administration is decentralized, a discussion of the usage policy and administration is also given. The in-house codes running in production are also presented.