Recent development and perspectives of machines for lattice QCD
I highlight recent progress in cluster computer technology and assess the status
and prospects of cluster computers for lattice QCD with respect to the
development of QCDOC and apeNEXT. Taking the LatFor test case, I specify a
512-processor QCD cluster at better than 1 $/Mflops.
Comment: 14 pages, 17 figures, Lattice2003 (plenary)
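As a quick illustration of the price/performance figure quoted above, the following minimal Python sketch computes $/Mflops from a cluster price and a sustained per-node performance. The numbers are invented for illustration and are not taken from the talk.

def dollars_per_mflops(total_cost_usd, nodes, sustained_gflops_per_node):
    # Price/performance in $/Mflops; 1 Gflops = 1000 Mflops.
    total_mflops = nodes * sustained_gflops_per_node * 1000.0
    return total_cost_usd / total_mflops

# Hypothetical 512-processor cluster: $1,500 per node,
# 3 Gflops sustained per processor on lattice QCD kernels.
print(dollars_per_mflops(512 * 1500.0, 512, 3.0))  # 0.5 $/Mflops, i.e. better than 1 $/Mflops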
High availability using virtualization
High availability has always been one of the main problems for a data center.
Until now, high availability was achieved by host-per-host redundancy, a highly
expensive method in terms of hardware and human costs. A new approach to the
problem is offered by virtualization. Using virtualization, it is possible
to achieve a redundancy system for all the services running in a data center.
This new approach to high availability allows the running virtual machines to
be distributed over the available servers by exploiting the features of the
virtualization layer: starting, stopping, and moving virtual machines between
physical hosts. The system (3RC) is based on a finite state machine with
hysteresis, providing the possibility to restart each virtual machine on any
physical host, or to reinstall it from scratch. A complete infrastructure has
been developed to install the operating system and middleware in a few minutes.
To virtualize the main servers of a data center, a new procedure has been
developed to migrate physical hosts to virtual ones. The whole Grid data center
SNS-PISA is currently running in a virtual environment under the high
availability system. As an extension of the 3RC architecture, several storage
solutions, from NAS to SAN, have been tested to store and centralize all the
virtual disks, in order to guarantee data safety and access from everywhere.
Exploiting virtualization and the ability to automatically reinstall a host, we
provide a sort of host on-demand, where action on a virtual machine is
performed only when a disaster occurs.
Comment: PhD Thesis in Information Technology Engineering: Electronics,
Computer Science, Telecommunications, 94 pp., University of Pisa [Italy]
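The recovery logic sketched in the abstract (restart a virtual machine on any physical host, then reinstall it from scratch) can be pictured as a small state machine. The following Python sketch is a hypothetical illustration of a finite state machine with hysteresis, not the actual 3RC code; the state names, thresholds, and health-check counters are assumptions made for the example.

from enum import Enum

class State(Enum):
    RUNNING = "running"            # VM healthy on some physical host
    RESTARTING = "restarting"      # restart the VM on any available host
    REINSTALLING = "reinstalling"  # reinstall OS and middleware from scratch

class RecoveryFSM:
    # Hysteresis: escalate only after FAIL_THRESHOLD consecutive failed
    # health checks, and declare the VM healthy again only after
    # OK_THRESHOLD consecutive successes, so a transient glitch does not
    # trigger a restart or a reinstallation. Both thresholds are assumed.
    FAIL_THRESHOLD = 3
    OK_THRESHOLD = 5

    def __init__(self):
        self.state = State.RUNNING
        self.fails = 0
        self.oks = 0

    def on_health_check(self, healthy):
        if healthy:
            self.fails = 0
            self.oks += 1
            if self.state is not State.RUNNING and self.oks >= self.OK_THRESHOLD:
                self.state = State.RUNNING
        else:
            self.oks = 0
            self.fails += 1
            if self.fails >= self.FAIL_THRESHOLD:
                # Escalate one level: first try a restart elsewhere,
                # then fall back to a full reinstallation.
                if self.state is State.RUNNING:
                    self.state = State.RESTARTING
                elif self.state is State.RESTARTING:
                    self.state = State.REINSTALLING
                self.fails = 0
        return self.state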
Operating System Noise in the Linux Kernel
As modern network infrastructure moves from hardware-based to software-based using Network Function Virtualization, a new set of requirements is raised for operating system developers. By using the real-time kernel options and advanced CPU isolation features common to HPC use-cases, Linux is becoming a central building block for this new architecture, which aims to enable a new set of low-latency networked services. Tuning Linux for these applications is not an easy task, as it requires a deep understanding of the Linux execution model and of the mix of user-space tooling and tracing features. This paper discusses the internal aspects of Linux that influence operating system noise from a timing perspective. It also presents Linux's osnoise tracer, an in-kernel tracer that enables the measurement of operating system noise as observed by a workload and the tracing of the sources of that noise in an integrated manner, facilitating the analysis and debugging of the system. Finally, this paper presents a series of experiments demonstrating both Linux's ability to deliver low OS noise (on the order of single-digit μs) and the ability of the proposed tool to provide precise information about the root causes of timing-related OS noise problems.
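As a concrete illustration of the measurement workflow described above, the sketch below drives the osnoise tracer through the tracefs interface. The tracer name and the current_tracer, tracing_on, and trace files follow the kernel tracing documentation, but the wrapper itself is a hypothetical convenience function; it assumes tracefs is mounted at /sys/kernel/tracing and requires root. The rtla osnoise front end offers the same measurement from the command line.

import time
from pathlib import Path

TRACEFS = Path("/sys/kernel/tracing")  # assumed tracefs mount point

def sample_osnoise(seconds=1.0):
    # Select the osnoise tracer, let it run briefly, then read back
    # the per-CPU noise summary from the trace buffer.
    (TRACEFS / "current_tracer").write_text("osnoise\n")
    (TRACEFS / "tracing_on").write_text("1\n")
    time.sleep(seconds)
    (TRACEFS / "tracing_on").write_text("0\n")
    return (TRACEFS / "trace").read_text()

if __name__ == "__main__":
    print(sample_osnoise())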
MicroTCA implementation of synchronous Ethernet-based DAQ systems for large-scale experiments
Large LAr TPCs are among the most powerful detectors to address open problems
in particle and astroparticle physics, such as CP violation in the leptonic
sector, neutrino properties and their astrophysical implications, proton decay
searches, etc. The scale of such detectors imposes severe constraints on their
readout and DAQ systems. In this article we describe a data acquisition scheme
for this new generation of large detectors. The main challenge is to propose a
scalable and easy-to-use solution able to manage a large number of channels at
the lowest cost. It is interesting to note that these constraints are very
similar to those existing in the network telecommunication industry. We propose
to study how emerging technologies like ATCA and μTCA could be used in
neutrino experiments. We describe the design of an Advanced Mezzanine Card
(AMC) including 32 ADC channels. This board receives 32 analog channels at
the front panel and sends the formatted data through the μTCA backplane
using a Gigabit Ethernet link. The gigabit switch of the MCH is used to
centralize the data and to send them to the event-building computer. The core
of this card is an FPGA (an Altera Arria GX) including the whole system except
the memories. A hardware accelerator has been implemented using a NIOS II μP
and a Gigabit MAC IP. Obviously, in order to be able to reconstruct the tracks
from the events, a time synchronisation system is mandatory. We decided to
implement the IEEE 1588 standard, also called the Precision Time Protocol
(PTP), another emerging and promising technology in the telecommunication
industry. In this article we describe a Gigabit PTP implementation using the
recovered clock of the gigabit link. By doing so, the drift is directly
cancelled, and PTP is used only to evaluate and to correct the offset.
Comment: Talk presented at the 2009 Real Time Conference, Beijing, May 2009;
submitted to the proceedings
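The offset correction mentioned at the end of the abstract follows from the standard IEEE 1588 two-way exchange: with the Sync message leaving the master at t1 and arriving at the slave at t2, and the Delay_Req leaving the slave at t3 and arriving at the master at t4, the clock offset and mean path delay fall out of the four timestamps. The Python sketch below illustrates that arithmetic only; it is not the board's firmware, and the example timestamps are invented.

def ptp_offset_and_delay(t1, t2, t3, t4):
    # t1: master sends Sync         (master clock)
    # t2: slave receives Sync       (slave clock)
    # t3: slave sends Delay_Req     (slave clock)
    # t4: master receives Delay_Req (master clock)
    # Assumes a symmetric path; with the clock recovered from the
    # gigabit link the rate (drift) error is already cancelled, so
    # only this constant offset remains to be corrected.
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way mean path delay
    return offset, delay

# Invented timestamps (seconds): slave 1.5 us ahead, 0.5 us path delay.
offset, delay = ptp_offset_and_delay(0.0, 2.0e-6, 10.0e-6, 9.0e-6)
print(f"offset = {offset * 1e6:.2f} us, delay = {delay * 1e6:.2f} us")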
04451 Abstracts Collection -- Future Generation Grids
The Dagstuhl Seminar 04451 "Future Generation Grids" was held at the
International Conference and Research Center (IBFI), Schloss Dagstuhl, from 1st
to 5th November 2004. The focus of the seminar was on open problems and
future challenges in the design of next-generation Grid systems. A total of 45
participants presented their current projects, research plans, and new ideas in
the area of Grid technologies. Several evening sessions with vivid discussions
on future trends complemented the talks. This report gives an overview of the
background and the findings of the seminar.
National innovation systems, developing countries, and the role of intermediaries: a critical review of the literature
Developed over the past three decades, the national innovation system (NIS) concept has been widely used by both scholars and policy makers to explain how interactions between a set of distinct, nationally bounded institutions support and facilitate technological change and the emergence and diffusion of new innovations. The concept provides a framework that developing countries can adopt for the purpose of catching up. Initially conceived around structures and interactions identified in economically advanced countries, the application of the NIS concept to developing countries has been gradual and has coincided, in the NIS literature, with a move away from overly macro-level interpretations towards an emphasis on micro-level interactions and processes, with much of this work questioning the nation state as the most appropriate level of analysis, as well as with the emergence of certain intermediary actors thought to facilitate knowledge exchange between actors and institutions. This paper reviews the NIS literature chronologically, showing how this shift in emphasis has somewhat diminished the importance of both institutions, particularly governments, and the process of institutional capacity building. In doing so, the paper suggests that the more recent literature on intermediaries such as industry associations may offer valuable insights into how institutional capacity building occurs and how it might be directed, particularly in the context of developing countries, where governance capacities are often lacking, contributing to less effective innovation systems, stagnant economies, and unequal development.