04451 Abstracts Collection -- Future Generation Grids
The Dagstuhl Seminar 04451 "Future Generation Grid" was held in the International
Conference and Research Center (IBFI), Schloss Dagstuhl from 1st
to 5th November 2004. The focus of the seminar was on open problems and
future challenges in the design of next generation Grid systems. A total of 45
participants presented their current projects, research plans, and new ideas in
the area of Grid technologies. Several evening sessions with vivid discussions
on future trends complemented the talks. This report gives an overview of the
background and the findings of the seminar.
09191 Abstracts Collection -- Fault Tolerance in High-Performance Computing and Grids
From June 4--8, 2009, the Dagstuhl Seminar 09191 "Fault Tolerance in High-Performance Computing and Grids" was held
in Schloss Dagstuhl -- Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Slides of
the talks and abstracts are available online at http://www.dagstuhl.de/Materials/index.en.phtml?09191.
AstroGrid-D: Grid Technology for Astronomical Science
We present status and results of AstroGrid-D, a joint effort of
astrophysicists and computer scientists to employ grid technology for
scientific applications. AstroGrid-D provides access to a network of
distributed machines through a set of commands as well as software interfaces. It
offers simple use of compute and storage facilities and makes it easy to schedule
and monitor compute tasks and data management. It is based on the Globus Toolkit middleware
(GT4). Chapter 1 describes the context which led to the demand for advanced
software solutions in Astrophysics, and we state the goals of the project. We
then present characteristic astrophysical applications that have been
implemented on AstroGrid-D in chapter 2. We describe simulations of different
complexity, compute-intensive calculations running on multiple sites, and
advanced applications for specific scientific purposes, such as a connection to
robotic telescopes. These examples show how grid execution improves, for
example, the scientific workflow. Chapter 3 explains the software tools and
services that we adapted or newly developed. Section 3.1 is focused on the
administrative aspects of the infrastructure, to manage users and monitor
activity. Section 3.2 characterises the central components of our architecture:
The AstroGrid-D information service to collect and store metadata, a file
management system, the data management system, and a job manager for automatic
submission of compute tasks. We summarise the successfully established
infrastructure in chapter 4, concluding with our future plans to establish
AstroGrid-D as a platform of modern e-Astronomy. (14 pages, 12 figures.
Subjects: data analysis, image processing, robotic telescopes, simulations,
grid. Accepted for publication in New Astronomy.)
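The job manager mentioned in Section 3.2 automatically submits compute tasks to distributed sites. As a rough illustration of that idea, the following is a minimal sketch, assuming a hypothetical scheduler that picks the site with the most free slots; the class and method names are illustrative, not the actual AstroGrid-D or Globus (GT4) API.

```python
# Hypothetical sketch of an automatic job manager: the names Site,
# JobManager, and submit are illustrative assumptions, not GT4 calls.
from dataclasses import dataclass, field


@dataclass
class Site:
    name: str
    free_slots: int  # capacity, as an information service might report it


@dataclass
class JobManager:
    sites: list
    queue: list = field(default_factory=list)

    def submit(self, job_name: str) -> str:
        """Send the job to the least-loaded site, or queue it if none
        has free capacity."""
        best = max(self.sites, key=lambda s: s.free_slots)
        if best.free_slots == 0:
            self.queue.append(job_name)  # hold until capacity frees up
            return "queued"
        best.free_slots -= 1
        return best.name


jm = JobManager([Site("aip", 2), Site("mpa", 5)])
print(jm.submit("nbody-sim"))  # the site with more free slots is chosen
```

A real grid job manager would of course consult a live information service and delegate execution to the middleware, but the selection step follows this shape.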
Autonomic Management of Large Clusters and Their Integration into the Grid
We present a framework for the co-ordinated, autonomic management of multiple clusters in a compute center and their integration into a Grid environment. Site autonomy and the automation of administrative tasks are prime aspects in this framework. The system behavior is continuously monitored in a steering cycle and appropriate actions are taken to resolve any problems. All presented components have been implemented in the course of the EU project DataGrid: The Lemon monitoring components, the FT fault-tolerance mechanism, the quattor system for software installation and configuration, the RMS job and resource management system, and the Gridification scheme that integrates clusters into the Grid
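The steering cycle described above (continuous monitoring followed by corrective action) can be sketched in a few lines. This is a minimal illustration of the monitor-evaluate-act pattern, assuming made-up metric names and a threshold; it does not reproduce the Lemon or FT component interfaces.

```python
# Minimal sketch of one pass of a monitor-evaluate-act steering cycle.
# Metric names, the threshold, and the "restart_service" action are
# illustrative assumptions, not DataGrid component APIs.

def steering_cycle(metrics: dict, threshold: float = 0.9) -> list:
    """Check each monitored node's load and collect the corrective
    actions that a real autonomic system would then execute."""
    actions = []
    for node, load in metrics.items():
        if load > threshold:
            actions.append(f"restart_service on {node}")
    return actions


print(steering_cycle({"node01": 0.95, "node02": 0.40}))
# one action is scheduled for the overloaded node01
```

In a deployed system this loop runs continuously, and the actions feed back into the monitored state, closing the cycle.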
Challenges in QCD matter physics - The Compressed Baryonic Matter experiment at FAIR
Substantial experimental and theoretical efforts worldwide are devoted to
exploring the phase diagram of strongly interacting matter. At LHC and top RHIC
energies, QCD matter is studied at very high temperatures and nearly vanishing
net-baryon densities. There is evidence that a Quark-Gluon-Plasma (QGP) was
created at experiments at RHIC and LHC. The transition from the QGP back to the
hadron gas is found to be a smooth cross over. For larger net-baryon densities
and lower temperatures, it is expected that the QCD phase diagram exhibits a
rich structure, such as a first-order phase transition between hadronic and
partonic matter which terminates in a critical point, or exotic phases like
quarkyonic matter. The discovery of these landmarks would be a breakthrough in
our understanding of the strong interaction and is therefore a focus of
various high-energy heavy-ion research programs. The Compressed Baryonic Matter
(CBM) experiment at FAIR will play a unique role in the exploration of the QCD
phase diagram in the region of high net-baryon densities, because it is
designed to run at unprecedented interaction rates. High-rate operation is the
key prerequisite for high-precision measurements of multi-differential
observables and of rare diagnostic probes which are sensitive to the dense
phase of the nuclear fireball. The goal of the CBM experiment at SIS100
(sqrt(s_NN) = 2.7 - 4.9 GeV) is to discover fundamental properties of QCD
matter: the phase structure at large baryon-chemical potentials (mu_B > 500
MeV), effects of chiral symmetry, and the equation-of-state at high density as
it is expected to occur in the core of neutron stars. In this article, we
review the motivation for and the physics programme of CBM, including
activities before the start of data taking in 2022, in the context of the
worldwide efforts to explore high-density QCD matter. (15 pages, 11 figures.
Published in the European Physical Journal.)
Complete solution of the Eight-Puzzle and the benefit of node-ordering in IDA*
The 8-puzzle is the largest puzzle of its type that can be completely solved. It is simple, and yet spans a combinatorially large problem space of 9!/2 states. The N x N extension of the 8-puzzle is NP-hard. In the first part of this paper, we present complete statistical data based on an exhaustive evaluation of all possible tile configurations. Our results include data on the expected solution lengths, the 'easiest' and 'worst' configurations, and the density and distribution of solution nodes in the search tree. In our second set of experiments, we used the 8-puzzle as a workbench model to evaluate the benefit of node-ordering schemes in Iterative-Deepening A* (IDA*). One highlight of our results is that almost all IDA* implementations perform worse than would be possible with a simple random ordering of the operators.
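The 9!/2 figure follows from a parity argument: each sliding move leaves the permutation parity of the tiles invariant, so exactly half of all 9! arrangements are reachable from the goal. A short sketch of the standard inversion-count solvability test for the 3x3 board:

```python
# Why the 8-puzzle has 9!/2 = 181,440 reachable states: sliding moves
# preserve permutation parity, so half of all 9! arrangements are solvable.
from math import factorial


def is_solvable(board):
    """board: tuple of 9 entries, 0 = blank. For an odd-width puzzle a
    configuration is solvable iff the tiles (ignoring the blank) form an
    even permutation relative to the goal order 1..8."""
    tiles = [t for t in board if t != 0]
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])
    return inversions % 2 == 0


print(factorial(9) // 2)                         # 181440 reachable states
print(is_solvable((1, 2, 3, 4, 5, 6, 7, 8, 0)))  # goal itself: True
print(is_solvable((2, 1, 3, 4, 5, 6, 7, 8, 0)))  # one tile swap: False
```

An exhaustive evaluation like the one in the paper enumerates exactly these 181,440 solvable configurations.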