GRIDCC - Providing a real-time grid for distributed instrumentation
The GRIDCC project is extending the use of Grid computing to include access to and control of distributed instrumentation.
Access to the instruments will be via an interface to a Virtual Instrument Grid Service (VIGS). VIGS is a new concept and its design and implementation, together
with middleware that can provide the appropriate Quality of Service (QoS), is a key part of the GRIDCC development plan. An overall architecture for GRIDCC has been
defined and some of the application areas, which include distributed power systems, remote control of an accelerator and the remote monitoring of a large particle physics
experiment, are briefly discussed.
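To make the VIGS idea concrete, the sketch below shows what an instrument-as-a-grid-service interface of this kind might look like. All class, method, and endpoint names are illustrative assumptions for this listing, not the actual GRIDCC/VIGS API.

```python
# Hypothetical sketch of an instrument-as-a-grid-service interface in the
# spirit of the VIGS concept described above. Names and methods are
# illustrative assumptions, not the GRIDCC/VIGS API.
from dataclasses import dataclass
from typing import Any


@dataclass
class QoSRequirement:
    """Quality-of-service bounds a client attaches to a request."""
    max_latency_ms: int          # bound on command round-trip time
    min_sample_rate_hz: float    # minimum acceptable monitoring rate


class VirtualInstrument:
    """Toy stand-in for a grid service wrapping a remote instrument."""

    def __init__(self, endpoint: str, qos: QoSRequirement):
        self.endpoint = endpoint
        self.qos = qos
        self._connected = False

    def connect(self) -> None:
        # A real service would negotiate QoS with the middleware here
        # before granting control of the instrument.
        self._connected = True

    def send_command(self, name: str, **params: Any) -> dict:
        if not self._connected:
            raise RuntimeError("instrument not connected")
        # Placeholder: a real implementation would forward the command to
        # the remote instrument and enforce self.qos.max_latency_ms.
        return {"command": name, "params": params, "status": "accepted"}

    def read_monitoring(self) -> dict:
        # Placeholder for streaming monitoring data back to the client.
        return {"endpoint": self.endpoint,
                "sample_rate_hz": self.qos.min_sample_rate_hz}


if __name__ == "__main__":
    vi = VirtualInstrument("grid://example/accelerator/magnet-ps-01",
                           QoSRequirement(max_latency_ms=50,
                                          min_sample_rate_hz=10.0))
    vi.connect()
    print(vi.send_command("set_current", amps=120.5))
```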
Running a Production Grid Site at the London e-Science Centre
This paper describes how the London e-Science Centre cluster MARS, a production 400+ Opteron CPU cluster, was integrated into the production Large Hadron Collider Compute Grid. It describes the practical issues that we encountered when deploying and maintaining this system, and details the techniques that were applied to resolve them. Finally, we provide a set of recommendations based on our experiences for grid software development in general that we believe would make the technology more accessible. © 2006 IEEE
From symbols to icons: the return of resemblance in the cognitive neuroscience revolution
We argue that one important aspect of the "cognitive neuroscience revolution" identified by Boone and Piccinini (2015) is a dramatic shift away from thinking of cognitive representations as arbitrary symbols towards thinking of them as icons that replicate structural characteristics of their targets. We argue that this shift has been driven both "from below" and "from above" - that is, from a greater appreciation of what mechanistic explanation of information-processing systems involves ("from below"), and from a greater appreciation of the problems solved by bio-cognitive systems, chiefly regulation and prediction ("from above"). We illustrate these arguments by reference to examples from cognitive neuroscience, principally representational similarity analysis and the emergence of (predictive) dynamical models as a central postulate in neurocognitive research.
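Representational similarity analysis, mentioned above, is a concrete example of the "structural resemblance" idea: two systems are compared not unit-by-unit but via the structure of pairwise dissimilarities among their responses. The sketch below illustrates the standard RSA recipe on random placeholder data; the array sizes and distance metric are assumptions for illustration.

```python
# Minimal sketch of representational similarity analysis (RSA):
# build a representational dissimilarity structure for each system,
# then compare the two structures with a rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_neural, n_model = 20, 100, 50

neural_responses = rng.normal(size=(n_stimuli, n_neural))  # e.g. voxel/unit activity
model_features = rng.normal(size=(n_stimuli, n_model))     # e.g. model-layer activations

# Representational dissimilarity matrices, as condensed vectors of
# pairwise correlation distances between stimulus-evoked patterns.
neural_rdm = pdist(neural_responses, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Second-order comparison: how similar are the two dissimilarity structures?
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA correlation between neural and model RDMs: rho={rho:.3f}, p={p:.3f}")
```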
GRIDCC: Real-time workflow system
The Grid is a concept which allows the sharing of resources between distributed communities, allowing each to progress towards potentially different goals. As adoption of the Grid increases, so do the activities that people wish to conduct through it. The GRIDCC project is a European Union funded project addressing the issues of integrating instruments into the Grid. This increases the requirements placed on workflows, and on the Quality of Service (QoS) guarantees attached to them, since many of these instruments have real-time constraints. In this paper we present the workflow management service within the GRIDCC project, which is tasked with optimising workflows and ensuring that they meet the QoS requirements pre-defined for them.
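The core check such a workflow manager must make is whether a chain of steps can still meet its real-time deadline. The sketch below illustrates that idea in the simplest possible form; the step names, estimated run times, and sequential model are assumptions for illustration, not the GRIDCC implementation.

```python
# Illustrative deadline check for a sequential workflow with per-step
# run-time estimates. A real scheduler would also reorder or co-allocate
# steps; this sketch only shows the QoS feasibility test.
from dataclasses import dataclass


@dataclass
class Step:
    name: str
    estimated_seconds: float  # assumed known, e.g. from past executions


def meets_deadline(steps: list[Step], deadline_seconds: float) -> bool:
    """True if the sequential workflow is expected to finish in time."""
    total = sum(s.estimated_seconds for s in steps)
    return total <= deadline_seconds


workflow = [Step("configure_instrument", 2.0),
            Step("acquire_data", 30.0),
            Step("transfer_results", 10.0)]

print(meets_deadline(workflow, deadline_seconds=60.0))  # True
print(meets_deadline(workflow, deadline_seconds=30.0))  # False
```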
Scalability tests of R-GMA-based grid job monitoring system for CMS Monte Carlo data production
© 2004 IEEE. High-energy physics experiments, such as the compact muon solenoid (CMS) at the large hadron collider (LHC), have large-scale data processing computing requirements. The grid has been chosen as the solution. One important challenge when using the grid for large-scale data processing is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. The relational grid monitoring architecture (R-GMA) is a monitoring and information management service for distributed resources based on the GMA of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC computing grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job submission rates supported by successive releases of R-GMA improved significantly, approaching that expected in full-scale production.
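The monitoring architecture described here follows the GMA producer/consumer pattern: jobs publish status tuples, and an archiver consumes them into a relational store. The sketch below is a simplified in-memory stand-in for that pattern, not the R-GMA API; the table schema and class names are assumptions for illustration.

```python
# Simplified producer/consumer monitoring sketch: jobs publish tuples,
# an archiver stores them in a relational database.
import sqlite3
import time


class Archiver:
    """Consumer that stores published monitoring tuples in a database."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS job_status "
            "(job_id TEXT, site TEXT, status TEXT, timestamp REAL)")

    def consume(self, record: tuple) -> None:
        self.db.execute("INSERT INTO job_status VALUES (?, ?, ?, ?)", record)
        self.db.commit()


class JobProducer:
    """Producer running alongside a grid job, publishing its status."""

    def __init__(self, job_id: str, site: str, archiver: Archiver):
        self.job_id, self.site, self.archiver = job_id, site, archiver

    def publish(self, status: str) -> None:
        self.archiver.consume((self.job_id, self.site, status, time.time()))


archiver = Archiver()
for i in range(3):
    producer = JobProducer(f"job-{i}", "site-A", archiver)
    producer.publish("RUNNING")
    producer.publish("DONE")

rows = archiver.db.execute("SELECT COUNT(*) FROM job_status").fetchone()
print(f"archived {rows[0]} monitoring tuples")
```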
Performance of R-GMA for monitoring grid jobs for CMS data production
High energy physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through grid computing. Furthermore the production of large quantities of Monte Carlo simulated data provides an ideal test bed for grid technologies and will drive their development. One important challenge when using the grid for data analysis is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the grid monitoring architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real grid resources. We present the latest results on this system running on the LCG 2 grid test bed using the LCG 2.6.0 middleware release. For a sustained load equivalent to 7 generations of 1000 simultaneous jobs, R-GMA was able to transfer all published messages and store them in a database for 98% of the individual jobs. The failures experienced were at the remote sites, rather than at the archiver's MON box as had been expected.
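As a back-of-envelope view of the reported load: the generation count, jobs per generation, and 98% completeness figure come from the abstract, while the per-job message count below is an assumption purely for illustration.

```python
# Rough scale of the test load described above.
generations = 7
jobs_per_generation = 1000
messages_per_job = 20          # assumed; not stated in the abstract
complete_fraction = 0.98       # fraction of jobs with every message archived

total_jobs = generations * jobs_per_generation
fully_archived_jobs = int(total_jobs * complete_fraction)
total_messages = total_jobs * messages_per_job

print(f"{total_jobs} jobs, ~{total_messages} monitoring messages published")
print(f"{fully_archived_jobs} jobs had every message archived successfully")
```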
CMS requirements
CMS' job monitoring requirements from R-GMA, at "EDG WP3 at the Bear
Frequentist Analysis of the Parameter Space of Minimal Supergravity
We make a frequentist analysis of the parameter space of minimal supergravity (mSUGRA), in which, as well as the gaugino and scalar soft supersymmetry-breaking parameters being universal, there is a specific relation between the trilinear, bilinear and scalar supersymmetry-breaking parameters, A_0 = B_0 + m_0, and the gravitino mass is fixed by m_{3/2} = m_0. We also consider a more general model in which the gravitino mass constraint is relaxed (the VCMSSM). We combine in the global likelihood function the experimental constraints from low-energy electroweak precision data, the anomalous magnetic moment of the muon, the lightest Higgs boson mass M_h, B physics and the astrophysical cold dark matter density, assuming that the lightest supersymmetric particle (LSP) is a neutralino. In the VCMSSM, we find a preference for values of m_{1/2} and m_0 similar to those found previously in frequentist analyses of the constrained MSSM (CMSSM) and a model with common non-universal Higgs masses (NUHM1). On the other hand, in mSUGRA we find two preferred regions: one with larger values of both m_{1/2} and m_0 than in the VCMSSM, and one with large m_0 but small m_{1/2}. We compare the probabilities of the frequentist fits in mSUGRA, the VCMSSM, the CMSSM and the NUHM1: the probability that mSUGRA is consistent with the present data is significantly less than in the other models. We also discuss the mSUGRA and VCMSSM predictions for sparticle masses and other observables, identifying potential signatures at the LHC and elsewhere.
Comment: 18 pages, 27 figures
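A frequentist global fit of this kind combines the listed constraints by adding, for each observable, a chi-squared term comparing the model prediction at a parameter point with the measured value and its uncertainty. The sketch below illustrates that structure only: the prediction functions, observable values, and the scan grid are placeholders, not the mSUGRA/VCMSSM calculation or the actual experimental inputs used in the paper.

```python
# Toy chi-squared combination over a grid of (m_1/2, m_0), mimicking the
# shape of a frequentist global fit. All numbers are illustrative.
import numpy as np

# (illustrative central value, illustrative total uncertainty)
measurements = {
    "obs_higgs_mass": (120.0, 3.0),
    "obs_g_minus_2": (3.0e-9, 1.0e-9),
    "obs_relic_density": (0.11, 0.01),
}


def predictions(m_half: float, m0: float) -> dict:
    """Placeholder model predictions at a point (m_1/2, m_0) in GeV."""
    return {
        "obs_higgs_mass": 115.0 + 5.0 * np.log1p(m_half / 100.0),
        "obs_g_minus_2": 3.0e-9 * (300.0 / m_half) ** 2,
        "obs_relic_density": 0.12 * (m0 / m_half),
    }


def chi2(m_half: float, m0: float) -> float:
    """Global chi-squared: sum of per-observable pulls squared."""
    pred = predictions(m_half, m0)
    return sum(((pred[k] - mu) / sigma) ** 2
               for k, (mu, sigma) in measurements.items())


# Scan a coarse grid and report the best-fit point.
grid = [(mh, m0) for mh in np.linspace(100, 1000, 50)
        for m0 in np.linspace(100, 1000, 50)]
best = min(grid, key=lambda p: chi2(*p))
print(f"best-fit (m_1/2, m_0) on toy grid: {best}, chi2 = {chi2(*best):.2f}")
```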