
    Scalability tests of R-GMA-based grid job monitoring system for CMS Monte Carlo data production

    Copyright © 2004 IEEE. High-energy physics experiments, such as the compact muon solenoid (CMS) at the large hadron collider (LHC), have large-scale data processing computing requirements. The grid has been chosen as the solution. One important challenge when using the grid for large-scale data processing is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. The relational grid monitoring architecture (R-GMA) is a monitoring and information management service for distributed resources based on the GMA of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC computing grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job submission rates supported by successive releases of R-GMA improved significantly, approaching that expected in full-scale production.

    Performance of R-GMA for monitoring grid jobs for CMS data production

    High energy physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through grid computing. Furthermore the production of large quantities of Monte Carlo simulated data provides an ideal test bed for grid technologies and will drive their development. One important challenge when using the grid for data analysis is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the grid monitoring architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real grid resources. We present the latest results on this system running on the LCG 2 grid test bed using the LCG 2.6.0 middleware release. For a sustained load equivalent to 7 generations of 1000 simultaneous jobs, R-GMA was able to transfer all published messages and store them in a database for 98% of the individual jobs. The failures experienced were at the remote sites, rather than at the archiver's MON box as had been expected.
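    The monitoring pattern described in this abstract — jobs at remote sites publish status tuples, and an archiver stores them in a relational database so that a consumer can query overall job health — can be sketched generically. The snippet below is an illustration only, using Python's built-in sqlite3 in place of the real R-GMA producer/consumer API; the table layout and function names are invented for the example, and the 98% figure is taken from the abstract.

    ```python
    import sqlite3

    # Generic sketch of the publish/archive/query monitoring pattern
    # described above. This is NOT the R-GMA API; sqlite3 stands in
    # for the archiver's relational database.

    def make_archive(conn):
        conn.execute(
            "CREATE TABLE IF NOT EXISTS job_status ("
            "job_id TEXT, ts INTEGER, status TEXT)"
        )

    def publish(conn, job_id, ts, status):
        # In R-GMA a producer on the remote worker node would publish
        # this tuple; here we insert it into the archive directly.
        conn.execute(
            "INSERT INTO job_status VALUES (?, ?, ?)", (job_id, ts, status)
        )

    def completed_fraction(conn):
        # Consumer-side query: what fraction of jobs reported 'done'?
        total = conn.execute(
            "SELECT COUNT(DISTINCT job_id) FROM job_status"
        ).fetchone()[0]
        done = conn.execute(
            "SELECT COUNT(DISTINCT job_id) FROM job_status "
            "WHERE status = 'done'"
        ).fetchone()[0]
        return done / total if total else 0.0

    conn = sqlite3.connect(":memory:")
    make_archive(conn)
    for i in range(100):
        publish(conn, f"job{i}", 0, "running")
        if i < 98:  # mimic the 98% success rate reported in the abstract
            publish(conn, f"job{i}", 1, "done")
    print(completed_fraction(conn))  # 0.98
    ```

    The point of the sketch is the separation of roles: producers only insert tuples, and all aggregation happens on the consumer side against the archive, which is why failures at remote sites show up simply as missing rows.
    
    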

    The dynamics of central control and subsidiary autonomy in the management of human resources: case study evidence from US MNCs in the UK

    This article revisits a central question in the debates on the management of multinationals: the balance between centralized policy-making and subsidiary autonomy. It does so through data from a series of case studies on the management of human resources in American multinationals in the UK. Two strands of debate are confronted. The first is the literature on differences between multinationals of different national origins which has shown that US companies tend to be more centralized, standardized, and formalized in their management of human resources. It is argued that the literature has provided unconvincing explanations of this pattern, failing to link it to distinctive features of the American business system in which US multinationals are embedded. The second strand is the wider debate on the balance between centralization and decentralization in multinationals. It is argued that the literature neglects important features of this balance: the contingent oscillation between centralized and decentralized modes of operation and (relatedly) the way in which the balance is negotiated by organizational actors through micro-political processes whereby the external structural constraints on the company are defined and interpreted. In such negotiation, actors’ leverage often derives from exploiting differences between the national business systems in which the multinational operates.

    HEP Applications Evaluation of the EDG Testbed and Middleware

    Workpackage 8 of the European Datagrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of Atlas and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid 2003.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics Conference (CHEP03), La Jolla, CA, USA, March 2003, 7 pages. PSN THCT00.

    Preliminary results on the performance of a TeO2 thermal detector in a search for direct interactions of WIMPS

    During a Double Beta Decay experiment performed at Laboratori Nazionali del Gran Sasso, a 1548-hour background spectrum was collected with a 340 g TeO2 thermal detector. An analysis of this spectrum has been carried out to search for possible WIMP signals. The values for parameters which are essential in the search for WIMPs, like energy resolution (2 keV), energy threshold (13 keV) and nuclear recoil quenching factor (≥ 0.93), have been experimentally determined and are discussed in detail. The spectrum of recoils induced by α decays has been directly observed for the first time in coincidence with the α particle pulse. Preliminary limits on the spin-independent cross sections of WIMPs on Te and O nuclei have been obtained.

    The bolometers as nuclear recoil detectors

    Our group has been involved for ten years in experiments using bolometric detectors for rare event searches such as double beta decay or Dark Matter interactions. During the last year, to check the quenching factor of TeO2 bolometers, we measured nuclear recoils at energies as low as 15 keV in our experimental apparatus at Laboratori Nazionali del Gran Sasso. Two 72 g TeO2 detectors were exposed under vacuum to a 228Ra α source that implanted 224Ra nuclei on them. The nuclei emitted by the implanted source were detected in one bolometer in coincidence with the corresponding α particles in the other. The energy spectrum of the 103.4 keV 224Ra nuclei has been obtained with an energy resolution of about 12 keV. Furthermore, an α measurement of Roman lead has also exploited the sensitivity of this technique to check for ultralow activity in matter, taking advantage of the source = detector approach. A limit on the 210Pb contamination in Roman lead as low as 4 mBq/kg has been obtained. © 1998 Elsevier Science B.V. All rights reserved.