168 research outputs found
Draft grid storage namespace guidelines
The Grid can provide MICE not only with computing (number-crunching) power, but also with a secure global framework allowing users access to data. Although the focus is usually on the mass of experiment data, the Grid also opens up new possibilities for the storage and sharing of other material within the collaboration.
This document provides an introduction to data storage on the Grid and describes the proposal for the directory structures to be used by MICE when registering data files stored on the Grid within a File Catalogue such as the LFC.
RFC: Data flow from the MICE experiment
This document sketches out the flow of data from the MICE experiment, as I currently understand it. This includes not only illustrating the structure of the data flow, but also setting out a consistent vocabulary with which to describe it. Many aspects of this data flow are either misunderstood by me, currently undecided, not yet implemented, or simply have never been considered before; so feedback is both welcomed and essential.
Background information about job submission and file storage on the Grid can be found in previous MICE Notes and the references therein. In particular, the first two sections of Note 247 are meant to provide a gentle introduction to Grid data storage from the MICE perspective, and timid MICE may wish to read those first.
Notes from data flow workshop
Copyright © 2009 MICE. This document summarises the discussions at the MICE Data Flow Workshop held at Brunel University on 30th June 2009. Background information about job submission and file storage on the Grid can be found in previous MICE Notes and the references therein. In particular, the first two sections of Note 247 are meant to provide a gentle introduction to Grid data storage from the MICE perspective, and timid MICE may wish to read those first. The proposed data flow is described in MICE Note 252.
Activation in the Vicinity of the MICE Target (SP7)
This document tabulates radiation levels measured in the vicinity of the MICE Target as given in the most recent Radiation Surveys available in the MICE document store.
The online buffer
Copyright © 2009 MICE. This is a discussion document regarding the proposed use of the Online Buffer. The Online Buffer is used to store locally the RAW data files created by the Event Builder, before they are uploaded to Castor by the data mover. The files may also be used by the online monitoring and reconstruction activities. At the Trigger-DAQ-Controls Review, the reviewers warned that this three-way activity might saturate the disks, and also that the file uploads to the Grid could conflict with the writing of DAQ data. It was proposed to ameliorate this by splitting the buffer into a set of independent volumes into which the DAQ data would be written on a round-robin basis; outgoing files would meanwhile be read only from one of the other volumes. Further, files being uploaded to the Grid would be staged on the transfer box's system disk, as the (local) staging process is expected to be more deterministic and easier to control than transfers across the WAN.
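The round-robin scheme described above can be sketched as follows. This is an illustrative model only: the volume names, class, and method names are invented for the example and do not reflect the actual MICE Online Buffer implementation.

```python
import itertools

# Hypothetical sketch of the proposed round-robin buffer scheme:
# the DAQ write target cycles across independent volumes, while the
# data mover may drain files only from volumes not being written to.
VOLUMES = ["/buffer/vol0", "/buffer/vol1", "/buffer/vol2", "/buffer/vol3"]

class OnlineBuffer:
    def __init__(self, volumes):
        self._volumes = list(volumes)
        self._cycle = itertools.cycle(self._volumes)
        self.writing = next(self._cycle)

    def next_run(self):
        """Advance the DAQ write target to the next volume (round robin)."""
        self.writing = next(self._cycle)
        return self.writing

    def drainable(self):
        """Volumes the data mover may read from: all but the write target."""
        return [v for v in self._volumes if v != self.writing]

buf = OnlineBuffer(VOLUMES)
assert buf.writing == "/buffer/vol0"
assert buf.next_run() == "/buffer/vol1"
assert "/buffer/vol1" not in buf.drainable()
```

The point of the design is that reads and writes never contend for the same spindle: the write target is excluded from the drainable set until the next rotation.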
POMPOMs: Cost-efficient polarity sensors for the MICE muon beamline
Copyright © 2011 The Authors. The cooling effect in MICE (Muon Ionisation Cooling Experiment) will be studied with both positive and negative muons, reversing the electrical input to the magnets by physically swapping over the power leads. Ensuring the actual operating polarity of the beamline is correctly recorded is a manual step and at risk of error or omission. We have deployed a simple system for monitoring the operating polarity of the two bending magnets by placing in each dipole bore a Honeywell LOHET-II Hall-effect sensor that operates past saturation at nominal field strengths, and thus returns one of two well-defined voltages corresponding to the two possible polarities of the magnet. The environment in the experimental hall is monitored by an AKCP securityProbe 5E system integrated into our EPICS-based controls and monitoring system. We read out the beamline polarity sensors using a voltmeter module, and translate the output voltage into a polarity (or alarm) state within EPICS, whence it can be accessed by the operators and stored in the output datastream. Initial tests of the LOHET-II sensors indicate they will still be able to indicate beamline polarity after radiation doses of 900 Gy (Co-60).
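The voltage-to-polarity translation described above amounts to mapping a reading into one of two well-defined bands, with anything else raising an alarm. A minimal sketch, with invented threshold values (the real calibration of the LOHET-II sensors differs):

```python
# Illustrative voltage bands for the two saturated sensor outputs.
# These numbers are assumptions for the example, not MICE calibration data.
POSITIVE_BAND = (7.0, 9.0)  # volts: saturated output, positive field
NEGATIVE_BAND = (1.0, 3.0)  # volts: saturated output, negative field

def polarity_state(voltage):
    """Translate a sensor voltage into a polarity or alarm state."""
    lo, hi = POSITIVE_BAND
    if lo <= voltage <= hi:
        return "POSITIVE"
    lo, hi = NEGATIVE_BAND
    if lo <= voltage <= hi:
        return "NEGATIVE"
    # Out-of-band reading: sensor fault, cable fault, or magnet off.
    return "ALARM"

assert polarity_state(8.1) == "POSITIVE"
assert polarity_state(2.2) == "NEGATIVE"
assert polarity_state(5.0) == "ALARM"
```

Because the sensors operate past saturation, readings cluster tightly at the band centres, so wide bands with an explicit alarm region make misclassification unlikely.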
The reconstruction of digital holograms on a computational grid
Digital holography is greatly extending the range of holography's applications and moving it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical development and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count leads to the possibility of larger sample volumes, while smaller-area pixels enable the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction process for a single depth slice takes significantly longer than the capture process for each single hologram. Grid computing, a recent innovation in large-scale distributed processing, provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. We describe here the reconstruction of digital holograms on a trans-national computational Grid with over 10 000 nodes available at over 100 sites. A simplistic scheme of deployment was found to provide no computational advantage over a single powerful workstation. Based on these experiences we suggest an improved strategy for workflow and job execution for the replay of digital holograms on a Grid.
Replay of digitally-recorded holograms using a computational grid
Since the calculations are independent, each plane within an in-line digital hologram of a particle field can be reconstructed by a separate computer. We investigate strategies to reproduce a complete sample volume as quickly and efficiently as possible using Grid computing. We used part of the EGEE Grid to reconstruct multiple sets of planes in parallel across a wide-area network, and collated the replayed images on a single Storage Element such that a subsequent particle tracking and analysis code might then be run. Although most of the sample volume is generated up to 20 times faster on a Grid, there are some stragglers which cause the reconstruction rate to slow, and a significant proportion of jobs get lost completely, leaving blocks missing from the sample volume. In the light of these experimental findings we propose some strategies for making Grid computing useful in the field of digital hologram reconstruction and analysis.
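The partitioning of independent depth planes into parallel jobs, and the detection of blocks lost to failed jobs, can be sketched as below. The function names and block structure are illustrative assumptions, not the actual EGEE job-submission code.

```python
# Sketch: farm hologram replay across Grid jobs by giving each job a
# contiguous block of depth planes, then detect blocks whose output
# never arrived on the Storage Element so they can be resubmitted.

def partition(n_planes, n_jobs):
    """Split plane indices 0..n_planes-1 into n_jobs contiguous blocks."""
    base, extra = divmod(n_planes, n_jobs)
    blocks, start = [], 0
    for j in range(n_jobs):
        size = base + (1 if j < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

def missing_blocks(blocks, returned_planes):
    """Blocks not fully present among the planes collated on storage."""
    done = set(returned_planes)
    return [b for b in blocks if not set(b) <= done]

blocks = partition(1000, 8)
assert sum(len(b) for b in blocks) == 1000

# Suppose the job for the third block was lost in transit:
returned = [p for i, b in enumerate(blocks) if i != 2 for p in b]
assert missing_blocks(blocks, returned) == [blocks[2]]
```

A resubmission loop over `missing_blocks` is one way to address the lost-job problem the abstract reports; stragglers could similarly be handled by speculatively resubmitting blocks that exceed a deadline.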
Challenges in using GPUs for the real-time reconstruction of digital hologram images
In-line holography has recently made the transition from silver-halide based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is an established technique for the study of fine particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high resolution over a considerable depth.
The move to digital holography promises rapid, if not instantaneous, feedback as it avoids the need for the time-consuming chemical development of plates or film and a dedicated replay system, but with the growing use of video-rate holographic recording, and the desire to reconstruct fully every frame, the computational challenge becomes considerable. To replay a digital hologram a 2D FFT must be calculated for every depth slice desired in the replayed image volume. A typical hologram of ~100 µm particles over a depth of a few hundred millimetres will require O(10^3) 2D FFT operations to be performed on a hologram of typically a few million pixels.
In this paper we discuss the technical challenges in converting our existing reconstruction code to make efficient use of NVIDIA CUDA-based GPU cards, and show how near real-time video slice reconstruction can be obtained with holograms as large as 4096 by 4096 pixels. Our performance to date for a number of different NVIDIA GPUs running under both Linux and Microsoft Windows is presented. The recent availability of GPUs in portable computers is discussed and a new code for interactive replay of digital holograms is presented.
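The per-slice FFT work described above can be illustrated with a CPU sketch of one common numerical replay scheme, the angular-spectrum method: one forward 2D FFT, multiplication by a free-space propagation kernel, and one inverse 2D FFT per depth slice. On the GPU the same steps map naturally onto cuFFT plus an element-wise kernel. This is a generic textbook formulation, not the authors' code, and the wavelength, pixel pitch, and distance are illustrative values.

```python
import numpy as np

def reconstruct_slice(hologram, wavelength, pixel, z):
    """Replay one depth slice of an in-line hologram (angular spectrum).

    hologram   : 2D real array of recorded intensities
    wavelength : illumination wavelength in metres
    pixel      : detector pixel pitch in metres
    z          : reconstruction distance in metres
    """
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel)          # spatial frequencies, x
    fy = np.fft.fftfreq(ny, d=pixel)          # spatial frequencies, y
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are clamped.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    # One forward FFT, one kernel multiply, one inverse FFT per slice.
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

holo = np.random.rand(512, 512)  # stand-in for a recorded frame
img = reconstruct_slice(holo, 532e-9, 7.4e-6, 0.1)
assert img.shape == holo.shape
```

Repeating this for O(10^3) values of `z` on a few-megapixel frame is the workload the paper describes; the two FFTs dominate, which is why cuFFT throughput largely determines the achievable replay rate.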