The reconstruction of digital holograms on a computational grid
Digital holography is greatly extending the range of holography's applications and moving it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical development and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count leads to the possibility of larger sample volumes, while smaller-area pixels enable the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction of a single depth slice takes significantly longer than the capture of a single hologram. Grid computing - a recent innovation in large-scale distributed processing - provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. We describe here the reconstruction of digital holograms on a trans-national computational Grid with over 10 000 nodes available at over 100 sites. A simplistic scheme of deployment was found to provide no computational advantage over a single powerful workstation. Based on these experiences we suggest an improved strategy for workflow and job execution for the replay of digital holograms on a Grid.
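The computational cost described above arises because reconstructing each depth slice amounts to a pair of 2-D FFTs over the full sensor, repeated once per slice. A minimal sketch of one such slice reconstruction, assuming the standard angular spectrum propagation kernel (the abstract does not specify which kernel was actually used, and the function name is illustrative):

```python
import numpy as np

def reconstruct_slice(hologram, z, wavelength, pixel_pitch):
    """Reconstruct one depth slice of an in-line digital hologram
    using the angular spectrum method: FFT the hologram, multiply by
    a free-space propagation transfer function for distance z, and
    inverse FFT. Cost is dominated by the two FFTs over all pixels."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in the propagation phase term
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0  # suppress evanescent (non-propagating) components
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

Because every additional depth slice repeats this whole computation on the same hologram, the per-volume cost grows with both pixel count and slice count, which is what motivates distributing the slices across Grid nodes.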
Replay of digitally-recorded holograms using a computational grid
Since the calculations are independent, each plane within an in-line digital hologram of a particle field can be reconstructed by a separate computer. We investigate strategies to reproduce a complete sample volume as quickly and efficiently as possible using Grid computing. We used part of the EGEE Grid to reconstruct multiple sets of planes in parallel across a wide-area network, and collated the replayed images on a single Storage Element such that a subsequent particle tracking and analysis code might then be run. Although most of the sample volume is generated up to 20 times faster on a Grid, there are some stragglers which cause the reconstruction rate to slow, and a significant proportion of jobs are lost completely, leaving blocks missing from the sample volume. In the light of these experimental findings we propose some strategies for making Grid computing useful in the field of digital hologram reconstruction and analysis.
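The plane-independence noted above is what makes the problem embarrassingly parallel. A minimal local sketch of the farm-out-and-collate pattern, with a thread pool standing in for EGEE worker nodes and a placeholder per-plane computation (the real Grid job submission and Storage Element machinery is not shown, and all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def _reconstruct_plane(hologram, z):
    # Placeholder for the real per-plane numerical reconstruction;
    # on the Grid each such call would be submitted as a separate job.
    return z, np.abs(hologram).copy()

def reconstruct_volume(hologram, depths, workers=4):
    """Dispatch every depth plane independently and collate the
    results afterwards, as the replayed images were collated on a
    single Storage Element in the experiment described above."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(_reconstruct_plane, hologram, z)
                   for z in depths]
        results = dict(f.result() for f in futures)
    # Lost jobs would show up here as missing keys, i.e. missing
    # blocks in the sample volume.
    return [results[z] for z in depths]
```

The straggler and lost-job problems reported above correspond, in this sketch, to futures that return late or never, which is why the proposed strategies centre on resubmission and workflow management rather than on the reconstruction code itself.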
Grid computing for the numerical reconstruction of digital holograms
Digital holography has the potential to greatly extend holography's applications and move it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms while numerical reconstruction within a computer eliminates the need for chemical processing and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count and resolution leads to the possibilities of larger sample volumes and of higher spatial resolution sampling, enabling the practical use of digital off-axis holography.
However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction of a single depth slice takes significantly longer than the capture of a single hologram. Grid computing - a recent innovation in large-scale distributed processing - provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument.
In this paper we consider the computational needs of digital holography and discuss the deployment of numerical reconstruction software over an existing Grid testbed. The analysis of marine organisms is used as an exemplar for workflow and job execution of in-line digital holography.
Illumination system for the MICE tracker station assembly QA
Copyright © 2007 MICE. This document describes the design and preparation of the optical system used to illuminate the scintillating-fibre planes to be used in the MICE Tracker. This illumination test during tracker station assembly is part of the quality assurance (QA) scheme. The optical design uses a two-stage approach: first, cylindrical optics focus the round beam from the LED into a long, thin shape. A mechanical slit is placed here to select an evenly illuminated region, providing it with well-defined edges. The second stage is a set of relay optics which project an image of the slit aperture on to the scintillating-fibre plane. A useful consequence of using relay optics rather than a simple slit close to the fibre plane is that wear or accidental damage to the fibres is avoided when the illumination system is scanned across the plane.
A data extraction system for underwater particle holography
Pulsed laser holography is an extremely powerful technique for the study of particle fields as it allows instantaneous, non-invasive, high-resolution recording of substantial volumes. By replaying the real image one can obtain the size, shape, position and - if multiple exposures are made - velocity of every object in the recorded field. Manual analysis of large volumes containing thousands of particles is, however, an enormous and time-consuming task, with operator fatigue an unpredictable source of errors. Clearly the value of holographic measurements also depends crucially on the quality of the reconstructed image: not only will poor resolution degrade size and shape measurements, but aberrations such as coma and astigmatism can change the perceived centroid of a particle, affecting position and velocity measurements.
For large-scale applications of particle field holography, specifically the in situ recording of marine plankton with 'HoloCam', we have developed an automated data extraction system that can be readily switched between the in-line and off-axis geometries and provides optimised reconstruction from holograms recorded underwater. As a video camera is automatically stepped through the 200 by 200 by 1000 mm sample volume, image processing and object tracking routines locate and extract particle images for further classification by a separate software module.
Ambient humidity control for maximising replay intensity and resolution in aberration-compensated off-axis holograms of underwater objects
In hologrammetry it is usually more desirable to reconstruct the real image than the virtual image, since the latter must be viewed at a distance through the window of the holographic plate itself. In applications where the recorded scene was in water but with replay into air it is necessary to correct for the refractive index difference. This can be done by reconstructing the image with shorter wavelength illumination combined with a change in beam angle to satisfy the grating equation, but these changes mean that the Bragg condition may no longer be satisfied during replay, reducing the diffraction efficiency and making the reconstructed images difficult to see. Changing the replay beam angle to better satisfy the Bragg condition makes the images brighter, but also renders them unrecognisable by introducing severe optical aberrations. A possible solution is to alter the Bragg properties of the hologram. In particular, the emulsion thickness can be conveniently controlled by altering the humidity of the atmosphere surrounding the hologram without causing any long-term changes or damage to the holographic plate. The validity of using humidity change to tune the Bragg properties of emulsions during replay has been demonstrated by measuring the brightness and perceived resolution of a reconstructed real image from a hologram over a wide range of humidities. The results have been compared with a simple model based on the Flory-Huggins theory of polymer swelling.
Assessment of a Computational Grid for the Replay of Digitally-Recorded Holograms
Optimising replay intensity and resolution in aberration-compensated
In hologrammetry it is desirable to reconstruct the real image rather than the virtual image, as the latter must be viewed at a distance through the window of the holographic plate itself. When a scene is located in water but the image is replayed in air, it is necessary to correct for the refractive index difference by reconstructing the image with shorter wavelength illumination and changing the beam angle to satisfy the grating equation. However, this means that the Bragg condition may no longer be satisfied during replay, reducing the diffraction efficiency and decreasing the signal-to-noise ratio of the reconstructed images. Changing the replay beam angle to better satisfy the Bragg condition makes the images brighter but also renders them unusable by increasing the optical aberrations. Our solution is to alter the Bragg properties of the hologram by altering the humidity of the surrounding atmosphere. This approach has been experimentally demonstrated for Agfa 8E56HD emulsions by measuring the brightness and resolution of a reconstructed real image from an off-axis hologram over a humidity range from 6 to 93 percent. The emulsion swelling and its effect on the Bragg properties of the hologram were modelled using the Flory-Huggins theory of polymer swelling.
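The wavelength-and-angle correction described in these two abstracts follows from the first-order grating equation: the fringe spacing is fixed at recording, so a shorter replay wavelength diffracts at a different angle. A simplified thin-grating sketch of that relation (deliberately omitting the thick-emulsion Bragg effects that the humidity tuning addresses; the function name and example numbers are illustrative, not taken from the papers):

```python
import math

def replay_angle(theta_record_deg, lam_record, lam_replay):
    """First-order grating equation for a plane grating: the fringe
    spacing set at recording must diffract the replay wavelength, so
    sin(theta_replay) = (lam_replay / lam_record) * sin(theta_record).
    Thin-grating approximation only; volume (Bragg) selectivity is
    not modelled here."""
    s = (lam_replay / lam_record) * math.sin(math.radians(theta_record_deg))
    if abs(s) > 1.0:
        raise ValueError("no propagating replay order at this wavelength")
    return math.degrees(math.asin(s))
```

As an illustration, a hologram recorded with a 30 degree beam at 694 nm and replayed at roughly 694/1.33 ≈ 522 nm (compensating for water's refractive index) would need its replay beam at about 22 degrees; it is this shifted geometry that detunes the Bragg condition in a thick emulsion, which the humidity-controlled swelling then restores.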
Performance of R-GMA for monitoring grid jobs for CMS data production
High energy physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through grid computing. Furthermore, the production of large quantities of Monte Carlo simulated data provides an ideal test bed for grid technologies and will drive their development. One important challenge when using the grid for data analysis is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the grid monitoring architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real grid resources. We present the latest results on this system running on the LCG 2 grid test bed using the LCG 2.6.0 middleware release. For a sustained load equivalent to 7 generations of 1000 simultaneous jobs, R-GMA was able to transfer all published messages and store them in a database for 98% of the individual jobs. The failures experienced were at the remote sites, rather than at the archiver's MON box as had been expected.
Scalability tests of R-GMA-based grid job monitoring system for CMS Monte Carlo data production
Copyright © 2004 IEEE. High-energy physics experiments, such as the compact muon solenoid (CMS) at the large hadron collider (LHC), have large-scale data processing computing requirements. The grid has been chosen as the solution. One important challenge when using the grid for large-scale data processing is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. The relational grid monitoring architecture (R-GMA) is a monitoring and information management service for distributed resources based on the GMA of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC computing grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job submission rates supported by successive releases of R-GMA improved significantly, approaching that expected in full-scale production.