
    Avalanche photodiodes and vacuum phototriodes for the electromagnetic calorimeter of the CMS experiment at the large hadron collider

    The homogeneous lead tungstate electromagnetic calorimeter for the Compact Muon Solenoid detector at the Large Hadron Collider operates in a challenging radiation environment. The central region of the calorimeter uses large-area avalanche photodiodes to detect the fast blue-violet scintillation light from the crystals. The high hadron fluence in the forward region precludes the use of these photodiodes, and vacuum phototriodes are used in this region instead. The constructional complexity of the calorimeter, which comprises 75848 individual crystals, together with the activation of material, makes repair during the lifetime of the detector virtually impossible. We describe here the key features and performance of the photodetectors and the quality assurance procedures that were used to ensure that the proportion of photodetectors that fail over the lifetime of CMS will be limited to a fraction of a percent.

    The production of radiation tolerant vacuum phototriodes and their HV filters for the compact muon solenoid endcap electromagnetic calorimeter

    Particle detectors which will operate at the Large Hadron Collider face unprecedented challenges, both in the number of active detector elements and in operating without maintenance in a high-radiation environment for many years. In the Compact Muon Solenoid (CMS) detector, the scintillating crystal electromagnetic calorimeter uses vacuum photodetectors in the endcap, where the lifetime neutron and hadron fluence is too high for the silicon avalanche photodiodes used in the barrel. Over 15000 radiation-tolerant vacuum phototriodes (VPTs) have now been produced by industry for the endcap calorimeter. The VPTs have to operate in an environment which has both a significant lifetime dose (up to 50 kGy) from electrons and gamma rays and a high neutron fluence (up to nearly 10^15 n cm^−2 for E > 100 keV). This paper discusses the steps taken during both the development and production of the VPTs to ensure that the response to the scintillation light from the lead tungstate scintillator will not be significantly degraded during the operational lifetime of the experiment. Data from the quality assurance procedures and from radiation-induced degradation of complete VPT devices are presented. Other components of the endcap calorimeter are also exposed to a similarly intense radiation field. The quality assurance procedure used to select the passive components (resistors and capacitors) in the high-voltage filter cards is also described.

    The reconstruction of digital holograms on a computational grid

    Digital holography is greatly extending the range of holography's applications and moving it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical development and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count leads to the possibility of larger sample volumes, while smaller-area pixels enable the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction of a single depth slice takes significantly longer than the capture of a single hologram. Grid computing, a recent innovation in large-scale distributed processing, provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. We describe here the reconstruction of digital holograms on a trans-national computational Grid with over 10 000 nodes available at over 100 sites. A simplistic scheme of deployment was found to provide no computational advantage over a single powerful workstation. Based on these experiences, we suggest an improved strategy for workflow and job execution for the replay of digital holograms on a Grid.
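    To make the computational load concrete, the following is a minimal sketch of one common reconstruction approach, the angular spectrum method; the abstract does not specify the propagation kernel used, so this function is an illustration rather than the authors' implementation.

```python
import numpy as np

def reconstruct_slice(hologram, wavelength, pixel_pitch, z):
    """Numerically refocus an in-line hologram to a single depth slice z
    using the angular spectrum method (one common kernel choice; the
    paper does not name the kernel it actually used)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)      # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Free-space transfer function; evanescent components are suppressed.
    H = np.where(arg > 0.0,
                 np.exp(2j * np.pi * (z / wavelength) * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

    Each slice costs two full-frame FFTs, so the per-plane effort grows as N log N in the pixel count N, which is why larger sensors push replay times well past capture times.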

    Replay of digitally-recorded holograms using a computational grid

    Since the calculations are independent, each plane within an in-line digital hologram of a particle field can be reconstructed by a separate computer. We investigate strategies to reproduce a complete sample volume as quickly and efficiently as possible using Grid computing. We used part of the EGEE Grid to reconstruct multiple sets of planes in parallel across a wide-area network, and collated the replayed images on a single Storage Element so that a subsequent particle tracking and analysis code could then be run. Although most of the sample volume is generated up to 20 times faster on a Grid, some straggler jobs slow the overall reconstruction rate, and a significant proportion of jobs are lost completely, leaving blocks missing from the sample volume. In the light of these experimental findings, we propose some strategies for making Grid computing useful in the field of digital hologram reconstruction and analysis.
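    Because the planes are independent, the dispatch-and-collate pattern can be sketched locally. The Python stand-in below (a process pool in place of EGEE job submission, with a retry budget for lost jobs) illustrates the workflow only; it is not the Grid middleware actually used.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def replay_volume(reconstruct, hologram, depths, retries=3):
    """Reconstruct every depth plane in parallel and collate the results.
    Each plane is an independent job; failed or lost jobs are resubmitted,
    up to `retries` attempts in total, so fewer blocks end up missing from
    the sample volume. `reconstruct` must be a picklable top-level
    function taking (hologram, z) and returning the replayed plane."""
    volume = {}
    budget = {z: retries for z in depths}
    while budget:
        with ProcessPoolExecutor() as pool:
            futures = {pool.submit(reconstruct, hologram, z): z for z in budget}
            for fut in as_completed(futures):
                z = futures[fut]
                try:
                    volume[z] = fut.result()
                    budget.pop(z)
                except Exception:
                    budget[z] -= 1
                    if budget[z] == 0:
                        budget.pop(z)   # out of attempts: block stays missing
    return volume
```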

    Finite aperture Fraunhofer holograms of two co-planar discs

    In this paper we present a theoretical model describing the real images replayed from finite-aperture Fraunhofer holograms of two identical co-planar objects. We have solved numerically the resulting image equations for the case of two circular disc objects, and compare our predictions with experimental measurements from in-line Fraunhofer holograms recorded on silver-halide emulsions. Three measurement criteria for calculating the disc diameters and separation are described, and their errors discussed. It is found experimentally that a criterion based on average intensity gives the smallest errors, owing to its insensitivity to the effects of coherent noise.
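    For context, the single-disc result that the two-disc model generalises can be written down explicitly. The expression below is the standard far-field in-line hologram intensity for one opaque disc, quoted from the general Fraunhofer-holography literature rather than from this paper; the two-disc fringe factor then follows from the Fourier shift theorem.

```latex
% Standard in-line Fraunhofer hologram of one opaque disc of diameter d,
% recorded at distance z with wavelength \lambda under the far-field
% condition z \gg d^2/\lambda; r is the radial coordinate in the hologram
% plane (higher-order terms in d^2/\lambda z are neglected).
I(r) \;\approx\; 1
  \;-\; \frac{\pi d^{2}}{2\lambda z}
        \left[ \frac{2 J_{1}\!\left(\pi d r / \lambda z\right)}
                    {\pi d r / \lambda z} \right]
        \sin\!\left( \frac{\pi r^{2}}{\lambda z} \right)
% For two identical co-planar discs centred at x = \pm s/2, the shift
% theorem multiplies the bracketed spectrum by 2\cos(\pi s x / \lambda z),
% so the separation s appears as a cosine modulation of the fringes.
```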

    Grid computing for the numerical reconstruction of digital holograms

    Digital holography has the potential to greatly extend holography's applications and move it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical processing and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count and resolution leads to the possibilities of larger sample volumes and of higher spatial-resolution sampling, enabling the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction of a single depth slice takes significantly longer than the capture of a single hologram. Grid computing, a recent innovation in large-scale distributed processing, provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. In this paper we consider the computational needs of digital holography and discuss the deployment of numerical reconstruction software over an existing Grid testbed. The analysis of marine organisms is used as an exemplar for the workflow and job execution of in-line digital holography.
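    A rough cost model makes the scaling argument concrete; the flop count and sustained throughput below are illustrative assumptions, not figures from the paper.

```python
import math

def replay_seconds(nx, ny, n_slices, flops_per_second=1e10):
    """Back-of-envelope replay cost: each depth slice needs roughly two
    full-frame 2-D FFTs at about 5*N*log2(N) flops each, where N = nx*ny
    is the pixel count. The throughput figure is an assumed sustained
    rate for a single workstation core, for illustration only."""
    n = nx * ny
    flops = n_slices * 2 * 5 * n * math.log2(n)
    return flops / flops_per_second

# Example: a 4096 x 4096 sensor replayed at 1000 depth slices needs
# roughly 4e12 flops, i.e. several minutes on one core but only seconds
# when the independent slices are spread across hundreds of Grid nodes.
print(replay_seconds(4096, 4096, 1000))
```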

    Timing performance of a vacuum phototriode

    The timing performance of a vacuum phototriode (VPT) has recently been simulated using SIMION 3D software [1] to develop an electron-optic model [2]. In this work, a more precise treatment of the approximation is detailed and a comparison is made with corresponding experimental data. The origin of the signal's features is investigated and interpreted, affording a deeper understanding of the operation and timing potential of these devices.
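    The timescales such electron-optic models probe can be estimated from first principles; the gap and bias voltage below are illustrative assumptions, not the geometry of the device studied.

```python
import math

M_E = 9.109e-31   # electron mass (kg)
Q_E = 1.602e-19   # elementary charge (C)

def transit_time(gap_m, volts):
    """Electron transit time across a planar gap, assuming a uniform
    field and a start from rest: t = d * sqrt(2*m_e / (e*V)). A crude
    estimate; the real VPT fields and trajectories are what SIMION
    resolves."""
    return gap_m * math.sqrt(2.0 * M_E / (Q_E * volts))

# e.g. an assumed ~3 mm cathode-anode gap at 1000 V gives ~0.3 ns,
# placing such signals on the sub-nanosecond to nanosecond scale.
print(transit_time(3e-3, 1000.0))
```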

    Tree Contraction, Connected Components, Minimum Spanning Trees: a GPU Path to Vertex Fitting

    Standard parallel computing operations are considered in the context of algorithms for solving 3D graph problems which have applications, e.g., in vertex finding in HEP. Exploiting GPUs for tree-accumulation and graph algorithms is challenging: GPUs offer extreme computational power and high memory-access bandwidth, combined with a model of fine-grained parallelism that is perhaps ill-suited to the irregular structure of linked representations of graph data. Achieving data-race-free computation may demand serialization through atomic transactions, inevitably producing poor parallel performance. A Minimum Spanning Tree algorithm for GPUs is presented, its implementation discussed, and its efficiency evaluated on GPU and multicore architectures.
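    As a point of reference, Borůvka's algorithm is the usual basis for parallel MST implementations, because in each round every component selects its cheapest outgoing edge independently. The serial sketch below illustrates that structure; it is an assumption about the general approach, since the abstract does not name the algorithm used.

```python
def boruvka_mst(n_vertices, edges):
    """Borůvka's MST on an edge list [(weight, u, v), ...]. Per round,
    every component picks its cheapest outgoing edge, then components
    merge; the per-component selections are independent, which is what
    maps well onto fine-grained GPU parallelism."""
    parent = list(range(n_vertices))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, merged = [], True
    while merged:
        merged = False
        cheapest = [None] * n_vertices    # cheapest outgoing edge per root
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue                  # edge is internal to a component
            for root in (ru, rv):
                if cheapest[root] is None or w < cheapest[root][0]:
                    cheapest[root] = (w, u, v)
        for choice in cheapest:
            if choice is None:
                continue
            w, u, v = choice
            ru, rv = find(u), find(v)
            if ru != rv:                  # may have merged earlier this round
                parent[ru] = rv
                mst.append((u, v, w))
                merged = True
    return mst
```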

    Comparison of two-dimensional binned data distributions using the energy test

    For the purposes of monitoring HEP experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger experiments now starting up, there is a need to automate this task, since the volume of comparisons would overwhelm human operators. However, the two-dimensional histogram comparison tools currently available in ROOT have noticeable shortcomings. We present a new comparison test for 2D histograms, based on the Energy Test of Aslan and Zech, which provides more decisive discrimination between histograms of data coming from different distributions.
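    To make the statistic concrete: the energy test treats the normalised difference between the two histograms as a set of charges and sums their pairwise interactions under a slowly falling kernel. The sketch below uses the logarithmic kernel R(r) = -ln(r) with a short-distance cutoff; the cutoff and the bin-centre approximation are choices of this illustration, not details given in the abstract.

```python
import numpy as np

def energy_test(h1, h2, eps=1e-3):
    """Energy-test statistic between two equally binned 2-D histograms,
    after Aslan and Zech. Bin contents act as charges at the bin centres
    interacting through R(r) = -ln(r); `eps` regularises R at zero
    distance. Larger values indicate a stronger disagreement between
    the two underlying distributions."""
    d = (h1 / h1.sum() - h2 / h2.sum()).ravel()   # signed charge per bin
    iy, ix = np.indices(h1.shape)
    pts = np.stack([ix.ravel(), iy.ravel()], axis=1).astype(float)
    # pairwise distances between bin centres, in units of the bin width
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return 0.5 * d @ (-np.log(np.maximum(r, eps))) @ d
```

    The significance of an observed value would typically be calibrated against a permutation distribution of the combined data.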

    Dynamic holographic interferometry using a Bi12SiO20 photorefractive crystal and monomode optical fibres

    Dynamic holographic interferometry using polarization-preserving optical fibres as light guides and incorporating a photorefractive Bi12SiO20 (BSO) crystal as the recording medium is described. An experimental investigation of the recording of time-average holograms through the diffusion process (employing anisotropic self-diffraction) and the drift process (application of d.c. and a.c. electric fields across the crystal) is also described. The holographic interferometer was optimised to produce holograms with a high diffraction efficiency and a high signal-to-noise ratio. Results are presented on optimising parameters such as the writing-beam angle and the writing-beam intensity ratio. The advantages that can be gained by deploying this holographic interferometer in an industrial environment, where the laser light is guided to the location of the object by means of monomode fibres and images are stored within a photorefractive crystal, are described. The holographic interferometer is capable of producing time-average and double-exposure interferograms of vibrating and deformed objects which can be displayed in real time.
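    The fringe function underlying such time-average interferograms is the standard Powell-Stetson result; the form below assumes illumination and observation along the surface normal and is quoted from the general holography literature, not derived in the paper.

```latex
% Time-average hologram of a surface vibrating sinusoidally with local
% amplitude a(x, y), illuminated and viewed along the normal: the
% reconstructed intensity is modulated by the squared zero-order Bessel
% function, whose zeros trace contours of equal vibration amplitude.
I(x, y) \;\propto\; J_{0}^{2}\!\left( \frac{4\pi\, a(x, y)}{\lambda} \right)
```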