    Application of graphics processing units to search pipelines for gravitational waves from coalescing binaries of compact objects

    We report a novel application of a graphics processing unit (GPU) to accelerate the search pipelines for gravitational waves from coalescing binaries of compact objects. A total speed-up of 16-fold has been achieved with an NVIDIA GeForce 8800 Ultra GPU card compared with one core of a 2.5 GHz Intel Q9300 central processing unit (CPU). We show that substantial improvements are possible, and discuss the reduction in CPU count required for the detection of inspiral sources afforded by the use of GPUs.
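
    At the heart of such pipelines is matched filtering: correlating the detector strain with template waveforms, an operation dominated by FFTs and therefore well suited to GPUs. The following is a minimal, illustrative Python/NumPy sketch of a frequency-domain matched filter; the function name and conventions are our own assumptions, not the paper's CUDA code, and the FFTs could be moved to a GPU by swapping NumPy for a library such as CuPy.

        import numpy as np

        def matched_filter_snr(data, template, psd, dt):
            """Return the matched-filter SNR time series for one template.

            data, template -- real time series of equal length n
            psd            -- one-sided noise PSD sampled at the rfft frequencies
            dt             -- sample spacing in seconds
            """
            n = len(data)
            df = 1.0 / (n * dt)
            d_f = np.fft.rfft(data) * dt       # data, frequency domain
            h_f = np.fft.rfft(template) * dt   # template, frequency domain

            # Noise-weighted correlation of the data against the template,
            # evaluated at every possible arrival time at once.
            z = 4.0 * np.fft.irfft(d_f * np.conj(h_f) / psd) * df * n

            # Template normalization: sigma^2 = 4 * sum(|h|^2 / psd) * df
            sigma = np.sqrt(4.0 * np.sum(np.abs(h_f) ** 2 / psd) * df)
            return z / sigma

    A search pipeline evaluates this for a large bank of templates, which is why the per-template FFT cost dominates and a 16-fold speed-up translates directly into a smaller CPU count.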

    A New Strategy for Deep Wide-Field High Resolution Optical Imaging

    We propose a new strategy for obtaining enhanced-resolution (FWHM = 0.12 arcsec) deep optical images over a wide field of view. As is well known, this type of image quality can be obtained in principle simply by fast guiding on a small (D = 1.5 m) telescope at a good site, but only for target objects which lie within a limited angular distance of a suitably bright guide star. For high-altitude turbulence this 'isokinetic angle' is approximately 1 arcminute. With a 1 degree field, say, one would need to track and correct the motions of thousands of isokinetic patches, yet there are typically too few sufficiently bright guide stars to provide the necessary guiding information. Our proposed solution to these problems has two novel features. The first is to use orthogonal transfer charge-coupled device (OTCCD) technology to implement a wide-field 'rubber focal plane' detector composed of an array of cells which can be guided independently. The second is to combine the measured motions of a set of guide stars, made with an array of telescopes, to provide the extra information needed to fully determine the deflection field. We discuss the performance, feasibility and design constraints on a system which would provide the collecting area of a single 9 m telescope, a 1 degree square field and 0.12 arcsec FWHM image quality. Comment: 46 pages, 22 figures, submitted to PASP; a version with higher resolution images and other supplementary material can be found at http://www.ifa.hawaii.edu/~kaiser/wfhr
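
    The second feature amounts to fitting a smooth deflection field to a sparse set of guide-star motions and then evaluating that field at the centre of every OTCCD cell. As a rough illustration (our own construction with hypothetical names, not the authors' algorithm, which must additionally separate turbulent layers at different altitudes using the telescope array), a low-order polynomial field can be fit by least squares:

        import numpy as np

        def poly_design(x, y, order=2):
            """Monomial basis x**i * y**j with i + j <= order."""
            cols = [x**i * y**j
                    for i in range(order + 1)
                    for j in range(order + 1 - i)]
            return np.stack(cols, axis=-1)

        def fit_deflection_field(gx, gy, dx, dy, order=2):
            """Fit guide-star shifts (dx, dy) measured at positions (gx, gy);
            return a function giving the predicted shift anywhere in the field."""
            A = poly_design(gx, gy, order)
            cx, *_ = np.linalg.lstsq(A, dx, rcond=None)
            cy, *_ = np.linalg.lstsq(A, dy, rcond=None)
            def predict(px, py):
                B = poly_design(px, py, order)
                return B @ cx, B @ cy
            return predict

    Each exposure, the predicted shift at a cell centre would drive that cell's charge-transfer guiding independently of its neighbours.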

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The demand expected on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision

    Processing optimization with parallel computing for the J-PET tomography scanner

    The Jagiellonian-PET (J-PET) collaboration is developing a prototype TOF-PET detector based on long polymer scintillators. This novel approach exploits the excellent timing properties of plastic scintillators, which permit very precise time measurements. The very fast, FPGA-based front-end electronics and the data acquisition system, as well as low- and high-level reconstruction algorithms, were developed specifically for use with the J-PET scanner. TOF-PET data processing and reconstruction are time- and resource-demanding operations, especially in the case of a large-acceptance detector working in triggerless data-acquisition mode. In this article, we discuss the parallel computing methods applied to optimize the data processing for the J-PET detector. We begin with general concepts of parallel computing and then discuss several applications of those techniques in the J-PET data processing. Comment: 8 pages
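
    A natural pattern for triggerless data is to split the signal stream into independent time windows and reconstruct them in parallel. A toy Python sketch of this data-parallel approach (the chunking, the reconstruction step, and all names here are illustrative assumptions, not J-PET code):

        from multiprocessing import Pool

        import numpy as np

        def process_chunk(times):
            """Toy low-level reconstruction: pair consecutive signal
            times and return a time difference for each pair."""
            t = np.sort(times)
            return t[1::2] - t[0::2]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Stand-in for a triggerless data stream split into windows.
            chunks = [rng.uniform(0, 1e6, 100_000) for _ in range(8)]
            with Pool() as pool:
                results = pool.map(process_chunk, chunks)  # one chunk per worker
            print(sum(len(r) for r in results), "pairs reconstructed")

    Because the windows are independent, throughput scales with the number of workers until I/O becomes the bottleneck.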

    Very-high energy gamma-ray astronomy: A 23-year success story in high-energy astroparticle physics

    Very-high energy (VHE) gamma quanta contribute only a minuscule fraction - below one per million - to the flux of cosmic rays. Nevertheless, being neutral particles, they are currently the best "messengers" of processes from the relativistic/ultra-relativistic Universe, because they can be extrapolated back to their origin. The window of VHE gamma rays was opened only in 1989 by the Whipple collaboration, reporting the observation of TeV gamma rays from the Crab nebula. After a slow start, this new field of research is now rapidly expanding with the discovery of more than 150 VHE gamma-ray emitting sources. Progress is intimately related to the steady improvement of detectors and rapidly increasing computing power. We give an overview of the early attempts before and around 1989 and the progress after the pioneering work of the Whipple collaboration. The main focus of this article is on the development of experimental techniques for Earth-bound gamma-ray detectors; consequently, more emphasis is given to those experiments that made an initial breakthrough rather than to the successors, which often had, and have, a scientific output similar to (sometimes even higher than) that of the pioneering experiments. The considered energy threshold is about 30 GeV. At lower energies, observations can presently only be performed with balloon- or satellite-borne detectors. Irrespective of the stormy experimental progress, this could not have been called a success story without a broad scientific output. Therefore we conclude this article with a summary of the scientific rationales and main results achieved over the last two decades. Comment: 45 pages, 38 figures; review prepared for EPJ-H special issue "Cosmic rays, gamma rays and neutrinos: A survey of 100 years of research"

    Weak nonlinearities: A new route to optical quantum computation

    Quantum information processing (QIP) offers the promise of being able to do things that we cannot do with conventional technology. Here we present a new route for distributed optical QIP, based on generalized quantum non-demolition measurements, providing a unified approach to quantum communication and computing. Interactions between photons are generated using weak nonlinearities and intense laser fields; the use of such fields provides for robust distribution of quantum information. Our approach requires only a practical set of resources, and it uses them very efficiently. Thus it promises to be extremely useful for the first quantum technologies, based on scarce resources. Furthermore, in the longer term this approach provides both options and scalability for efficient many-qubit QIP. Comment: 7 pages, 4 figures
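
    The reason intense fields help can be seen in a two-line calculation. A weak cross-Kerr interaction shifts the phase of a probe coherent state |alpha> by a small angle theta per signal photon, and the overlap |<alpha|alpha e^{i theta}>| = exp(-|alpha|^2 (1 - cos theta)) controls how distinguishable the outcomes are. A numeric sketch of this (our own illustration, not the paper's analysis):

        import numpy as np

        def coherent_overlap(alpha, theta):
            """|<alpha | alpha e^{i*theta}>| for real probe amplitude alpha."""
            return np.exp(-alpha**2 * (1.0 - np.cos(theta)))

        theta = 1e-2  # weak nonlinearity: tiny phase shift per photon
        for alpha in (10.0, 100.0, 1000.0):
            print(f"alpha={alpha:7.1f}  overlap={coherent_overlap(alpha, theta):.3e}")

    Even with theta of order 10^-2, a probe with alpha ~ 10^3 makes the shifted and unshifted probe states nearly orthogonal (the overlap falls off as exp(-alpha^2 theta^2 / 2) for small theta), so measuring the probe yields an effectively projective non-demolition measurement of the signal.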

    Accelerating Monte Carlo simulations with an NVIDIA® graphics processor

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general-purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectories of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
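
    The algorithm is embarrassingly parallel: every photon's random walk is independent, so one GPU thread per photon is the natural mapping. A simplified CPU-side sketch of the core loop (isotropic scattering only, with illustrative names; the paper's implementation is in CUDA and models full anisotropic transport):

        import numpy as np

        def mc_photon_depths(n_photons, mu_s=10.0, mu_a=0.1,
                             max_steps=1000, seed=0):
            """Toy Monte Carlo in an infinite turbid medium: returns the
            final depth of each photon. mu_s, mu_a in 1/cm."""
            rng = np.random.default_rng(seed)
            mu_t = mu_s + mu_a
            pos = np.zeros((n_photons, 3))
            alive = np.ones(n_photons, dtype=bool)
            for _ in range(max_steps):
                n = int(alive.sum())
                if n == 0:
                    break
                step = rng.exponential(1.0 / mu_t, n)    # free path lengths
                phi = rng.uniform(0.0, 2.0 * np.pi, n)   # isotropic directions
                cos_t = rng.uniform(-1.0, 1.0, n)
                sin_t = np.sqrt(1.0 - cos_t**2)
                d = np.stack([sin_t * np.cos(phi),
                              sin_t * np.sin(phi), cos_t], axis=1)
                pos[alive] += step[:, None] * d
                # Absorb with probability mu_a/mu_t at each interaction.
                idx = np.flatnonzero(alive)
                alive[idx[rng.random(n) < mu_a / mu_t]] = False
            return pos[:, 2]

    On a GPU the same loop body becomes a per-thread kernel, with each thread following one photon, which is how throughputs of order 10^8 scattering events per second become reachable on commodity hardware.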

    The ATLAS Trigger System Commissioning and Performance

    The ATLAS trigger has been used very successfully to collect collision data during the 2009 and 2010 LHC running at centre-of-mass energies of 900 GeV, 2.36 TeV, and 7 TeV. This paper presents the ongoing work to commission the ATLAS trigger with proton collisions, including an overview of the performance of the trigger based on extensive online running. We describe how the trigger has evolved with increasing LHC luminosity and give a brief overview of plans for forthcoming LHC running. Comment: Poster at Hadron Collider Physics, Aug 2010, Toronto, Canada; 4 pages, 6 figures

    advligorts: The Advanced LIGO Real-Time Digital Control and Data Acquisition System

    The Advanced LIGO detectors are sophisticated opto-mechanical devices. At the core of their operation is feedback control. The Advanced LIGO project developed a custom digital control and data acquisition system to handle the unique needs of this new breed of astronomical detector. The advligorts is the software component of this system. This highly modular and extensible system has enabled the unprecedented performance of the LIGO instruments and has been a vital component in the direct detection of gravitational waves.
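
    The essence of such a system is a deterministic loop running at a fixed rate: read the sensors, run the error signals through chains of digital IIR filters, and write the actuators, every cycle, without fail. A schematic Python sketch of the filter building block (illustrative only; advligorts itself is a C-based real-time system, and none of these names are its API):

        import numpy as np

        class Biquad:
            """Direct Form II second-order IIR section, the standard
            building block of digital control filters."""
            def __init__(self, b0, b1, b2, a1, a2):
                self.b = (b0, b1, b2)
                self.a = (a1, a2)
                self.w1 = self.w2 = 0.0

            def step(self, x):
                w0 = x - self.a[0] * self.w1 - self.a[1] * self.w2
                y = self.b[0] * w0 + self.b[1] * self.w1 + self.b[2] * self.w2
                self.w2, self.w1 = self.w1, w0
                return y

        # Stand-in servo: filter a sensed error signal sample by sample,
        # as a fixed-rate control loop would on each cycle.
        servo = Biquad(0.1, 0.0, 0.0, -0.9, 0.0)       # one-pole low-pass-like
        error = np.sin(np.linspace(0.0, 10.0, 1000))   # fake sensor data
        correction = [servo.step(e) for e in error]

    Cascading such sections yields the high-order control filters a real servo needs, while keeping each cycle's arithmetic small and bounded, which is what makes hard real-time deadlines achievable.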