
    End to end numerical simulations of the MAORY multiconjugate adaptive optics system

    MAORY is the adaptive optics module of the E-ELT that will feed the MICADO imaging camera through a gravity-invariant exit port. MAORY is foreseen to implement MCAO correction through three high-order deformable mirrors driven by the reference signals of six Laser Guide Stars (LGSs) feeding as many Shack-Hartmann wavefront sensors. A three Natural Guide Star (NGS) system will provide the low-order correction. We have developed a code for the end-to-end simulation of the MAORY adaptive optics (AO) system in order to obtain high-fidelity modeling of the system performance. It is based on the IDL language and makes extensive use of GPUs. Here we present the architecture of the simulation tool and its achieved and expected performance.
    Comment: 8 pages, 4 figures, presented at SPIE Astronomical Telescopes + Instrumentation 2014 in Montréal, Quebec, Canada, with number 9148-25
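
    To illustrate the kind of loop such an end-to-end simulator iterates, the sketch below shows a heavily idealized single-mirror closed-loop AO step in Python/NumPy: measure the residual wavefront, reconstruct modal coefficients with a least-squares reconstructor, and update an integrator-controlled deformable-mirror command. It is only a generic illustration, not MAORY's multi-conjugate pipeline (which is written in IDL and GPU-accelerated); the mode basis, array sizes, and loop gain are illustrative assumptions.

```python
# Minimal toy closed-loop AO iteration; all quantities are illustrative
# stand-ins, not MAORY's actual wavefront sensing or control scheme.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_modes = 32 * 32, 20                    # pupil samples, corrected modes
modes = rng.standard_normal((n_pix, n_modes))   # stand-in for DM mode shapes
reconstructor = np.linalg.pinv(modes)           # least-squares modal reconstructor

turbulence = rng.standard_normal(n_pix)         # static stand-in "phase screen"
dm_cmd = np.zeros(n_modes)
gain = 0.5                                      # integrator loop gain (assumed)

for step in range(20):
    residual = turbulence - modes @ dm_cmd      # wavefront after DM correction
    modal_err = reconstructor @ residual        # idealized sensing + reconstruction
    if step % 5 == 0:
        print(f"step {step:2d}  modal residual = {np.linalg.norm(modal_err):.2e}")
    dm_cmd += gain * modal_err                  # closed-loop integrator update
```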

    High Performance Direct Gravitational N-body Simulations on Graphics Processing Units

    We present the results of gravitational direct N-body simulations using the commercial graphics processing units (GPUs) NVIDIA Quadro FX1400 and GeForce 8800GTX, and compare the results with the GRAPE-6Af special-purpose hardware. The force evaluation of the N-body problem was implemented in Cg, using the GPU directly to speed up the calculations. The integration of the equations of motion was implemented in C on the host computer, using a 4th-order predictor-corrector Hermite integrator with block time steps. We find that for a large number of particles (N ≳ 10^4) modern graphics processing units offer an attractive low-cost alternative to GRAPE special-purpose hardware. A modern GPU continues to give a relatively flat scaling with the number of particles, comparable to that of the GRAPE. Using the same time-step criterion, the total energy of the N-body system was conserved to better than one part in 10^6 on the GPU, which is only about an order of magnitude worse than obtained with GRAPE. For N ≳ 10^6 the GeForce 8800GTX was about 20 times faster than the host computer. Though still about an order of magnitude slower than GRAPE, modern GPUs outperform GRAPE in their low cost, long mean time between failures, and much larger onboard memory; the GRAPE-6Af holds at most 256k particles, whereas the GeForce 8800GTX can hold 9 million particles in memory.
    Comment: Submitted to New Astronomy
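
    The 4th-order Hermite predictor-corrector scheme mentioned above can be sketched compactly. The toy NumPy version below evaluates pairwise accelerations and jerks by direct summation on the CPU and uses a single shared time step, rather than the paper's Cg GPU force kernel and individual block time steps; the function names, particle numbers, softening, and step size are assumptions made for the sketch.

```python
# Toy direct-summation 4th-order Hermite integrator (shared time step,
# CPU/NumPy); illustrative only, not the paper's GPU implementation.
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-4):
    """Direct-summation accelerations and jerks with Plummer softening."""
    dr = pos[None, :, :] - pos[:, None, :]          # dr[i, j] = r_j - r_i
    dv = vel[None, :, :] - vel[:, None, :]
    r2 = (dr ** 2).sum(-1) + eps2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # remove self-interaction
    rv = (dr * dv).sum(-1)
    acc = (mass[None, :, None] * dr * inv_r3[:, :, None]).sum(axis=1)
    jerk = (mass[None, :, None] * (dv * inv_r3[:, :, None]
            - 3.0 * rv[:, :, None] * dr * (inv_r3 / r2)[:, :, None])).sum(axis=1)
    return acc, jerk

def energy(pos, vel, mass, eps2=1e-4):
    """Total (kinetic + softened potential) energy, used as an accuracy check."""
    dr = pos[None, :, :] - pos[:, None, :]
    r = np.sqrt((dr ** 2).sum(-1) + eps2)
    np.fill_diagonal(r, np.inf)
    pot = -0.5 * (mass[:, None] * mass[None, :] / r).sum()
    kin = 0.5 * (mass * (vel ** 2).sum(-1)).sum()
    return kin + pot

def hermite_step(pos, vel, mass, dt):
    """One shared-time-step 4th-order Hermite predictor-corrector update."""
    a0, j0 = acc_jerk(pos, vel, mass)
    # Predictor: Taylor expansion including the jerk.
    pos_p = pos + vel * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vel_p = vel + a0 * dt + j0 * dt**2 / 2
    # Re-evaluate force and jerk at the predicted state.
    a1, j1 = acc_jerk(pos_p, vel_p, mass)
    # Corrector (standard Hermite form).
    vel_c = vel + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    pos_c = pos + (vel + vel_c) * dt / 2 + (a0 - a1) * dt**2 / 12
    return pos_c, vel_c

# Tiny usage example: an equal-mass toy system; the relative change in total
# energy gives a rough check on the integration accuracy.
rng = np.random.default_rng(1)
N = 64
mass = np.full(N, 1.0 / N)
pos = rng.standard_normal((N, 3))
vel = 0.1 * rng.standard_normal((N, 3))
E0 = energy(pos, vel, mass)
for _ in range(1000):
    pos, vel = hermite_step(pos, vel, mass, dt=1e-3)
print("relative energy error:", abs(energy(pos, vel, mass) - E0) / abs(E0))
```

    The relative energy error printed at the end plays the same diagnostic role as the energy-conservation figure quoted in the abstract, though the toy shared-step scheme is not directly comparable to the block-time-step implementation.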

    A Graphical Model to Diagnose Product Defects with Partially Shuffled Equipment Data

    The diagnosis of product defects is an important task in manufacturing, and machine learning-based approaches have attracted interest from both industry and academia. A high-quality dataset is necessary to develop a machine learning model, but the manufacturing industry faces several data-collection issues, including partially shuffled data, which arises when a product ID is not perfectly inferred and which yields an unstable machine learning model. This paper introduces latent variables to formulate a supervised learning model that addresses the problem of partially shuffled data. The experimental results show that our graphical model deals with the shuffling of product order and can detect a defective product far more effectively than a model that ignores shuffling.
    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2019R1A2C1088255).
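
    As a hedged illustration of the latent-variable idea (not the paper's actual graphical model), the sketch below treats the within-block correspondence between equipment-feature rows and product labels as a latent permutation and marginalizes over it with EM while fitting a simple class-conditional Gaussian classifier. The function name, block size, Gaussian class model, and all hyperparameters are assumptions made for the example.

```python
# Toy EM treatment of partially shuffled labels: the label order inside each
# consecutive block of rows is a latent permutation that is marginalized out.
import numpy as np
from itertools import permutations

def em_shuffled(X, y, block, n_classes=2, n_iter=30):
    """X: (N, d) feature rows; y: (N,) labels whose correspondence to the
    rows may be scrambled inside each consecutive group of `block` rows."""
    N, d = X.shape
    mu = np.stack([X[y == c].mean(0) for c in range(n_classes)])
    var = np.tile(X.var(0) + 1e-6, (n_classes, 1))
    resp = np.zeros((N, n_classes))            # soft row-to-class assignments

    def log_gauss(x, c):
        # Diagonal-Gaussian log-likelihood of one row under class c.
        return -0.5 * (((x - mu[c]) ** 2) / var[c]
                       + np.log(2.0 * np.pi * var[c])).sum()

    for _ in range(n_iter):
        resp[:] = 0.0
        # E-step: weight every permutation of the labels inside each block.
        for s in range(0, N, block):
            idx = np.arange(s, min(s + block, N))
            perms = list(permutations(y[idx]))
            logw = np.array([sum(log_gauss(X[i], c) for i, c in zip(idx, p))
                             for p in perms])
            w = np.exp(logw - logw.max())
            w /= w.sum()
            for p, wk in zip(perms, w):
                for i, c in zip(idx, p):
                    resp[i, c] += wk
        # M-step: refit the class-conditional Gaussians from soft assignments.
        for c in range(n_classes):
            wc = resp[:, c:c + 1]
            mu[c] = (wc * X).sum(0) / wc.sum()
            var[c] = (wc * (X - mu[c]) ** 2).sum(0) / wc.sum() + 1e-6
    return mu, var, resp
```

    Enumerating the permutations inside each block keeps the E-step exact but is practical only for small blocks; larger blocks would require an approximate treatment of the latent assignment.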

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.