
    Suitability of GPUs for real-time control of large astronomical adaptive optics instruments

    Adaptive optics (AO) is a technique for correcting aberrations introduced when light propagates through a medium, for example, the light from stars propagating through the turbulent atmosphere. The components of an AO instrument are: (1) a camera to record the aberrations, (2) a corrective mechanism to correct them, and (3) a real-time controller (RTC) that processes the camera images and steers the corrective mechanism on millisecond timescales. We have accelerated the image processing for the AO RTC with the use of graphics processing units (GPUs). It is crucial that the image is processed before the atmospheric turbulence has changed, i.e., in one or two milliseconds. The main task is to transfer the images to the GPU memory with minimal delay. The key result of this paper is a demonstration that this can be done fast enough using commercial frame grabbers and standard CUDA tools. Our benchmarking image consists of 1.6×10^6 pixels, of which 1.2×10^6 are used in processing. The images are characterized and reduced into a set of 9248 numbers; about one-third of the total processing time is spent on this characterization. This set of numbers is then used to calculate the commands for the corrective system, which takes about two-thirds of the total time. The processing rate achieved on a single GPU is about 700 frames per second (fps). This increases to 1100 fps (1565 fps) if we use two (four) GPUs. The variation in processing time (jitter) has a root-mean-square value of 20–30 μs and about one outlier in a million cycles.
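
    The two-stage pipeline described above, reducing each camera frame to wavefront-sensor slopes and then multiplying by a control matrix to obtain corrective commands, can be sketched as follows. This is an illustrative NumPy sketch on the CPU, not the authors' CUDA implementation; the subaperture layout, actuator count and matrix values are assumptions, and only the pixel and slope counts echo the abstract.

    import numpy as np

    def process_frame(frame, subap_slices, cmd_matrix):
        # Stage 1: reduce the camera frame to a vector of wavefront slopes
        # (one x and one y centroid per subaperture sub-image).
        slopes = []
        for rows, cols in subap_slices:
            sub = frame[rows, cols].astype(np.float64)
            total = sub.sum() + 1e-9                      # guard against empty subapertures
            yy, xx = np.indices(sub.shape)
            slopes.append((sub * xx).sum() / total)       # x centroid
            slopes.append((sub * yy).sum() / total)       # y centroid
        # Stage 2: matrix-vector multiply mapping slopes to corrective commands.
        return cmd_matrix @ np.asarray(slopes)

    # Illustrative sizes: 68x68 subapertures of 16x16 pixels give 9248 slope values
    # from a roughly 1.6 million pixel frame; the actuator count is a stand-in.
    n_side, n_acts = 68, 5000
    subap_slices = [(slice(16 * r, 16 * r + 16), slice(16 * c, 16 * c + 16))
                    for r in range(n_side) for c in range(n_side)]
    rng = np.random.default_rng(0)
    frame = rng.poisson(100, size=(1300, 1240)).astype(np.uint16)
    cmd_matrix = rng.standard_normal((n_acts, 2 * n_side * n_side), dtype=np.float32) * 1e-3
    commands = process_frame(frame, subap_slices, cmd_matrix)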

    A Prototype Adaptive Optics Real-Time Control Architecture for Extremely Large Telescopes using Many-Core CPUs

    A proposed solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control (RTC) using many-core CPU technologies is presented. Due to the nearly 4x increase in primary aperture diameter, the next generation of 30-40m class ELTs will require much greater computational power than the current 10m class of telescopes. The computational demands of AO RTC scale with the fourth power of telescope diameter to maintain the spatial sampling required for adequate atmospheric correction. The Intel Xeon Phi is a standard socketed CPU processor which combines many processing cores with high-bandwidth (450 GB/s) on-chip memory, properties which are well suited to the highly parallelisable and memory-bandwidth-intensive workloads of ELT-scale AO RTC. The performance of CPU-based RTC software is analysed and compared for the single-conjugate, multi-conjugate and laser tomographic types of AO operating on the Xeon Phi and other many-core CPU solutions. This report concludes with an investigation into the potential performance of the CPU-based AO RTC software for the proposed instruments of the next-generation Extremely Large Telescope (ELT) and the Thirty Meter Telescope (TMT), and also for some high-order AO systems at current observatories.
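
    The fourth-power scaling quoted above follows from conventional matrix-vector reconstruction: the slope and actuator counts each grow roughly as the square of the diameter, so their product grows as the fourth power. A back-of-the-envelope check, assuming only this simple scaling law:

    # Rough D^4 scaling of AO RTC compute cost, assuming both slope and actuator
    # counts grow as D^2 so the control-matrix multiply grows as D^4.
    d_ref = 10.0                              # current-generation aperture, metres
    for d in (30.0, 39.0):
        print(f"{d_ref:.0f} m -> {d:.0f} m aperture: ~{(d / d_ref) ** 4:.0f}x more compute per cycle")
    # prints ~81x for a 30 m and ~231x for a 39 m telescope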

    A real-time simulation facility for astronomical adaptive optics

    In this paper we introduce the concept of real-time hardware-in-the-loop simulation for astronomical adaptive optics and present the case for why such a facility is required. This real-time simulation, when linked with an adaptive optics real-time control system, provides an essential tool for the validation, verification and integration of the Extremely Large Telescope real-time control systems prior to commissioning at the telescope. We demonstrate that such a facility is crucial for the success of the future extremely large telescopes.
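
    A hardware-in-the-loop arrangement of this kind has a simple overall shape: the simulator synthesises wavefront-sensor frames at the camera rate, hands them to the real RTC under test, and applies the returned commands to its internal deformable-mirror model before producing the next frame. The skeleton below is only a sketch of that loop; the turbulence, sensor and RTC stand-ins are placeholders, not the facility described in the paper.

    import numpy as np

    def atmosphere_phase(step, shape=(64, 64)):
        # Placeholder turbulent phase screen; a real facility would evolve
        # frozen-flow atmospheric layers here.
        return np.random.default_rng(step).standard_normal(shape)

    def wfs_frame(residual_phase):
        # Placeholder wavefront sensor: residual phase plus read noise.
        return residual_phase + 0.01 * np.random.standard_normal(residual_phase.shape)

    def rtc_under_test(frame):
        # Stand-in for the external real-time controller being validated.
        return 0.5 * frame

    dm_shape = np.zeros((64, 64))
    for step in range(1000):                       # one iteration per simulated camera frame
        residual = atmosphere_phase(step) - dm_shape
        commands = rtc_under_test(wfs_frame(residual))
        dm_shape += commands                       # apply commands before the next frame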

    Reducing adaptive optics latency using many-core processors

    Atmospheric turbulence reduces the achievable resolution of ground based optical telescopes. Adaptive optics systems attempt to mitigate the impact of this turbulence and are required to update their corrections quickly and deterministically (i.e. in real time). The technological challenges faced by the future extremely large telescopes (ELTs) and their associated instruments are considerable. A simple extrapolation of current systems to the ELT scale is not sufficient. My thesis work consisted of identifying and examining new many-core technologies for accelerating the adaptive optics real-time control loop. I investigated the Mellanox TILE-Gx36 and the Intel Xeon Phi (5110p). The TILE-Gx36, with 4x10 GbE ports and 36 processing cores, is a good candidate for fast computation of the wavefront sensor images. The Intel Xeon Phi, with 60 processing cores and high memory bandwidth, is particularly well suited for the acceleration of the wavefront reconstruction. Through extensive testing I have shown that the TILE-Gx can provide the performance required for the wavefront processing units of the ELT first light instruments. The Intel Xeon Phi (Knights Corner), while providing good overall performance, does not have the required determinism. We believe that the next generation of Xeon Phi (Knights Landing) will provide the necessary determinism and increased performance. In this thesis, we show that by using currently available novel many-core processors it is possible to reach the performance required for ELT instruments.
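
    Determinism of the kind discussed above is usually assessed by timing a large number of control cycles and looking at the spread and worst-case outliers rather than the mean alone. The snippet below sketches such a measurement with a stand-in workload (a slope-to-command matrix-vector multiply of assumed size); it is not the benchmarking code used in the thesis.

    import time
    import numpy as np

    # Time many repetitions of a stand-in control-cycle workload and report the
    # jitter statistics that determine suitability for hard real-time use.
    m = np.random.standard_normal((2000, 4624)).astype(np.float32)   # assumed sizes
    v = np.random.standard_normal(4624).astype(np.float32)

    samples = []
    for _ in range(1000):
        t0 = time.perf_counter()
        _ = m @ v                                   # one "control cycle"
        samples.append(time.perf_counter() - t0)

    us = np.asarray(samples) * 1e6                  # convert to microseconds
    print(f"mean {us.mean():.1f} us, RMS jitter {us.std():.1f} us, "
          f"worst {us.max():.1f} us, outliers >2x mean: {(us > 2 * us.mean()).sum()}")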

    Hybrid propagation physics for the design and modeling of astronomical observatories: a coronagraphic example

    For diffraction-limited optical systems an accurate physical optics model is necessary to properly evaluate instrument performance. Astronomical observatories outfitted with coronagraphs for direct exoplanet imaging require physical optics models to simulate the effects of misalignment and diffraction. Accurate knowledge of the observatory's PSF is integral to the design of high-contrast imaging instruments and the simulation of astrophysical observations. The state of the art is to model the misalignment, ray aberration, and diffraction across multiple software packages, which complicates the design process. Gaussian Beamlet Decomposition (GBD) is a ray-based method of diffraction calculation that has been widely implemented in commercial optical design software. By performing the coherent calculation with data from the ray model of the observatory, the ray aberration errors can be fed directly into the physical optics model of the coronagraph, enabling a more integrated model of the observatory. We develop a formal algorithm for the transfer-matrix method of GBD, and evaluate it against analytical results and a traditional physical optics model to assess the suitability of GBD for high-contrast imaging simulations. Our GBD simulations of the observatory PSF, when compared to the analytical Airy function, have a sum-normalized RMS difference of ~10^-6. These fields are then propagated through a Fraunhofer model of an exoplanet imaging coronagraph, where the mean residual numerical contrast is 4x10^-11, with a maximum near the inner working angle of 5x10^-9. These results show considerable promise for the future development of GBD as a viable propagation technique in high-contrast imaging. We developed this algorithm in an open-source software package and outlined a path for its continued development to increase the fidelity and flexibility of diffraction simulations using GBD.
    Comment: 58 pages, 15 figures, preprint version for article in press. Accepted to SPIE's Journal of Astronomical Telescopes, Instruments, and Systems on October 23 202
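
    The kind of comparison quoted above, a sum-normalized RMS difference between a propagated PSF and the analytic Airy pattern, can be reproduced in miniature with generic tools. The sketch below uses a plain FFT propagation of a circular aperture rather than the authors' GBD algorithm, and the grid sizes are arbitrary, so it only illustrates the metric, not the reported 10^-6 figure.

    import numpy as np
    from scipy.special import j1

    # Compare a numerically propagated PSF of a circular aperture with the
    # analytic Airy pattern, using a sum-normalized RMS difference as the metric.
    n, pad = 256, 8                                   # aperture samples, zero-padding factor
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    aperture = (xx**2 + yy**2 <= 1.0).astype(float)   # unit-radius circular pupil

    field = np.fft.fftshift(np.fft.fft2(aperture, s=(n * pad, n * pad)))
    psf = np.abs(field)**2
    psf /= psf.sum()                                  # sum-normalized numerical PSF

    # Analytic Airy pattern sampled on the same focal-plane frequency grid
    f = np.fft.fftshift(np.fft.fftfreq(n * pad, d=x[1] - x[0]))
    fx, fy = np.meshgrid(f, f)
    v = 2.0 * np.pi * np.hypot(fx, fy)                # 2*pi*a*rho with pupil radius a = 1
    v[v == 0] = 1e-12
    airy = (2.0 * j1(v) / v)**2
    airy /= airy.sum()                                # sum-normalized analytic PSF

    rms = np.sqrt(np.mean((psf - airy)**2))
    print(f"sum-normalized RMS difference: {rms:.2e}")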

    Reducing adaptive optics latency using Xeon Phi many-core processors


    Poke: An open-source ray-based physical optics platform

    Integrated optical models allow for accurate prediction of the as-built performance of an optical instrument. Optical models are typically composed of a separate ray trace and diffraction model to capture both the geometrical and physical regimes of light. These models are typically separated across both open-source and commercial software that don't interface with each other directly. To bridge the gap between ray trace models and diffraction models, we have built an open-source optical analysis platform in Python called Poke that uses commercial ray tracing APIs and open-source physical optics engines to simultaneously model scalar wavefront error, diffraction, and polarization. Poke operates by storing ray data from a commercial ray tracing engine into a Python object, from which physical optics calculations can be made. We present an introduction to using Poke, and highlight the capabilities of two new propagation physics modules that add to the utility of existing scalar diffraction models. Gaussian Beamlet Decomposition is a ray-based approach to diffraction modeling that allows us to integrate physical optics models with ray trace models to directly capture the influence of ray aberrations in diffraction simulations. Polarization Ray Tracing is a ray-based method of vector field propagation that can diagnose the polarization aberrations in optical systems. Poke has recently been used to study the next generation of astronomical observatories, including the ground-based Extremely Large Telescopes and an early 6 meter space telescope concept for NASA's Habitable Worlds Observatory.
    Comment: 11 Pages, 9 Figures, Published in Proceedings of SPIE Optical Modeling and Performance Predictions XIII Paper 12664-
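
    The central pattern, exporting per-ray data from a commercial ray tracer into a Python object and reusing it for a physical-optics calculation, can be illustrated generically. The class and function below are hypothetical stand-ins and deliberately not Poke's actual API; they only show the hand-off from ray data to a pupil wavefront-error map that a scalar diffraction model could consume.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class RayBundle:
        """Hypothetical container for data exported from a commercial ray tracer."""
        positions: np.ndarray   # (N, 3) ray intersection points at the exit pupil
        directions: np.ndarray  # (N, 3) unit direction cosines
        opd: np.ndarray         # (N,) optical path difference per ray, in waves

    def wavefront_error_map(bundle: RayBundle, grid: int = 64) -> np.ndarray:
        # Bin the per-ray OPD onto a pupil grid: the hand-off point where a
        # ray-trace result becomes input to a scalar diffraction model.
        x, y = bundle.positions[:, 0], bundle.positions[:, 1]
        ix = np.clip(((x + 1) / 2 * grid).astype(int), 0, grid - 1)
        iy = np.clip(((y + 1) / 2 * grid).astype(int), 0, grid - 1)
        wfe = np.zeros((grid, grid))
        counts = np.zeros((grid, grid))
        np.add.at(wfe, (iy, ix), bundle.opd)
        np.add.at(counts, (iy, ix), 1)
        return np.divide(wfe, counts, out=np.zeros_like(wfe), where=counts > 0)

    # Illustrative use with synthetic rays carrying a defocus-like OPD
    rng = np.random.default_rng(1)
    pos = rng.uniform(-1, 1, size=(20000, 3))
    bundle = RayBundle(pos, np.tile([0.0, 0.0, 1.0], (20000, 1)),
                       0.25 * (pos[:, 0]**2 + pos[:, 1]**2))
    pupil_wfe = wavefront_error_map(bundle)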

    Development of advanced control strategies for Adaptive Optics systems

    Atmospheric turbulence is a fast disturbance that requires a high control frequency. At the same time, celestial objects are faint sources of light, and thus wavefront sensors (WFSs) often work in a low photon count regime. These two conditions require a trade-off between a high closed-loop control frequency to improve the disturbance rejection performance, and long WFS exposure times to gather enough photons for the integrated signal to reach an adequate Signal-to-Noise ratio (SNR), making control a delicate yet fundamental aspect of AO systems. The AO plant and atmospheric turbulence were formalized as state-space linear time-invariant systems. The full AO system model is the foundation upon which a model-based controller can be designed. A Shack-Hartmann wavefront sensor was used to measure the horizontal atmospheric turbulence. The experimental measurements yielded the Cn2 atmospheric structure parameter, which is key to describing the turbulence statistics, and the time series of the Zernike terms. Experimental validation shows that the centroid extraction algorithm implemented on the Jetson GPU outperforms (i.e. is faster than) the CPU implementation on the same hardware. In fact, due to the construction of the Shack-Hartmann wavefront sensor, the intensity image captured by its camera is partitioned into several sub-images, each related to a point of the incoming wavefront. These sub-images are independent of each other and can be processed concurrently. The AO model is exploited to automatically design an advanced linear-quadratic Gaussian controller with integral action. Experimental evidence shows that the system augmentation approach outperforms the simple integrator and the integrator filtered with the Kalman predictor, and that it requires fewer parameters to tune.
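
    The system augmentation idea mentioned above, appending an integrator state to the plant so that a linear-quadratic design acquires integral action, can be shown on a toy model. The one-state plant, weights and gain below are stand-ins, not the thesis's identified AO model.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Augment a discrete-time plant with an integrator state, then solve the
    # discrete algebraic Riccati equation for a state-feedback gain with
    # integral action. The single-mode plant here is purely illustrative.
    A = np.array([[0.95]])          # plant dynamics (e.g. one controlled mode)
    B = np.array([[1.0]])
    C = np.array([[1.0]])

    # Augmented state: [plant state, integral of the output]
    Aa = np.block([[A, np.zeros((1, 1))],
                   [C, np.ones((1, 1))]])
    Ba = np.vstack([B, np.zeros((1, 1))])

    Q = np.diag([1.0, 10.0])        # weight the integrated output to remove bias
    R = np.array([[0.1]])
    P = solve_discrete_are(Aa, Ba, Q, R)
    K = np.linalg.solve(R + Ba.T @ P @ Ba, Ba.T @ P @ Aa)
    print("state-feedback gain with integral action:", K)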

    The Adaptive Optics Lucky Imager: combining adaptive optics and lucky imaging

    Some of the highest resolution astronomical images ever taken in the visible were obtained by combining the techniques of adaptive optics and lucky imaging. The Adaptive Optics Lucky Imager (AOLI), being developed at Cambridge as part of a European collaboration, combines these two techniques in a dedicated instrument for the first time. The instrument is designed initially for use on the 4.2m William Herschel Telescope (WHT) on the Canary Island of La Palma. This thesis describes the development of AOLI, in particular the adaptive optics system and a new type of wavefront sensor, the non-linear curvature wavefront sensor (nlCWFS), being used within the instrument. The development of the nlCWFS has been the focus of my work, bringing the technique from a theoretical concept to physical realisation at the WHT in September 2013. The non-linear curvature wavefront sensor is based on the technique employed in the conventional curvature wavefront sensor, where two image planes are located equidistant on either side of a pupil plane. Two pairs of images are employed in the nlCWFS, providing increased sensitivity to both high- and low-order wavefront distortions. This sensitivity is the reason the nlCWFS was selected for use with AOLI, as it will provide significant sky coverage using natural guide stars alone, avoiding the need for laser guide stars. This thesis is structured into three main sections: the first introduces the non-linear curvature wavefront sensor, the relevant background and a discussion of simulations undertaken to investigate intrinsic effects. The iterative reconstruction algorithm required for wavefront reconstruction is also introduced. The second section discusses the practical implementation of the nlCWFS using two demonstration systems as precursors to the optical design used at the WHT, and includes details of subsequent design changes. The final section discusses data from both the WHT and a laboratory setup developed at Cambridge following the observing run. The long-term goal for AOLI is to undertake science observations on the 10.4m Gran Telescopio Canarias, the world's largest optical telescope. The combination of AO and lucky imaging, when used on this telescope, will provide resolutions a factor of two higher than previously achieved at visible wavelengths. This offers the opportunity to probe the Cosmos in unprecedented detail and has the potential to significantly advance our understanding of the Universe.
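
    The iterative reconstruction mentioned above can be illustrated with a much simpler relative: a Gerchberg-Saxton-style iteration between a pupil plane and a single focal plane. The actual nlCWFS reconstructor works with two pairs of defocused planes and Fresnel propagation, so the sketch below, with an assumed aberration and grid, only conveys the idea of projecting between measured intensities and a pupil-support constraint.

    import numpy as np

    # Simplified two-plane iterative phase retrieval: alternate between enforcing
    # the pupil support and enforcing the measured focal-plane amplitude.
    # (This style of iteration can stagnate or land on the conjugate solution;
    # it is only meant to illustrate the constraint-projection idea.)
    n = 128
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    pupil = (xx**2 + yy**2 <= 0.8**2).astype(float)
    true_phase = 2.0 * xx * yy                         # unknown aberration to recover

    focal_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))  # "measured" amplitude

    phase = np.zeros((n, n))                           # initial guess
    for _ in range(200):
        field = pupil * np.exp(1j * phase)             # enforce pupil-plane support
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))   # enforce measured focal amplitude
        phase = np.angle(np.fft.ifft2(focal))          # keep only the updated phase estimate

    err = np.std((phase - true_phase)[pupil > 0])
    print(f"residual phase RMS inside the pupil: {err:.3f} rad")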