21 research outputs found

    Suitability of GPUs for real-time control of large astronomical adaptive optics instruments

    Adaptive optics (AO) is a technique for correcting aberrations introduced when light propagates through a medium, for example, the light from stars propagating through the turbulent atmosphere. The components of an AO instrument are: (1) a camera to record the aberrations, (2) a corrective mechanism to correct them, and (3) a real-time controller (RTC) that processes the camera images and steers the corrective mechanism on millisecond timescales. We have accelerated the image processing for the AO RTC with the use of graphics processing units (GPUs). It is crucial that the image is processed before the atmospheric turbulence has changed, i.e., in one or two milliseconds. The main task is to transfer the images to the GPU memory with a minimum delay. The key result of this paper is a demonstration that this can be done fast enough using commercial frame grabbers and standard CUDA tools. Our benchmarking image consists of 1.6×10⁶ pixels, of which 1.2×10⁶ are used in processing. The images are characterized and reduced into a set of 9248 numbers; about one-third of the total processing time is spent on this characterization. This set of numbers is then used to calculate the commands for the corrective system, which takes about two-thirds of the total time. The processing rate achieved on a single GPU is about 700 frames per second (fps). This increases to 1100 fps (1565 fps) if we use two (four) GPUs. The variation in processing time (jitter) has a root-mean-square value of 20–30 μs and about one outlier in a million cycles.
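The two-stage pipeline described above (image characterization, then command computation) can be sketched in NumPy. The geometry here is illustrative, not the paper's: a 68×68 grid of Shack-Hartmann subapertures is assumed only because it yields exactly 9248 slope values (2 × 68²), and the pixel counts, actuator count, and control matrix are placeholders. A production RTC would run these stages on the GPU (e.g. in CUDA or CuPy) rather than in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the paper's actual geometry):
n_sub = 68            # subapertures per side; 2 * 68**2 = 9248 slope values
pix = 8               # pixels per subaperture side
n_act = 5000          # number of corrective-system commands (placeholder)

frame = rng.random((n_sub * pix, n_sub * pix)).astype(np.float32)

# Stage 1 (~1/3 of the time): characterize the image, i.e. reduce each
# subaperture to a flux-weighted centroid (x, y).
yy, xx = np.mgrid[0:pix, 0:pix].astype(np.float32)
subs = frame.reshape(n_sub, pix, n_sub, pix).transpose(0, 2, 1, 3)
flux = subs.sum(axis=(2, 3))
cx = (subs * xx).sum(axis=(2, 3)) / flux
cy = (subs * yy).sum(axis=(2, 3)) / flux
slopes = np.concatenate([cx.ravel(), cy.ravel()])   # the 9248 numbers

# Stage 2 (~2/3 of the time): a large matrix-vector multiply mapping the
# slope vector to corrective-system commands.
cmat = rng.standard_normal((n_act, slopes.size)).astype(np.float32)
commands = cmat @ slopes
```

The stage-2 matrix-vector multiply dominating the runtime is consistent with the abstract's two-thirds figure: it touches n_act × 9248 matrix elements per frame, far more data than the centroiding stage.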

    On-sky results for the integrated microlens ring tip-tilt sensor

    We present the first on-sky results of the microlens ring tip-tilt sensor. This sensor uses a 3D-printed microlens ring feeding six multimode fibers to sense misaligned light, allowing centroid reconstruction. A tip-tilt mirror allows the beam to be corrected, increasing the amount of light coupled into a centrally positioned single-mode (science) fiber. The sensor was tested with the iLocater acquisition camera at the Large Binocular Telescope on Mount Graham, Arizona, in November 2019. The maximum achieved rms reconstruction accuracy was found to be limited to 0.19 λ/D in both tip and tilt, of which approximately 50% of the power originates at frequencies below 10 Hz. We show that the reconstruction accuracy is highly dependent on the estimated Strehl ratio, and simulations support the assumption that residual adaptive optics aberrations are the main limit to the reconstruction accuracy. We conclude that this sensor is ideally suited to remove post-adaptive-optics noncommon-path tip-tilt residuals. We discuss the next steps for concept development, including optimization of the lens and the fiber, tuning of the correction algorithm, and selection of optimal science cases.
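The centroid reconstruction from the six ring fibers can be illustrated with a flux-weighted estimate. This is a minimal sketch under assumed geometry (fibers at 60° spacing on a unit-radius ring, outputs in normalised units); the sensor's actual calibration and correction algorithm are not reproduced here.

```python
import numpy as np

# Assumed geometry: six fibers at 60 degree spacing around the science fiber.
angles = np.deg2rad(np.arange(6) * 60.0)
fiber_x, fiber_y = np.cos(angles), np.sin(angles)

def tip_tilt_estimate(fluxes):
    """Flux-weighted centroid of the six ring-fiber signals.

    A misaligned beam couples more light into the fibers on the side it
    is displaced toward, so the weighted centroid tracks the beam offset
    (in normalised ring-radius units)."""
    f = np.asarray(fluxes, dtype=float)
    total = f.sum()
    return (f @ fiber_x) / total, (f @ fiber_y) / total

# A beam displaced toward fiber 0 couples extra light into that fiber:
tx, ty = tip_tilt_estimate([3.0, 1.0, 1.0, 1.0, 1.0, 1.0])
# tx ≈ 0.25, ty ≈ 0: the estimate points along the +x fiber direction.
```

The resulting (tx, ty) would feed the tip-tilt mirror correction loop; in practice the mapping from fiber fluxes to beam offset is nonlinear and Strehl-dependent, as the abstract notes, and would be calibrated rather than taken as this ideal centroid.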

    A many-core CPU prototype of an MCAO and LTAO RTC for ELT-scale instruments

    We propose a many-core CPU architecture for Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control (RTC) for the multi-conjugate AO (MCAO) and laser-tomographic AO (LTAO) modes. MCAO and LTAO differ from the more conventional single-conjugate AO (SCAO) mode by requiring more wavefront sensor (WFS) measurements and more deformable mirrors to achieve a wider field of correction, further increasing the computational requirements of ELT-scale AO. We demonstrate results of our CPU-based AO RTC operating first in SCAO mode, using either Shack-Hartmann or Pyramid WFS processing, and then in MCAO and LTAO modes using the specifications of the proposed ELT instruments MAORY and HARMONI. All results are gathered using a CPU-based camera simulator utilising UDP packets to better demonstrate the pixel streaming and pipelining of the RTC software. We demonstrate the effects of switching parameters, streaming telemetry, and implicit pseudo open-loop control (POLC) computation on the MCAO and LTAO modes. We achieve latencies of <600 μs for an ELT-scale SCAO setup with Shack-Hartmann processing and <800 μs for SCAO Pyramid WFS processing. We show that our many-core CPU MCAO and LTAO architecture can achieve full-system latencies of <1000 μs with jitter of <40 μs RMS. We find that a CPU-based AO RTC architecture offers a good combination of performance, flexibility, and maintainability for ELT-scale AO systems.
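The pseudo open-loop control mentioned above can be sketched in a few lines. The idea is that in open-loop tomography the reconstructor wants open-loop slopes, so the slopes the applied mirror shape would have produced are added back to the measured residuals before reconstruction. The sketch below uses toy sizes and a plain least-squares reconstructor; the paper's "implicit" POLC formulation and tomographic reconstructors are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_slopes, n_act = 200, 60    # toy sizes, far below ELT scale

D = rng.standard_normal((n_slopes, n_act)) * 0.1   # interaction matrix (toy)
R = np.linalg.pinv(D)                              # least-squares reconstructor

def polc_step(s_res, a_applied, gain=0.5):
    """One pseudo open-loop control iteration (illustrative form).

    The measured closed-loop residual slopes are 're-opened' by adding
    back the slopes the applied DM shape would produce, so the
    reconstructor acts on pseudo open-loop measurements."""
    s_pol = s_res + D @ a_applied      # pseudo open-loop slope computation
    a_target = R @ s_pol               # reconstruct the full disturbance
    return (1 - gain) * a_applied + gain * a_target

# Closing the loop on a static disturbance converges to the true actuators:
a_true = rng.standard_normal(n_act)
a = np.zeros(n_act)
for _ in range(20):
    s_res = D @ (a_true - a)           # residual slopes seen by the WFS
    a = polc_step(s_res, a)
```

With a perfect model, the POL slopes equal the open-loop slopes exactly, so the integrator error shrinks by (1 − gain) per step; model errors in D are what make real POLC implementations more delicate.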

    Automated wind velocity profiling from adaptive optics telemetry

    Ground-based adaptive optics (AO) systems can use temporal control techniques to greatly improve image resolution. A measure of wind velocity as a function of altitude is needed to minimize the temporal errors associated with these systems. Spatio-temporal analysis of AO telemetry can recover the wind velocity profile using the SLODAR technique. However, the limited altitude resolution of current AO systems makes it difficult to disentangle the movement of independent layers. It is therefore a challenge to create an algorithm that can recover the wind velocity profile through SLODAR data analysis. In this study we introduce a novel technique for automated wind velocity profiling from AO telemetry. Simulated and on-sky centroid data from CANARY, an AO test bed on the 4.2 m William Herschel Telescope, La Palma, are used to demonstrate the proficiency of the technique. Wind velocity profiles measured on-sky are compared to contemporaneous measurements from Stereo-SCIDAR, a dedicated high-resolution atmospheric profiler. They are also compared to forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). The software package that we developed to complete this study is open source.
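The core idea, under a frozen-flow assumption, is that a turbulent layer translates the slope pattern between frames, so the peak of the temporal cross-correlation of two slope maps gives the displacement and hence the velocity. The sketch below is a minimal single-layer illustration using FFT-based circular correlation on a square slope map; the paper's technique additionally separates layers in altitude, which is not attempted here, and the frame interval and subaperture pitch are placeholders.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wind_from_slopes(slopes_t0, slopes_t1, dt, d_sub):
    """Estimate a single-layer wind velocity from two wavefront-slope maps.

    dt    -- time between the two frames (s)
    d_sub -- subaperture pitch projected onto the pupil (m)
    """
    a = slopes_t0 - slopes_t0.mean()
    b = slopes_t1 - slopes_t1.mean()
    # Circular cross-correlation via FFT; the peak sits at the displacement
    # of the pattern between t0 and t1.
    xc = np.real(ifft2(fft2(b) * np.conj(fft2(a))))
    iy, ix = np.unravel_index(np.argmax(xc), xc.shape)
    ny, nx = xc.shape
    dy = iy - ny if iy > ny // 2 else iy   # wrap to signed offsets
    dx = ix - nx if ix > nx // 2 else ix
    return dx * d_sub / dt, dy * d_sub / dt

# A slope map shifted by (2, 1) subapertures in 10 ms at 0.5 m pitch:
rng = np.random.default_rng(2)
s0 = rng.standard_normal((16, 16))
s1 = np.roll(s0, (2, 1), axis=(0, 1))
vx, vy = wind_from_slopes(s0, s1, dt=0.01, d_sub=0.5)
```

On real telemetry the correlation would be averaged over many frame pairs, and multiple correlation peaks (one per moving layer) are what make the automated identification problem hard, as the abstract notes.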

    Point spread function reconstruction validated using on-sky CANARY data in multiobject adaptive optics mode

    In preparation for future multiobject spectrographs (MOS), one of whose major roles is to provide extensive statistical studies of surveyed high-redshift galaxies, the demonstrator CANARY has been designed to tackle technical challenges related to open-loop adaptive optics (AO) control with joint natural guide star and laser guide star tomography. We have developed a point spread function (PSF) reconstruction algorithm dedicated to multiobject adaptive optics systems that uses system telemetry to estimate the PSF potentially anywhere in the observed field, a prerequisite for post-processing AO-corrected observations in integral field spectroscopy. We show how to handle off-axis data to estimate the PSF using atmospheric tomography and compare this approach to a classical one that uses the on-axis residual phase from a truth sensor observing a bright natural source. We have reconstructed over 450 on-sky CANARY PSFs, obtaining a bias/1σ standard deviation (std) of 1.3/4.8 on the H-band Strehl ratio (SR), with 92.3% correlation between the reconstructed and sky SR. On the full-width at half-maximum (FWHM), we get, respectively, 2.94 mas, 19.9 mas, and 88.3% for the bias, std, and correlation. The reference method achieves 0.4/3.5/95% on the SR and 2.71 mas/14.9 mas/92.5% on the FWHM for the bias/std/correlation.
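The basic relation underlying PSF reconstruction is that the long-exposure PSF is the time average of short-exposure PSFs set by the residual phase over the pupil. The sketch below computes that average directly from simulated residual phase screens; a telemetry-based PSF-R pipeline like the one described instead estimates the residual phase statistics (and hence the optical transfer function) from WFS data, which is not reproduced here. All sizes and screen statistics are illustrative.

```python
import numpy as np
from numpy.fft import fft2, fftshift

def long_exposure_psf(phases, pupil):
    """Average the short-exposure PSFs produced by a set of residual
    phase screens (in radians) over a binary pupil mask, normalised
    to unit total energy."""
    psf = np.zeros(pupil.shape, dtype=float)
    for phi in phases:
        field = pupil * np.exp(1j * phi)
        psf += np.abs(fftshift(fft2(field))) ** 2
    psf /= len(phases)
    return psf / psf.sum()

# Strehl ratio: reconstructed PSF peak over the diffraction-limited peak.
n = 32
yy, xx = np.mgrid[:n, :n] - n // 2
pupil = (xx**2 + yy**2 <= (n // 4) ** 2).astype(float)
diffraction_limited = long_exposure_psf([np.zeros((n, n))], pupil)

rng = np.random.default_rng(3)
screens = [0.5 * rng.standard_normal((n, n)) for _ in range(50)]
strehl = long_exposure_psf(screens, pupil).max() / diffraction_limited.max()
```

The Strehl ratio and FWHM statistics quoted in the abstract are exactly the kind of scalar metrics one would extract from such a reconstructed PSF to validate it against the observed one.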