    Development of advanced control strategies for Adaptive Optics systems

    Atmospheric turbulence is a fast disturbance that requires a high control frequency. At the same time, celestial objects are faint sources of light, and thus WFSs often work in a low photon count regime. These two conditions require a trade-off between a high closed-loop control frequency, to improve the disturbance rejection performance, and a large WFS exposure time, to gather enough photons for the integrated signal to increase the Signal-to-Noise Ratio (SNR), making the control a delicate yet fundamental aspect of AO systems. The AO plant and atmospheric turbulence were formalized as state-space linear time-invariant systems. The full AO system model is the ground upon which a model-based control can be designed. A Shack-Hartmann wavefront sensor was used to measure the horizontal atmospheric turbulence. The experimental measurements yielded the Cn2 atmospheric structure parameter, which is key to describing the turbulence statistics, and the time series of the Zernike terms. Experimental validation shows that the centroid extraction algorithm implemented on the Jetson GPU outperforms (i.e., is faster than) the CPU implementation on the same hardware. In fact, due to the construction of the Shack-Hartmann wavefront sensor, the intensity image captured from its camera is partitioned into several sub-images, each related to a point of the incoming wavefront. Such sub-images are independent of each other and can be computed concurrently. The AO model is exploited to automatically design an advanced linear-quadratic Gaussian controller with integral action. Experimental evidence shows that the system augmentation approach outperforms both the simple integrator and the integrator filtered with the Kalman predictor, and that it requires fewer parameters to tune.
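    The concurrency noted above maps directly onto array operations: every subaperture sub-image can be centroided independently. Below is a minimal NumPy sketch of center-of-gravity centroiding over a square subaperture grid; all names are illustrative, and the thesis's actual Jetson CUDA kernel is not described in the abstract.

```python
import numpy as np

def sh_centroids(frame, n_sub, sub_px):
    """Center-of-gravity centroid for each Shack-Hartmann subaperture.

    frame  : 2-D intensity image of shape (n_sub*sub_px, n_sub*sub_px)
    n_sub  : number of subapertures per side
    sub_px : pixels per subaperture side
    Returns an (n_sub**2, 2) array of (x, y) centroids in pixels.
    """
    # Split the frame into independent sub-images: (n_sub**2, sub_px, sub_px)
    subs = (frame.reshape(n_sub, sub_px, n_sub, sub_px)
                 .swapaxes(1, 2)
                 .reshape(-1, sub_px, sub_px))
    coords = np.arange(sub_px)
    total = subs.sum(axis=(1, 2)) + 1e-12        # guard against empty spots
    cx = (subs.sum(axis=1) * coords).sum(axis=1) / total
    cy = (subs.sum(axis=2) * coords).sum(axis=1) / total
    return np.stack([cx, cy], axis=1)
```

    On a GPU, each sub-image naturally maps to its own thread block, which is the data-parallel structure the reported speed-up exploits.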

    Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems

    Multi-object astronomical adaptive optics (MOAO) is now a mature wide-field observation mode to enlarge the adaptive-optics-corrected field in a few specific locations over tens of arc-minutes. The work-scope provided by open-loop tomography and pupil conjugation is amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation aiming to provide enhanced correction across the field, with improved performance over static reconstruction methods and less stringent computational complexity scaling laws. Starting from our previous work [1], we use stochastic time-progression models coupled to approximate sparse measurement operators to outline a suitable SA-LQG formulation capable of delivering near-optimal correction. Under the spatio-angular framework the wave-fronts are never explicitly estimated in the volume, providing considerable computational savings on 10 m-class telescopes and beyond. We find that for Raven, a 10 m-class MOAO system with two science channels, the SA-LQG improves the limiting magnitude by two stellar magnitudes when both the Strehl ratio and the Ensquared energy are used as figures of merit. The sky coverage is therefore improved by a factor of 5.
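    As a concrete picture of the LQG machinery, one predict/correct cycle of a steady-state discrete-time Kalman loop followed by a static projection onto DM commands might look as follows. This is a generic sketch with illustrative matrix names, not the Raven operators or the paper's sparse spatio-angular factorizations.

```python
import numpy as np

def sa_lqg_step(x, y, A, C, K, F):
    """One frame of a steady-state LQG loop.

    x : state estimate (e.g., turbulence modes in the guide-star directions)
    y : open-loop WFS slope vector for this frame
    A : temporal progression (state-transition) model
    C : measurement operator mapping state to slopes
    K : steady-state Kalman gain
    F : projection from state to DM commands for a science direction
    """
    x = A @ x                    # predict the state one frame ahead
    x = x + K @ (y - C @ x)      # correct with the measurement innovation
    u = F @ x                    # fit the DM in the direction of interest
    return x, u
```

    The spatio-angular point of the paper is precisely that `x` never needs to represent the full turbulent volume, which keeps these products affordable.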

    Computational Methods and Graphical Processing Units for Real-time Control of Tomographic Adaptive Optics on Extremely Large Telescopes.

    Ground-based optical telescopes suffer from limited imaging resolution as a result of the effects of atmospheric turbulence on the incoming light. Adaptive optics technology has so far been very successful in correcting these effects, providing nearly diffraction-limited images. Extremely Large Telescopes will require more complex adaptive optics configurations that introduce the need for new mathematical models and optimal solvers. In addition, the amount of data to be processed in real time is also greatly increased, making the use of conventional computational methods and hardware inefficient, which motivates the study of advanced computational algorithms and their implementation on parallel processors. Graphical Processing Units (GPUs) are massively parallel processors that have so far demonstrated very large increases in speed compared to CPUs and other devices, and they have a high potential to meet the real-time restrictions of adaptive optics systems. This thesis focuses on the study and evaluation of existing proposed computational algorithms with respect to computational performance, and on their implementation on GPUs. Two basic methods, one direct and one iterative, are implemented and tested; the results presented provide an evaluation of the basic concept upon which other algorithms are based, and demonstrate the benefits of using GPUs for adaptive optics.
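    The two solver families evaluated, direct and iterative, trade precomputation against per-frame matrix-vector work, and both reduce to operations that parallelize well. A hedged sketch of each in generic notation (not the thesis code: `R` is a precomputed reconstructor, `G` an interaction matrix, `s` a slope vector):

```python
import numpy as np

def direct_recon(R, s):
    """Direct method: one dense matrix-vector product per frame."""
    return R @ s

def cg_recon(G, s, n_iter=30, tol=1e-6):
    """Iterative method: conjugate gradient on the normal equations
    G^T G x = G^T s, built entirely from matrix-vector products."""
    b = G.T @ s
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = G.T @ (G @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    Both kernels are dominated by GEMV-like operations, which is why GPUs are a natural fit for ELT-scale real-time control.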

    SimCADO - an instrument data simulator package for MICADO at the E-ELT

    MICADO will be the first-light wide-field imager for the European Extremely Large Telescope (E-ELT) and will provide diffraction-limited imaging (7 mas at 1.2 μm) over a ~53 arcsecond field of view. In order to support various consortium activities we have developed a first version of SimCADO: an instrument simulator for MICADO. SimCADO uses the results of the detailed simulation efforts conducted for each of the separate consortium-internal work packages in order to generate a model of the optical path from source to detector readout. SimCADO is thus a tool to provide scientific context to both the science and instrument development teams who are ultimately responsible for the final design and future capabilities of the MICADO instrument. Here we present an overview of the inner workings of SimCADO and outline our plan for its further development.
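    The "model of the optical path from source to detector readout" can be pictured as a chain of wavelength-dependent transmission elements applied to a source spectrum. The sketch below is purely conceptual; the actual SimCADO API and element list differ, and every name here is hypothetical.

```python
import numpy as np

def propagate_spectrum(source_flux, transmissions):
    """Push a source spectrum through a chain of optical elements.

    source_flux   : photon flux per wavelength bin (1-D array)
    transmissions : per-element transmission curves on the same grid,
                    e.g. atmosphere, telescope mirrors, filter, detector QE
    """
    flux = source_flux.copy()
    for t in transmissions:
        flux = flux * t          # each element attenuates the spectrum
    return flux
```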

    Multi-threaded parallel simulation of non-local non-linear problems in ultrashort laser pulse propagation in the presence of plasma

    We describe a parallel multi-threaded approach to high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been performed using the highly parallel capabilities of a programmable graphics processor.
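    The abstract does not name the numerical scheme, but the standard workhorse for this class of problem is the split-step Fourier method, whose FFT-dominated inner loop is what maps well onto a GPU. A minimal sketch under that assumption, for the basic nonlinear Schrödinger equation (sign conventions vary; the non-local plasma terms studied in the paper would enter the nonlinear step):

```python
import numpy as np

def split_step(E, w, dz, steps, beta2, gamma):
    """Symmetric split-step Fourier propagation of a pulse envelope E(t).

    E : complex field envelope sampled on a time grid
    w : angular-frequency grid, e.g. 2*np.pi*np.fft.fftfreq(n, dt)
    Dispersion is applied in the frequency domain over half a step,
    the Kerr nonlinearity in the time domain over a full step.
    """
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)       # dispersion, dz/2
    for _ in range(steps):
        E = np.fft.ifft(half_disp * np.fft.fft(E))
        E = E * np.exp(1j * gamma * np.abs(E)**2 * dz)  # nonlinear phase
        E = np.fft.ifft(half_disp * np.fft.fft(E))
    return E
```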

    Laser Guide Star Only Adaptive Optics: The Development of Tools and Algorithms for the Determination of Laser Guide Star Tip-Tilt

    Adaptive Optics (AO) is a technology which corrects for the effects of the atmosphere and so improves the optical quality of ground-based astronomical observations. The bright “guide stars” required for correction are not available across the entire sky, so Laser Guide Stars (LGSs) are created. A Natural Guide Star (NGS) is still required to correct for tip-tilt, as the LGS encounters turbulence on the uplink path, resulting in unpredictable “jitter” and hence limiting corrected sky coverage. In this thesis an original method is proposed and investigated that promises to improve the correction performance of tomographic AO systems using only LGSs, and no NGS, by retrieving the LGS uplink tip-tilt. To investigate the viability of this method, two unique tools have been developed. A new AO simulation has been written in the Python programming language, designed to facilitate the rapid development of new AO concepts. It features realistic LGS simulation, ideal for testing the method of LGS uplink tip-tilt retrieval. The Durham Real-Time Adaptive Optics Generalised Optical Nexus (DRAGON) is a laboratory AO test bench nearing completion, which features multiple LGS and NGS Wavefront Sensors (WFSs) intended to further improve tomographic AO. A novel method of LGS emulation has been designed, which re-creates focus anisoplanatism, elongation and uplink turbulence. Once complete, DRAGON will be the ideal test bench for further development of LGS uplink tip-tilt retrieval. Performance estimates from simulation of the LGS uplink tip-tilt retrieval method are presented. Performance is improved over tomographic LGS AO systems that do not correct for tip-tilt, giving a modest improvement in image quality over the entire night sky. Correction performance is found to be dependent on the atmospheric turbulence profile. If combined with ground-layer adaptive optics, higher correction performance with very high sky coverage may be achieved.
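    For context on the indeterminacy the thesis targets: in conventional LGS AO, the global tip-tilt measured through the laser is simply discarded, because uplink jitter makes it unreliable, and an NGS supplies it instead. A sketch of that conventional filtering step (illustrative only; the thesis's uplink tip-tilt retrieval method itself is not detailed in the abstract):

```python
import numpy as np

def remove_global_tip_tilt(slopes_x, slopes_y):
    """Subtract the pupil-averaged slope, i.e. the global tip-tilt,
    from LGS WFS measurements before tomographic reconstruction."""
    return slopes_x - slopes_x.mean(), slopes_y - slopes_y.mean()
```

    Retrieving rather than discarding this signal is what would let a tomographic system run with no NGS at all.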

    Multi time-step wavefront reconstruction for tomographic adaptive-optics systems

    In tomographic adaptive-optics (AO) systems, errors due to tomographic wavefront reconstruction limit the performance and the angular size of the scientific field of view (FoV) over which the AO correction is effective. We propose a multi time-step tomographic wavefront reconstruction method that reduces the tomographic error by using measurements from both the current and previous time steps simultaneously. We further outline a method to feed the reconstructor with the wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arcmin in diameter by a factor of 1.5–1.8 compared to the classical tomographic reconstructor, depending on the guide star asterism and assuming perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator under laboratory conditions. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method, with errors below 2 m s^(−1). With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2–1.5, which is consistent with the prediction from the end-to-end numerical simulation.
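    The method rests on Taylor's frozen-flow hypothesis: a turbulence layer observed one frame earlier, translated by wind velocity times the frame delay, approximates the same layer now, so past measurements act as extra guide-star sampling. A minimal sketch of the translation step (names illustrative; the actual reconstructor works on slope measurements rather than reconstructed screens):

```python
import numpy as np
from scipy.ndimage import shift

def frozen_flow_translate(layer, wind_xy, dt, pix_scale):
    """Translate a turbulence layer by wind velocity * time delay.

    layer     : 2-D phase screen of one atmospheric layer
    wind_xy   : (vx, vy) wind velocity in m/s
    dt        : elapsed time between the two frames in seconds
    pix_scale : metres per pixel of the screen
    """
    dx, dy = (np.asarray(wind_xy) * dt) / pix_scale
    # Linear interpolation; edge values are extended at the border.
    return shift(layer, (dy, dx), order=1, mode='nearest')
```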

    System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging

    In the past decade, many new X-ray-based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, one or more specific problems prevent each of them from being effectively or efficiently employed. In this dissertation, four novel X-ray-based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For these imaging modalities, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements achieves a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantage of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we describe modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, demonstrating the capability of producing high-quality reconstructed volumetric images at very fast computational speeds. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
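    As an illustration of the total-projection-length idea from the third part: under a linearized, monochromatic attenuation model, constraining the two basis-material thicknesses to sum to a known path length turns the noise-sensitive 2x2 inversion into a stable one-dimensional fit. This is a sketch under that simplifying assumption, not the dissertation's full method.

```python
import numpy as np

def decompose_with_tpl(p_low, p_high, mu, L):
    """Dual-energy two-material decomposition with a total-projection-
    length constraint t1 + t2 = L.

    p_low, p_high : log-domain projections at the two energies
    mu            : 2x2 attenuation coefficients, mu[energy, material]
    L             : known total projection length along the ray
    """
    p = np.array([p_low, p_high])
    a = mu[:, 0] - mu[:, 1]        # sensitivity of the data to t1
    b = p - mu[:, 1] * L           # data residual once t2 = L - t1
    t1 = (a @ b) / (a @ a)         # one-dimensional least-squares fit
    return t1, L - t1
```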