Observations from ground-based telescopes are severely perturbed by the
Earth's atmosphere. Adaptive optics techniques have allowed us to partly
overcome this limitation. However, image
selection or post-facto image reconstruction methods applied to bursts of
short-exposure images are routinely needed to reach the diffraction limit. Deep
learning has recently been proposed as an efficient way to accelerate these
image reconstructions. Currently, these deep neural networks are trained with
supervision, so that either standard deconvolution algorithms need to be
applied a priori or complex simulations of solar magneto-convection need to
be carried out to generate the training sets. Our aim here is to propose a
general unsupervised training scheme that allows multiframe blind deconvolution
deep learning systems to be trained simply with observations. The approach can
be applied for the correction of point-like as well as extended objects.
Leveraging the linear image formation theory and a probabilistic approach to
the blind deconvolution problem produces a physically-motivated loss function.
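The loss function described here can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: it assumes a Fourier-optics pupil model for the per-frame PSFs and the classical closed-form marginalization of the object, so that the loss depends only on the estimated wavefront phases. All function and variable names are illustrative.

```python
import numpy as np

def psf_from_wavefront(phase, aperture):
    # Generalized pupil function -> PSF via Fourier optics
    pupil = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    return psf / psf.sum()

def mfbd_loss(frames, phases, aperture, eps=1e-6):
    """Physically-motivated multiframe blind deconvolution loss (sketch).

    frames : (K, N, N) burst of short-exposure images
    phases : (K, N, N) estimated wavefront phases, one per frame
    """
    # Fourier transforms of the observed frames and of the modeled PSFs (OTFs)
    D = np.fft.fft2(frames, axes=(-2, -1))
    S = np.stack([np.fft.fft2(psf_from_wavefront(p, aperture)) for p in phases])
    # Closed-form least-squares estimate of the common object, given the OTFs
    num = np.sum(np.conj(S) * D, axis=0)
    den = np.sum(np.abs(S) ** 2, axis=0) + eps
    F = num / den
    # Data-fidelity residual: how well the object + OTFs reproduce each frame
    resid = D - S * F
    return np.mean(np.abs(resid) ** 2)
```

In an unsupervised training scheme of the kind proposed, a quantity like `mfbd_loss` would be minimized end-to-end with respect to the network outputs that parametrize the phases, so no ground-truth deconvolved images are ever needed.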
The optimization of this loss function allows an end-to-end training of a
machine learning model composed of three neural networks. As examples, we apply
this procedure to the deconvolution of stellar data from the FastCam instrument
and to solar extended data from the Swedish Solar Telescope. The analysis
demonstrates that the proposed neural model can be successfully trained without
supervision, using only observations. It provides estimates of the
instantaneous wavefronts, from which a corrected image can be recovered using
standard deconvolution techniques. The network model is roughly three orders
of magnitude faster than applying standard deconvolution based on optimization
and shows potential to be used in real time at the telescope.

Comment: 11 pages, 4 figures, accepted for publication in A&