The performance of imaging systems at low light intensity is limited by shot
noise, which becomes increasingly dominant as the power of the light source
decreases. In this paper we experimentally demonstrate the use of deep neural
networks to recover objects illuminated with weak light and demonstrate better
performance than with the classical Gerchberg-Saxton phase retrieval algorithm
at an equivalent signal-to-noise ratio. Prior knowledge about the object is
implicitly contained in the training data set, and feature detection is possible
at a signal-to-noise ratio close to one. We apply this principle to a phase
retrieval problem and show successful recovery of the object's most salient
features with as little as one photon per detector pixel on average in the
illumination beam. We also show that the phase reconstruction is significantly
improved by training the neural network with an initial estimate of the object,
as opposed to training it with the raw intensity measurement.

Comment: 8 pages, 5 figures
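For reference, the classical Gerchberg-Saxton baseline mentioned above alternates between the object and Fourier planes, enforcing the measured amplitude in one plane and a known constraint in the other. The following is a minimal sketch, not the authors' implementation: it assumes a pure-phase object, a single 2D FFT as the propagation model, and a random initial phase guess; all function and parameter names are illustrative.

```python
import numpy as np

def gerchberg_saxton(meas_amp, n_iter=100, rng_seed=0):
    """Sketch of Gerchberg-Saxton phase retrieval.

    meas_amp: measured Fourier-plane amplitude (square root of the
    recorded intensity). Returns an estimate of the object-plane field.
    """
    rng = np.random.default_rng(rng_seed)
    # Initialize the Fourier-plane field with the measured amplitude
    # and a random phase guess.
    phase = rng.uniform(0.0, 2.0 * np.pi, meas_amp.shape)
    field = meas_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        # Back-propagate to the object plane.
        obj = np.fft.ifft2(field)
        # Object-plane constraint: assume a pure-phase object
        # (unit amplitude), keeping only the phase estimate.
        obj = np.exp(1j * np.angle(obj))
        # Forward-propagate and enforce the measured amplitude,
        # keeping the current phase estimate.
        field = np.fft.fft2(obj)
        field = meas_amp * np.exp(1j * np.angle(field))
    return np.fft.ifft2(field)
```

At low photon counts the measured intensity, and hence `meas_amp`, is dominated by shot noise, which is the regime where the paper reports that a trained network outperforms this baseline.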