
    Unified treatment of exact and approximate scalar electromagnetic wave scattering

    Under conditions of strong scattering, a dilemma often arises regarding the best numerical method to use. The main competitors are the Born series, the Beam Propagation Method, and direct solution of the Lippmann-Schwinger equation. However, analytical relationships between the three methods have not, to our knowledge, been explicitly stated. Here, we bridge this gap in the literature. In addition to overall insight about which aspects of optical scattering are best captured numerically by each method, our approach allows us to derive approximate error bounds to be expected under various scattering conditions.
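    As a rough illustration of how two of these methods relate, the sketch below (not taken from the paper; grid, wavenumber, and potential strength are illustrative assumptions) iterates the Born series as a fixed-point solver for the 1D scalar Lippmann-Schwinger equation; the iteration is only expected to converge when scattering is weak, which is the regime boundary at issue above.

        # Minimal sketch, not the paper's implementation: Born-series iteration
        # psi_{m+1} = psi_in + G * (V . psi_m) for the 1D scalar
        # Lippmann-Schwinger equation, with outgoing Green's function
        # G(x) = (i / 2k) exp(i k |x|). All parameters below are assumed.
        import numpy as np

        n, dx, k = 512, 0.05, 2 * np.pi              # grid points, spacing, wavenumber
        x = (np.arange(n) - n // 2) * dx
        psi_in = np.exp(1j * k * x)                  # incident plane wave
        V = 0.2 * np.exp(-x**2)                      # weak, smooth scattering potential (assumed)
        G = (1j / (2 * k)) * np.exp(1j * k * np.abs(x)) * dx   # discretized Green's function

        def convolve(g, f):
            # aperiodic convolution via zero-padded FFTs; keep the central section
            out = np.fft.ifft(np.fft.fft(g, 2 * n) * np.fft.fft(f, 2 * n))
            return out[n // 2 : n // 2 + n]

        psi = psi_in.copy()
        for m in range(50):                          # partial sums of the Born series
            psi_next = psi_in + convolve(G, V * psi)
            if np.linalg.norm(psi_next - psi) < 1e-9 * np.linalg.norm(psi_next):
                break                                # converged (weak-scattering regime)
            psi = psi_next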

    Spectral pre-modulation of training examples enhances the spatial resolution of the Phase Extraction Neural Network (PhENN)

    The Phase Extraction Neural Network (PhENN) is a computational architecture, based on deep machine learning, for lens-less quantitative phase retrieval from raw intensity data. PhENN is a deep convolutional neural network trained on examples consisting of pairs of true phase objects and their corresponding intensity diffraction patterns; thereafter, given a raw test intensity pattern, PhENN is capable of reconstructing the original phase object robustly, in many cases even for objects outside the database from which the training examples were drawn. Here, we show that the spatial frequency content of the training examples is an important factor limiting PhENN's spatial frequency response. For example, if the training database is relatively sparse in high spatial frequencies, as most natural scenes are, PhENN's ability to resolve fine spatial features in test patterns will be correspondingly limited. To combat this issue, we propose "flattening" the power spectral density of the training examples before presenting them to PhENN. For phase objects following the statistics of natural scenes, we demonstrate experimentally that the spectral pre-modulation method enhances the spatial resolution of PhENN by a factor of 2.
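    A minimal sketch of the spectral pre-modulation idea, under the assumption that "flattening" means dividing each training example's Fourier transform by the batch-averaged amplitude spectrum (the exact filter used for PhENN may differ):

        # Hedged sketch: equalize the power spectral density of a training batch
        # so that natural-scene 1/f statistics no longer under-represent high
        # spatial frequencies. 'images' is assumed to be an (N, H, W) array of
        # phase maps; the normalization choices are illustrative.
        import numpy as np

        def flatten_psd(images, eps=1e-6):
            spectra = np.fft.fft2(images, axes=(-2, -1))
            mean_amp = np.abs(spectra).mean(axis=0)    # batch-averaged amplitude spectrum
            flat = spectra / (mean_amp + eps)          # boost under-represented frequencies
            out = np.real(np.fft.ifft2(flat, axes=(-2, -1)))
            # rescale each example back to its original dynamic range
            lo = images.min(axis=(-2, -1), keepdims=True)
            hi = images.max(axis=(-2, -1), keepdims=True)
            out -= out.min(axis=(-2, -1), keepdims=True)
            out /= out.max(axis=(-2, -1), keepdims=True) + eps
            return lo + out * (hi - lo)

        # usage (hypothetical): train the network on flatten_psd(training_phases)
        # rather than on the raw natural-scene phase maps.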

    Media 1: Localized propagation modes guided by shear discontinuities in photonic crystals

    Originally published in Optics Express on 30 October 2006 (oe-14-22-10887).

    Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning

    Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the condition of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is by a static neural network [Goy et al., Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach where the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in angle of illumination plays the role of discrete time in the dynamical system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit for regularizing the reconstructions. We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the fundamental building block. Through comprehensive comparison of several quantitative metrics, we show that the dynamic method improves upon previous static approaches, with fewer artifacts and better overall reconstruction fidelity.
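    A minimal sketch of the recurrent idea, with a plain convolutional GRU standing in for the paper's split-convolutional SC-GRU (whose exact structure is not reproduced here); channel counts, image size, and number of angles are illustrative assumptions:

        # Hedged sketch: a convolutional GRU cell applied along the
        # illumination-angle index, treating the sequence of raw images as the
        # input of a dynamical system whose hidden state is refined angle by angle.
        import torch
        import torch.nn as nn

        class ConvGRUCell(nn.Module):
            def __init__(self, in_ch, hid_ch, k=3):
                super().__init__()
                p = k // 2
                self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update/reset gates
                self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state

            def forward(self, x, h):
                z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
                h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
                return (1 - z) * h + z * h_tilde

        cell = ConvGRUCell(in_ch=1, hid_ch=16)
        h = torch.zeros(1, 16, 64, 64)                 # hidden state (batch, channels, H, W)
        for raw in torch.rand(9, 1, 1, 64, 64):        # 9 illumination angles, dummy raw images
            h = cell(raw, h)                           # each new angle updates the state
        # a decoder head (not shown) would map h to the layered 3D reconstruction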

    Low Photon Count Phase Retrieval Using Deep Learning

    Imaging systems' performance at low light intensity is affected by shot noise, which becomes increasingly strong as the power of the light source decreases. In this paper we experimentally demonstrate the use of deep neural networks to recover objects illuminated with weak light, and demonstrate better performance than the classical Gerchberg-Saxton phase retrieval algorithm at equivalent signal-to-noise ratio. Prior knowledge about the object is implicitly contained in the training data set, and feature detection is possible at a signal-to-noise ratio close to one. We apply this principle to a phase retrieval problem and show successful recovery of the object's most salient features with as little as one photon per detector pixel on average in the illumination beam. We also show that the phase reconstruction is significantly improved by training the neural network with an initial estimate of the object, as opposed to training it with the raw intensity measurement.
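    For context, a minimal sketch of the classical Gerchberg-Saxton baseline against which the learned reconstruction is compared, run on a simulated shot-noise-limited measurement; the photon budget and test object are illustrative assumptions, not the paper's data:

        # Hedged sketch: Gerchberg-Saxton phase retrieval from a Poisson-noisy
        # far-field intensity at ~1 photon per detector pixel on average.
        import numpy as np

        rng = np.random.default_rng(0)
        n, photons_per_pixel = 128, 1.0

        phase_true = np.pi * rng.random((n, n))                # unknown phase object (assumed)
        field = np.exp(1j * phase_true)                        # unit-amplitude object plane
        I_clean = np.abs(np.fft.fft2(field))**2                # noiseless far-field intensity
        I_clean *= photons_per_pixel * I_clean.size / I_clean.sum()   # set the mean photon count
        I_meas = rng.poisson(I_clean).astype(float)            # shot noise (Poisson statistics)

        amp_meas = np.sqrt(I_meas)
        phase_est = np.zeros((n, n))
        for _ in range(200):                                   # Gerchberg-Saxton iterations
            F = np.fft.fft2(np.exp(1j * phase_est))            # forward: impose unit object amplitude
            F = amp_meas * np.exp(1j * np.angle(F))            # replace modulus with the measurement
            phase_est = np.angle(np.fft.ifft2(F))              # back: keep only the retrieved phase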

    Media 3: Wigner functions defined with Laplace transform kernels

    Originally published in Optics Express on 24 October 2011 (oe-19-22-21938).

    Media 1: Phase from chromatic aberrations

    Originally published in Optics Express on 25 October 2010 (oe-18-22-22817).