
    An Extended Virtual Aperture Imaging Model for Through-the-wall Sensing and Its Environmental Parameters Estimation

    Through-the-wall imaging (TWI) radar has received increasing attention in recent years. However, prior knowledge about environmental parameters, such as the wall thickness and dielectric constant and the standoff distance between the array and the wall, is generally unavailable in real applications. Thus, targets behind the wall suffer from defocusing and displacement under conventional imaging operations. To solve this problem, we first set up an extended imaging model of a virtual aperture obtained by a multiple-input multiple-output array; the model accounts for the array's position relative to the wall and is thus more applicable to real situations. We then present a method to estimate the environmental parameters and calibrate the TWI, without requiring multiple measurements or dominant scatterers behind the wall. Simulations and field experiments were performed to illustrate the validity of the proposed imaging model and the environmental parameter estimation method.
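As a concrete illustration of why these wall parameters matter, the sketch below (not the paper's estimator; all values are hypothetical) computes the extra round-trip delay a wall introduces at normal incidence:

```python
# Illustrative sketch only (not the paper's estimation method): the extra
# two-way delay that a wall of thickness d and relative permittivity eps_r
# adds to a radar echo at normal incidence. Uncompensated, this delay is
# what displaces and defocuses targets in the image.
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def wall_excess_delay(d: float, eps_r: float) -> float:
    """Two-way excess propagation delay (seconds) caused by the wall."""
    # Inside the wall the wave travels at c0 / sqrt(eps_r), so over the
    # 2*d round trip the excess over free space is 2*d*(sqrt(eps_r)-1)/c0.
    return 2.0 * d * (eps_r ** 0.5 - 1.0) / C0

# A 20 cm wall with eps_r = 4 adds ~1.3 ns of two-way delay, i.e. roughly
# 0.2 m of apparent extra range if left uncompensated.
delay = wall_excess_delay(0.20, 4.0)
```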

    Deep learning approach to Fourier ptychographic microscopy

    Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both a wide field-of-view (FOV) and high resolution, i.e., a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800-pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss and a weighted Fourier-domain loss, which leads to improved reconstruction of the high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We thank NVIDIA Corporation for supporting this work with a GeForce Titan Xp through the GPU Grant Program.
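The mixed-loss idea can be sketched as follows. The paper does not specify its exact norms or weighting here, so the L1 norms and the scalar weight `alpha` below are assumptions for illustration:

```python
import numpy as np

def mixed_loss(pred: np.ndarray, target: np.ndarray, alpha: float = 0.1) -> float:
    """Image-domain L1 loss plus an alpha-weighted Fourier-domain L1 loss.

    Sketch of the mixed-loss idea: the Fourier term compares the full
    complex spectra, so errors in high-frequency content are penalized
    directly rather than being averaged away in the image domain.
    """
    image_term = np.mean(np.abs(pred - target))
    fourier_term = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return float(image_term + alpha * fourier_term)
```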

    Imaging With Nature: Compressive Imaging Using a Multiply Scattering Medium

    The recent theory of compressive sensing leverages the structure of signals to acquire them with far fewer measurements than was previously thought necessary, and certainly well below the traditional Nyquist-Shannon sampling rate. However, most implementations developed to take advantage of this framework revolve around controlling the measurements with carefully engineered materials or acquisition sequences. Instead, we use the natural randomness of wave propagation through multiply scattering media as an optimal and instantaneous compressive imaging mechanism. Waves reflected from an object are detected after propagation through a well-characterized complex medium. Each local measurement thus contains global information about the object, yielding a purely analog compressive sensing method. We experimentally demonstrate the effectiveness of the proposed approach for optical imaging by using a 300-micrometer-thick layer of white paint as the compressive imaging device. Scattering media are thus promising candidates for designing efficient and compact compressive imagers.
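The measurement principle can be sketched numerically. Below, a random Gaussian matrix stands in for the transmission matrix of the scattering medium (an assumption for illustration; the paper characterizes the real medium experimentally), and a standard iterative soft-thresholding (ISTA) solver recovers a sparse object from few measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3          # object size, measurements (m << n), sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in transmission matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true               # each measurement mixes the whole object

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L  # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)  # sparse estimate consistent with the measurements
```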

    Orbital Angular Momentum Waves: Generation, Detection and Emerging Applications

    Orbital angular momentum (OAM) has attracted widespread interest in many fields, especially in telecommunications, due to its potential for unleashing new capacity in the severely congested spectrum of commercial communication systems. Beams carrying OAM have a helical phase front and a field strength with a singularity along the axial center, which can be used for information transmission, imaging and particle manipulation. The number of orthogonal OAM modes in a single beam is theoretically infinite, and each mode is an element of a complete orthogonal basis that can be employed for multiplexing different signals, thus greatly improving the spectral efficiency. In this paper, we comprehensively summarize and compare the methods for generation and detection of optical OAM, radio OAM and acoustic OAM. We then present the applications and technical challenges of OAM in communications, including free-space optical communications, optical fiber communications, radio communications and acoustic communications. To complete our survey, we also discuss the state of the art of particle manipulation and target imaging with OAM beams.
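The helical phase front and the orthogonality between modes of different topological charge can be illustrated with a small numerical sketch (the grid size and charges below are arbitrary choices):

```python
import numpy as np

def oam_phase_mask(size: int, l: int) -> np.ndarray:
    """Transverse phase profile exp(i*l*theta) of an OAM mode of charge l."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    theta = np.arctan2(y, x)       # azimuthal angle about the beam axis
    return np.exp(1j * l * theta)  # helical phase, singular on the axis

# Distinct charges give (near-)orthogonal fields, which is what allows
# multiplexing independent data streams onto co-propagating OAM modes.
m1, m2 = oam_phase_mask(128, 1), oam_phase_mask(128, 2)
overlap = abs(np.vdot(m1, m2)) / m1.size  # normalized inner product, ~0
```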