
    Optimizing the Temporal and Spatial Resolutions and Light Throughput of Fresnel Incoherent Correlation Holography in the Framework of Coded Aperture Imaging

    Fresnel incoherent correlation holography (FINCH) is a well-established digital holography technique for 3D imaging of objects illuminated by spatially incoherent light. FINCH has a lateral resolution 1.5 times higher than that of direct imaging systems with the same numerical aperture. However, the other imaging characteristics of FINCH, such as axial resolution, temporal resolution, light throughput and signal-to-noise ratio (SNR), are lower than those of a direct imaging system. Different techniques have been developed by researchers around the world to improve the imaging characteristics of FINCH while retaining its inherent higher lateral resolution. However, most of the solutions developed to improve FINCH presented additional challenges. In this study, we optimized FINCH in the framework of coded aperture imaging. Two recently developed computational methods, transport of amplitude into phase based on the Gerchberg-Saxton algorithm (TAP-GSA) and the Lucy-Richardson-Rosen algorithm, were applied to improve light throughput and image reconstruction, respectively. This implementation brought the axial resolution, temporal resolution and SNR of FINCH close to those of direct imaging while retaining the high lateral resolution. A point spread function (PSF) engineering technique was implemented to prevent the low lateral resolution problem associated with PSFs recorded using pinholes with a large diameter. We believe that the above developments are beyond the state of the art of existing FINCH-scopes. (Comment: 13 pages, 9 figures)
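    TAP-GSA is built on the classical Gerchberg-Saxton iteration. As a rough illustration of that underlying idea only (not the authors' TAP-GSA variant, whose details are not given in the abstract), a minimal textbook Gerchberg-Saxton phase-retrieval loop might look like:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=100, seed=0):
    """Classical Gerchberg-Saxton loop: find a phase mask that maps a
    known source amplitude toward a desired far-field amplitude by
    alternately enforcing the amplitude constraint in each plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    field = source_amp * np.exp(1j * phase)
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude, keep phase
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude, keep phase
    return np.angle(field)  # the phase mask to display
```

The returned phase mask is what would be encoded on a modulator; TAP-GSA additionally transports amplitude information into the phase, which this sketch does not attempt.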

    Implementation of a Large-Area Diffractive Lens Using Multiple Sub-Aperture Diffractive Lenses and Computational Reconstruction

    Direct imaging systems that create an image of an object directly on the sensor in a single step are prone to many constraints, as a perfect image is required to be recorded within this step. In designing high-resolution direct imaging systems with a diffractive lens, the outermost zone width either reaches the lithography limit or the diffraction limit itself, imposing challenges in fabrication. However, if the imaging mode is switched to an indirect one consisting of multiple steps to complete imaging, then different possibilities open up. One such method is the widely used indirect imaging method with Golay configuration telescopes. In this study, a Golay-like configuration has been adapted to realize a large-area diffractive lens with three sub-aperture diffractive lenses. The sub-aperture diffractive lenses are not required to collect light and focus it to a single point as in a direct imaging system, but to focus independently on different points within the sensor area. This approach of a Large-Area Diffractive lens with Integrated Sub-Apertures (LADISA) relaxes the fabrication constraints and allows the sub-aperture diffractive elements to have a larger outermost zone width and a smaller area. The diffractive sub-apertures were manufactured using photolithography. The fabricated diffractive element was implemented in indirect imaging mode using non-linear reconstruction and the Lucy–Richardson–Rosen algorithm with synthesized point spread functions. The computational optical experiments revealed improved optical and computational imaging resolutions compared to previous studies.

    Single-Shot 3D Incoherent Imaging Using Deterministic and Random Optical Fields with Lucy–Richardson–Rosen Algorithm

    Coded aperture 3D imaging techniques have been rapidly evolving in recent years. The two main directions of evolution are in aperture engineering to generate the optimal optical field and in the development of a computational reconstruction method to reconstruct the object’s image from the intensity distribution with minimal noise. The goal is to find the ideal aperture–reconstruction method pair, and if not that, to optimize one to match the other for designing an imaging system with the required 3D imaging characteristics. The Lucy–Richardson–Rosen algorithm (LR2A), a recently developed computational reconstruction method, was found to perform better than its predecessors, such as the matched filter, inverse filter, phase-only filter, Lucy–Richardson algorithm, and non-linear reconstruction (NLR), for certain apertures when the point spread function (PSF) is a real and symmetric function. For other cases of PSF, NLR performed better than the rest of the methods. In this tutorial, LR2A is presented as a generalized approach for any optical field with a known PSF, along with MATLAB codes for reconstruction. The common problems and pitfalls in using LR2A are discussed. Simulation and experimental studies for common optical fields such as spherical, Bessel, and vortex beams, and exotic optical fields such as Airy, scattered, and self-rotating beams, are presented. From this study, it can be seen that it is possible to faithfully transfer the 3D imaging characteristics from non-imaging-type exotic fields to indirect imaging systems using LR2A. The application of LR2A to medical images, such as colonoscopy images and cone beam computed tomography images with synthetic PSFs, has been demonstrated. We believe that the tutorial will provide a deeper understanding of computational reconstruction using LR2A.
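    The tutorial's own MATLAB codes are not reproduced here. As a hedged Python sketch of LR2A as it is commonly described (a Lucy–Richardson update whose back-projection step is replaced by a non-linear correlation), with function names and default power values chosen for illustration:

```python
import numpy as np

def nlr_correlate(a, b, alpha=0.0, beta=0.9):
    """NLR-style correlation: raise the spectral magnitudes of the two
    inputs to tunable powers before correlating (illustrative defaults)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    s = (np.abs(A) ** beta * np.exp(1j * np.angle(A)) *
         np.abs(B) ** alpha * np.exp(-1j * np.angle(B)))
    return np.abs(np.fft.ifft2(s))

def lr2a(intensity, psf, iterations=20, alpha=0.0, beta=0.9):
    """Lucy-Richardson-Rosen sketch: the classical LR ratio update,
    with the back-projection replaced by nlr_correlate."""
    recon = intensity.astype(float).copy()
    otf = np.fft.fft2(psf)
    for _ in range(iterations):
        forward = np.abs(np.fft.ifft2(np.fft.fft2(recon) * otf))  # recon convolved with psf
        ratio = intensity / (forward + 1e-12)                     # LR ratio term
        recon = recon * nlr_correlate(ratio, psf, alpha, beta)    # NLR back-projection
    return recon
```

The powers alpha and beta play the same tuning role as in NLR; the choice here of alpha = 0, beta = 0.9 is only a plausible starting point, not a value taken from the tutorial.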

    Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm

    A refractive lens is one of the simplest, most cost-effective and most easily available imaging elements. Given spatially incoherent illumination, a refractive lens can faithfully map every object point to an image point in the sensor plane when the object and image distances satisfy the imaging conditions. However, static imaging is limited to the depth of focus, beyond which the point-to-point mapping can only be obtained by changing the location of the lens, the object or the imaging sensor. In this study, the depth of focus of a refractive lens in static mode has been expanded using a recently developed computational reconstruction method, the Lucy-Richardson-Rosen algorithm (LRRA). The imaging process consists of three steps. In the first step, point spread functions (PSFs) were recorded along different depths and stored in the computer as a PSF library. In the next step, the object intensity distribution was recorded. In the final step, the LRRA was applied to deconvolve the object information from the recorded intensity distributions. The results of LRRA were compared with two well-known reconstruction methods, namely the Lucy-Richardson algorithm and non-linear reconstruction.
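    The three-step pipeline (record a PSF library, record the object intensity, deconvolve) can be sketched as below. This is only an illustration under stated assumptions: the abstract does not say how the best depth is selected, so image variance is used here as a stand-in focus metric, and `matched_filter` stands in for the actual LRRA deconvolution.

```python
import numpy as np

def matched_filter(intensity, psf):
    """Plain cross-correlation deconvolution; a placeholder for LRRA."""
    return np.abs(np.fft.ifft2(np.fft.fft2(intensity) *
                               np.conj(np.fft.fft2(psf))))

def refocus_with_library(intensity, psf_library, deconvolve=matched_filter):
    """Deconvolve the recorded intensity with each depth's PSF and keep
    the sharpest result (variance as an assumed focus metric)."""
    best_depth, best_recon, best_score = None, None, -np.inf
    for depth, psf in psf_library.items():
        recon = deconvolve(intensity, psf)
        score = np.var(recon)  # concentrated energy -> higher variance
        if score > best_score:
            best_depth, best_recon, best_score = depth, recon, score
    return best_depth, best_recon
```

In practice any reconstruction method and focus criterion could be substituted; the point is only that a single static recording plus a pre-recorded PSF library replaces mechanical refocusing.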

    Nonlinear Reconstruction of Images from Patterns Generated by Deterministic or Random Optical Masks—Concepts and Review of Research

    Indirect-imaging methods involve at least two steps, namely optical recording and computational reconstruction. The optical-recording process uses an optical modulator that transforms the light from the object into a typical intensity distribution. This distribution is numerically processed to reconstruct the object’s image corresponding to different spatial and spectral dimensions. Numerous optical-modulation functions and reconstruction methods have been developed in the past few years for different applications. In most cases, a compatible pair of optical-modulation function and reconstruction method gives optimal performance. A new reconstruction method, termed non-linear reconstruction (NLR), was developed in 2017 to reconstruct the object image in the case of optical-scattering modulators. Over the years, it has been revealed that NLR can reconstruct an object’s image modulated by axicons, bifocal lenses and even exotic spiral diffractive elements, which generate deterministic optical fields. Apparently, NLR seems to be a universal reconstruction method for indirect imaging. In this review, the performance of NLR is investigated for many deterministic and stochastic optical fields. Simulation and experimental results for different cases are presented and discussed.
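    A minimal sketch of NLR as commonly described: a cross-correlation of the recorded intensity with the PSF in which both spectral magnitudes are raised to tunable powers. The default powers below are a typical starting point for tuning, not values taken from this review.

```python
import numpy as np

def nlr(intensity, psf, alpha=0.0, beta=0.6):
    """Non-linear reconstruction: correlate the recorded intensity with
    the PSF after raising the spectral magnitudes to powers beta and
    alpha respectively (alpha = 0, beta = 0.6 as illustrative defaults)."""
    O = np.fft.fft2(intensity)
    P = np.fft.fft2(psf)
    spectrum = (np.abs(O) ** beta * np.exp(1j * np.angle(O)) *
                np.abs(P) ** alpha * np.exp(-1j * np.angle(P)))
    return np.abs(np.fft.ifft2(spectrum))
```

Setting alpha = 1 and beta = 1 reduces this to an ordinary matched filter, and alpha = -1 to an inverse filter, which is why the two powers can be swept to trade resolution against noise for a given modulator.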